
Scanner versus Human Logic

Let's dig a bit deeper into the differences between manual work and the use of a scanning tool.

Herman Stevens

After a long career as an information security professional, Herman is now director of Astyran Pte Ltd (Singapore). At Astyran, Herman leads a team that assesses applications through standards-compliant vulnerability assessments, helps companies build secure applications by performing secure design and secure code reviews, helps businesses build security into their software development lifecycle (SDLC), and trains development teams to make them aware of common security defects and how to avoid them.


A few months ago I was invited to work on an application security assessment (a standard web application vulnerability assessment plus a manual code review) of a large Java application, comprising millions of lines of code, on top of a commercial application server. The development company, my customer, was contractually obliged by their customer to commission an independent assessment of the security posture of the new version of this application. I was invited only two weeks before the new version was planned to go live.

The developers were very confident that I would not find much; after all, they had already implemented several security measures in their software development lifecycle (SDLC):

  • For more than six years, every new build of the application had been scanned with a fully automated commercial web application scanner. All issues, even low-rated ones, were fixed immediately.

  • For the past year, all code had also been scanned with a commercial source code analyzer. Again, the developers told me that all issues had been fixed.

I had my doubts, and they allowed me to run an independent scan with the same commercial tools. While these scans were running, I started a “manual” assessment using my favorite tool, Burp Pro. I do use the automated scanning capabilities of Burp, but I monitor the output closely and guide the scanner or perform additional manual tests when I suspect a security issue.

Two hours later, I had listed dozens of critical issues; basically the whole OWASP Top Ten was present: Cross-Site Scripting, SQL Injection, XML Injection, Cross-Site Request Forgery, Unvalidated Redirects, Malicious File Upload…

What happened? I looked at the output of the commercial web application scanner: not one issue detected! The static code analyzer, apart from the usual 4,000 or so false positives, did detect some of the issues.

I dug deeper into the source code: the application had a centralized input validation component, which in itself is a very good design decision. However, every signature of the web application scanner had been blacklisted! Imagine blacklisting strings such as alert(, 1 AND 1, or example.com...
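To make the anti-pattern concrete, here is a minimal sketch (in Python, purely for illustration; the real application was Java) of what such a signature blacklist boils down to:

```python
# Illustrative sketch of the anti-pattern described above: a central
# "validator" that simply blacklists the payloads a scanner is known to send.
SCANNER_SIGNATURES = [
    "alert(",       # classic XSS probe
    "1 and 1",      # classic SQL injection probe
    "example.com",  # domain commonly used in scanner payloads
]

def is_valid(user_input: str) -> bool:
    """Reject any input that contains a known scanner signature."""
    lowered = user_input.lower()
    return not any(signature in lowered for signature in SCANNER_SIGNATURES)
```

A filter like this silences the scanner, but an attacker simply picks a payload that is not on the list; the underlying injection flaws remain.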

No wonder the poor scanner could not detect anything wrong with the application. Do not blame the development team, however: once an application enters the maintenance phase, it is quite common that fixes for security and ordinary bugs are implemented alongside new features driven by change requests from the customer or product owner.

Bugs are fixed within the maintenance budget, while new features are developed under a new budget. Pressure is put on the development team to use as little as possible of the maintenance budget, so bugs are “fixed” in the cheapest and fastest way possible.

SDLC: Never let the development team decide whether a security fix is adequate. This must be assessed by someone independent of the development organization.

An additional problem: whenever the application detected a blacklisted pattern in the input, you were forcibly logged out. Although the scanner offers the possibility to detect when a session has ended, this had not been configured by the development team, and I did not do it either in my first run.

The resulting report showed no issues, but the scanner had not actually performed any real scanning.

Scanners are advanced and powerful tools. Ideally, they should be tuned to the application. Monitor the scanner during a run and improve the configuration when needed.
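Burp and ZAP can both be configured to recognize a terminated session and log in again. If you script checks yourself, the same idea looks roughly like the sketch below, using Python and the requests library; the target URL, login endpoint, form field names, and logged-out marker are all hypothetical and need to be adapted to the real application:

```python
import requests

BASE_URL = "https://target.example"      # hypothetical target
LOGIN_URL = f"{BASE_URL}/login"          # hypothetical login endpoint
LOGGED_OUT_MARKER = "Please sign in"     # text that only appears on the login page

def login(session: requests.Session) -> None:
    # Hypothetical credentials and field names.
    session.post(LOGIN_URL, data={"user": "tester", "password": "secret"})

def fetch(session: requests.Session, path: str) -> requests.Response:
    """Request a page and re-authenticate if the application logged us out."""
    response = session.get(BASE_URL + path)
    if LOGGED_OUT_MARKER in response.text:
        # The blacklist (or a timeout) killed the session: log in again and retry once.
        login(session)
        response = session.get(BASE_URL + path)
    return response
```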

Let's dig a bit deeper into the differences between manual work and the use of a scanning tool.

Types

Wikipedia gives a decent overview of web application scanners. These are tools that are pointed at a web application in order to find typical application-level vulnerabilities such as SQL Injection (SQLi), Cross-Site Scripting (XSS), Cross-Site Request Forgery (CSRF) or Insecure Direct Object References (IDOR).

A more formal overview of web application vulnerabilities is given by the Common Weakness Enumeration (CWE), and a formal overview of attack methods can be found in the Common Attack Pattern Enumeration and Classification (CAPEC). The Open Web Application Security Project (OWASP) provides the Testing Guide, which offers a detailed approach to testing for these vulnerabilities.

The OWASP Testing Guide contains more than 200 pages, so as a beginning bug bounty hunter you have a lot to learn. Can tools help, and do they miss vulnerabilities?

Basically, there are two different types of scanners, or rather, two different types of usage:

  • A “fire-and-forget” scanner such as Arachni. Just point the scanner at the web application and collect the report later.

  • Scanners which require manual tweaking and configuration, such as OWASP ZAP and Burp Suite Professional.

Many more public domain or commercial tools are available. The problem is that even the scanners that require manual tweaking and configuration are often run in a “fire-and-forget” mode.

Coverage

Usually I start an assessment by exploring the application, signing on with different roles, filling in all the forms, and doing some quick manual tests to get an idea of the protective mechanisms in place. For each role I browse the complete application, or at least what the interface shows me. Afterwards I use the spider functionality of the scanner to discover even more.

Is this enough to get an idea of the attack surface of the application? Can a tool give me the complete attack surface without me clicking on every link?

Speaking from experience, a “fire-and-forget” scanner will fail miserably. The tool has no knowledge of what is expected in the form fields, might run undetected into rate-limiting, might be logged out, might even have trouble getting the correct anti-CSRF token, … Yes, it is possible to configure most scanners with default values to enter in the form fields, but this is a lot of work. While you are doing this, the other bug bounty hunters will already have found most of the vulnerabilities.

Using Burp Pro or OWASP ZAP is slightly better: scans will take into account what you filled into the form fields earlier. However, they still don't know anything about the context of the application and will still fail to find everything. Multiple forms that must be filled in one by one are particularly challenging.

Even if you manage to manually browse the complete application, as available in the GUI, it is still possible to miss large parts of the attack surface. Modern applications use tens of thousands of lines of client-side JavaScript. It is necessary to look at that code to detect unknown routes in the application (e.g. a /superAdmin route, or the potential use of a debug parameter or cookie).

Even then, some routes might still be hidden. To complete the assignment, brute-forcing popular routes might be necessary. Here a scanner will speed up the process.
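A minimal sketch of such route brute-forcing, assuming a hypothetical target and a tiny starter wordlist (a real run would use a large, curated wordlist and respect rate limits):

```python
import requests

BASE_URL = "https://target.example"   # hypothetical target
# Tiny starter wordlist; in practice use a large, curated one.
COMMON_ROUTES = ["/admin", "/superAdmin", "/debug", "/api/v1/users", "/backup"]

def brute_force_routes(session: requests.Session) -> list[str]:
    """Return every route that does not answer with 404."""
    found = []
    for route in COMMON_ROUTES:
        response = session.get(BASE_URL + route, allow_redirects=False)
        if response.status_code != 404:
            found.append(f"{route} -> {response.status_code}")
    return found

print("\n".join(brute_force_routes(requests.Session())))
```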

Are we there yet? No. An application can misbehave (e.g. requesting a route with an .html extension instead of .json might force the output to text/html and suddenly make XSS possible). Did you test all HTTP methods, such as PUT, PATCH, and DELETE? Most spiders will not...
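Probing those variations is easy to automate once the routes are known. A hedged sketch, again with a hypothetical target and route:

```python
import requests

BASE_URL = "https://target.example"   # hypothetical target
ROUTE = "/api/v1/profile"             # hypothetical JSON route

session = requests.Session()

# Methods that spiders rarely exercise.
for method in ("PUT", "PATCH", "DELETE", "OPTIONS"):
    response = session.request(method, BASE_URL + ROUTE)
    print(method, response.status_code)

# Swapping the extension may change the Content-Type of the response,
# e.g. from application/json to text/html, which can open the door to XSS.
for suffix in ("", ".json", ".html"):
    response = session.get(BASE_URL + ROUTE + suffix)
    print(suffix or "(no extension)", response.headers.get("Content-Type"))
```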

A few months ago I worked on a private bounty for Cobalt together with another bounty hunter. The goal was simple: we got a user-id and password, but not the 2FA token for the application. Could we:

  1. Access the interface behind the 2FA (without accessing the data of the user)?
  2. Get access to the data too?

Since we only had a login form, I performed the usual manual and automated tests for XSS, SQLi, etc. I tried some popular paths (/user, /admin, /patients, /records...) but none seemed to work. We knew that the 2FA worked similarly to Google's: if you log in from a new browser or a new location, you need the 2FA token. Since we did not know the last location or browser used by our test login, I ran some very extensive tests with thousands of different user-agents. None worked.

I thought about calling it a day and looked at my Burp session log. To my surprise, the full routes to the interface were present! I tried the same routes through my browser: access refused. I scratched my head and did some more research. The routes had been detected by spidering the application, and the reason was simple: the Burp spider (and most other spiders) uses the HTTP Accept header with the value */*, while using the browser results in a value such as text/html. The application had not foreseen this: the first value was not blocked by the 2FA, the second was.
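That difference is easy to reproduce yourself. A minimal sketch (hypothetical URL and route) that compares a browser-like Accept header with the wildcard most spiders send:

```python
import requests

BASE_URL = "https://target.example"   # hypothetical target
ROUTE = "/records"                    # hypothetical route behind the 2FA

# Browser-like Accept header versus the wildcard most spiders send.
for accept in ("text/html,application/xhtml+xml", "*/*"):
    response = requests.get(BASE_URL + ROUTE, headers={"Accept": accept})
    print(f"Accept: {accept:35} -> {response.status_code}")
```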

In reality it takes time to do a thorough test. When I am on a bug bounty, time matters, otherwise there will be no reward. At the start of a bounty I usually do a quick manual review in the hope that I can file a report before anyone else. When that fails, I try to delve deeper using automated tools and some grey matter.

Humans need automation to help exploring a large or complex application. Scanners might help but often miss large parts of the attack surface. Manual review of client-side JavaScript is needed, as might some brute-forcing of popular routes.

Environments

You might work on an application in production, in UAT (user acceptance testing), or in a development environment; each of these can present its own difficulties when using an automated scanner. Some examples, either experienced by me or related to me by other researchers:

  • A scan on a production system managed to delete all administrative accounts. The application needed to be reinstalled from scratch to reinstate the admin users. The service was down for sixteen hours.

  • A scan on a system in UAT resulted in thousands of text messages sent by the alerting system. Due to a networking issue, the UAT system was connected to the production security monitoring and alerting service. Two days later the security team were still receiving text messages on their mobile phones. Note that this could have resulted in the team missing a real attack.

  • A scan hit the contact form on a public website. This form was connected to a large CRM (customer relationship management) application, and thousands of new issues were created automatically. The CRM system, however, offered no way to delete those fake issues in bulk. A very unhappy support team spent several hours deleting the issues one by one.

Never run a scan while you have administrative access, or at least think twice. Carefully consider the potential consequences for back-end systems.

Understanding of Context and Design

Not all security issues are technical issues. A scanner does not really know the context of the application or which information is important. Design errors can go unnoticed.

A customer had an application to test the knowledge of their large engineering team. The application would present 100 questions, the engineer would be scored, and this score was taken into consideration for the yearly bonus.

The application was built around a large Flash component. At the start of the process an XML file was sent to this client-side component. The XML file contained not only the 100 questions but also the 100 answers, so it was trivial for savvy engineers to achieve high scores. No automated scan would have detected this.

At the end of the test, the score was sent to the server. The request contained the user-id and the score, and it was possible to tamper with both. Malicious engineers could give themselves a high score or give rivals a low score. Again, an automated scan would not have detected this.

Security issues can be caused by errors in implementation or in design, or can depend heavily on context. Design and context-related issues are typically not detected by automated scans.

Authorization

Authorization issues are typically very time-consuming to test manually. Automation would be very beneficial, but unfortunately the current tools are woefully inadequate: even with extensive manual configuration, scanners detect only some authorization issues.

Think about an IDOR situation: you have some parameter in the request that is linked directly to the user-id. Since the scanner does not know anything about the context, it is impossible for it to detect that you succeeded in requesting another user's object.

Again, manual review is necessary.

I usually use the Burp “Session Compare” functionality to detect functions that are only available to certain roles. However, this might be insufficient, and some testing might require a lot of scripting, especially when anti-CSRF tokens are used.
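That scripting usually amounts to replaying one user's requests with another user's session and diffing the responses. A hedged sketch, assuming a hypothetical object route and two pre-authenticated requests sessions:

```python
import requests

BASE_URL = "https://target.example"       # hypothetical target
OBJECT_ROUTE = "/api/v1/invoices/{id}"    # hypothetical IDOR candidate

def replay_as_other_user(object_ids, session_a, session_b):
    """Request user A's objects with user B's session and flag suspicious replies."""
    for object_id in object_ids:
        url = BASE_URL + OBJECT_ROUTE.format(id=object_id)
        response_a = session_a.get(url)
        response_b = session_b.get(url)
        # A human still has to judge whether the data really belongs to user A;
        # the script only narrows down the candidates.
        if response_b.status_code == 200 and response_b.text == response_a.text:
            print(f"Possible IDOR: {url}")
```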

Authorization testing is time-consuming. Automated scanners only offer limited help.

Framework or Library Weaknesses

Another thing an automated scanner might miss is weaknesses in the frameworks or extensions used. Although our favorite manual tool, Burp, includes the RetireJS extension, this is far from complete. A typical installation might include hundreds of externally downloaded functions, scripts, libraries or components.

An interesting example is the popular jQuery extension DataTables. Older versions of this extension included vulnerable Flash components in the TableTools library. These Flash components were nothing more than renamed versions of the ZeroClipboard Flash component, and ZeroClipboard has had its fair share of XSS (Cross-Site Scripting) issues.

The application I tested did have the latest DataTables extension, but I still went to GitHub to see if there were any open issues. I found that older versions used the renamed ZeroClipboard component, and indeed, although the library itself had been upgraded, the developers had not removed the vulnerable Flash components. Instant XSS!

My current methodology includes a review of all client-side JavaScript for potentially vulnerable components or libraries. Creating your own list of interesting file names and common directory structures might let you automate this, as in the sketch below.
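A hedged sketch of that automation, with a hypothetical target and a few illustrative component paths (build your own list from past findings):

```python
import requests

BASE_URL = "https://target.example"   # hypothetical target
# Illustrative paths of old or vulnerable components; extend with your own list.
SUSPECT_FILES = [
    "/media/swf/copy_csv_xls.swf",    # old DataTables TableTools Flash helper
    "/swf/ZeroClipboard.swf",         # vulnerable ZeroClipboard builds
    "/js/jquery-1.7.2.min.js",        # outdated jQuery
]

def check_components(session: requests.Session) -> None:
    """Report which suspect component files the target still serves."""
    for path in SUSPECT_FILES:
        response = session.head(BASE_URL + path, allow_redirects=False)
        if response.status_code == 200:
            print(f"Present: {path}")

check_components(requests.Session())
```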

Review all components in use. Old vulnerabilities might still be present or newer issues might not be fixed yet.

Developer backdoors

A special case is developer backdoors. In most cases, they can only be detected by reviewing the server-side code. Automated static code analyzers will not detect them, but might offer some assistance by flagging terms such as password in the code.
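When you do have the source, even a trivial keyword sweep can point out where to start reading. A minimal sketch, assuming a Java code base in a local src directory:

```python
from pathlib import Path

# Terms worth flagging during a manual source review; tune per code base.
SUSPECT_TERMS = ("password", "backdoor", "superadmin", "bypass")

def grep_source(root: str) -> None:
    """Print every Java line that mentions a suspect term (case-insensitive)."""
    for source_file in Path(root).rglob("*.java"):
        for number, line in enumerate(source_file.read_text(errors="ignore").splitlines(), 1):
            if any(term in line.lower() for term in SUSPECT_TERMS):
                print(f"{source_file}:{number}: {line.strip()}")

grep_source("src")   # hypothetical source directory
```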

A sensitive application I reviewed used three-factor authentication: user-id and password, token and PIN, and finally fingerprinting. Very early in the review, I found that entering “xxx” in the user-id and password fields bypassed the second and third authentication steps and enabled administrative access. This had been implemented to enable remote support, but was unknown to the customer!

A web application scanner would never detect this. But I do have one example where a backdoor was detected simply by looking at the client-side JavaScript. Most modern applications will reveal the routes that implement functionality in the application. By looking at the client-side code I noticed that a route was only accessible (and shown in the interface) when the user-id was superadmin. This is of course a design error; there should be no reason for client-side code to reveal this. After a quick call to the hidden route and some tampering with cookie variables, I was in as super administrator. This function was not really a backdoor, but had been implemented to support installation.

Detecting backdoors without having access to the code is nearly impossible. Automated scanners are not really helpful.

Conclusion

In reality, every bug bounty researcher will run automated scans or at least use some automation to help cope with the size of an application. This is absolutely necessary when you want a full review of the security posture of an application. Manual tests in a limited time-frame might not offer the full picture.

It helps of course when multiple bounty hunters are working on the same application; the final coverage might be better.

There might be plenty of reasons for a customer not to allow automated scans, or for a researcher to be cautious when using automation. The question of whether a scanner can provide better results than a single researcher might not be answerable in black-and-white terms, but is better described in more than fifty shades of grey.

What are your success (or horror) stories when using automated scans? Let us know: Twitter.com/cobalt_io

Also, visit our PtaaS overview to learn more about penetration testing with Cobalt's Pentest as a Service (PtaaS) platform.
