Like in most professional sectors, penetration testers ask themselves whether machines are capable of taking over their job. In many sales pitches, customers are told that manual web security assessments are no longer needed because automated tools will do the same job cheaper. These pitches refer to so-called Web Application Security Scanners: programs that automatically scan web applications for security vulnerabilities. In this post, we will review the effectiveness and accuracy of web application scanners.
A few years ago, a small webshop owner had his application tested by a service provider. He selected the cheapest variant, an “automated web penetration test”, and received a report listing some vulnerabilities, which he fixed. The shop owner felt safe. At some point, however, he noticed irregularities in the prices in the backend. He therefore hired another consulting company, which performed a manual web security audit. Among other things, they discovered that a customer could change the price of a product while placing items in the shopping cart: in the corresponding HTTP POST request, the price of the article could simply be modified. Consequently, many orders had been placed at manipulated prices. The manual security audit also identified various other vulnerabilities.
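To illustrate the kind of tampering described above, here is a minimal sketch in Python. The parameter names (`item_id`, `price`, `qty`) are assumptions for illustration, not the actual shop’s API; in practice the attacker would rewrite the request body in an intercepting proxy.

```python
from urllib.parse import urlencode, parse_qs

# Hypothetical add-to-cart request body as the shop's frontend would send it.
# Parameter names are made up for this sketch.
original_body = urlencode({"item_id": "1337", "price": "49.90", "qty": "1"})

# The attacker intercepts the POST request (e.g. with a proxy) and simply
# rewrites the client-supplied price before forwarding it to the server.
params = parse_qs(original_body)
params["price"] = ["0.01"]
tampered_body = urlencode(params, doseq=True)

print(tampered_body)  # item_id=1337&price=0.01&qty=1
```

The underlying flaw is that the server trusts a client-supplied price; the fix is to look the price up server-side from the item identifier and ignore any price field sent by the client.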
Back in 2010, a great paper was released: “Why Johnny Can’t Pentest: An Analysis of Black-box Web Vulnerability Scanners”. The paper tried to determine how accurate web security scanners really are. For this, the authors built a purposely vulnerable web application called WackoPicko. The goal was to run various renowned web security scanners against this vulnerable application and compare the results. Even back then the scanners produced some good results, but several vulnerabilities were simply not found.
Seven years have passed since that benchmark, which is a long time in the IT realm. A question came to my mind: what if I reproduced the benchmark with current tools, would the results be better? It is a valid question in these days of rapid technical development: maybe scanners have matured to the point where they can compete with a manual test, as some vendors claim. To answer it, I reproduced the test setup.
For this purpose, the WackoPicko test suite was set up on a virtual machine and a snapshot was created. After each scan, the virtual machine was reset to the snapshot state, ensuring that the starting position was always the same. Unfortunately, not all major web vulnerability scanners could be part of this benchmark, since some vendors did not get back to us with testing licences.
This blog entry will try to answer the following question: do we still need manual web security assessments in 2017?
When comparing the results of the new benchmark with those from 2010, it is quite clear that they have not improved much. Only the built-in command injection was found by almost every vulnerability scanner; other vulnerabilities remain undetected.
Comparison table 
Limitations of automated tools
Based on this new benchmark, I tried to understand why the scanners could not find all the vulnerabilities. The two main reasons are workflow logic and undefined permissions. Here are two examples to illustrate these limitations.
Stored SQL Injection
The WackoPicko test application contains a registration form. It is possible to store SQL statements in the “First Name” field; these are executed when the user accesses the page “similar.php”.
Two steps have to be executed in the right order to identify (and exploit) this SQL injection. First, the web vulnerability scanner has to create an account and store the SQL statement. Then, the page “similar.php” has to be accessed while logged in as the created account; only then could a scanner identify the “possible SQL injection” based on the server response and the resulting SQL stack trace. This entire process is too complex for the web vulnerability scanners tested in our benchmark.
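The two-step nature of the flaw can be sketched with an in-memory sqlite3 database. The schema, function names, and payload below are assumptions that merely model WackoPicko’s behaviour: the malicious first name is stored harmlessly in step one, and only fires when a later page interpolates it into a new query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, first_name TEXT);
    CREATE TABLE comments (id INTEGER PRIMARY KEY, author TEXT, text TEXT);
""")

def register(first_name):
    # Step 1: the payload is stored via a parameterised INSERT.
    # Nothing suspicious is observable in the response at this point.
    conn.execute("INSERT INTO users (first_name) VALUES (?)", (first_name,))

def similar_page(user_id):
    # Step 2 (modelled after similar.php): the stored value is later
    # interpolated into a fresh query -- this is where the injection fires.
    name = conn.execute(
        "SELECT first_name FROM users WHERE id = ?", (user_id,)
    ).fetchone()[0]
    return conn.execute(
        f"SELECT text FROM comments WHERE author = '{name}'"
    ).fetchall()

register("Alice' oops")  # malicious "First Name" with a quote-breaking payload

try:
    similar_page(1)
    error = None
except sqlite3.OperationalError as e:
    error = str(e)

print("SQL error on the second page:", error)
```

A scanner that only fuzzes each form in isolation sees a clean response to the registration request; it would have to log in as its own freshly created account and revisit the second page to observe the error.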
Just like in 2010, no scanner was able to find this vulnerability.
Insecure Direct Object Reference
Insecure Direct Object References are a typical class of vulnerability that web scanners struggle to find, because scanners are not aware that the data they retrieved was actually meant to be restricted. A tool could only flag this as an issue if it knew who is supposed to access what. While a human assessor identifies this naturally, an automated tool has no clue.
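A minimal sketch of the problem, with made-up data and handler names: from a scanner’s point of view both handlers below return perfectly valid responses, and nothing in the response reveals which user was supposed to have access.

```python
# Hypothetical invoice store; the data and routes are illustrative only.
invoices = {
    1: {"owner": "alice", "total": "120.00"},
    2: {"owner": "bob",   "total": "999.99"},
}

def get_invoice_vulnerable(invoice_id, session_user):
    # Vulnerable handler: the id from the URL is used directly,
    # without checking that the invoice belongs to session_user.
    return invoices[invoice_id]

def get_invoice_fixed(invoice_id, session_user):
    # Fixed handler: enforce ownership before returning the record.
    invoice = invoices[invoice_id]
    if invoice["owner"] != session_user:
        raise PermissionError("not your invoice")
    return invoice

# Alice retrieves Bob's invoice simply by incrementing the id:
leaked = get_invoice_vulnerable(2, session_user="alice")
print(leaked["total"])  # 999.99 -- data Alice should never see
```

Only a tester who knows the intended access model (Alice may see invoice 1, but not invoice 2) can recognise the first handler as broken; the HTTP responses alone give an automated tool nothing to flag.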
These examples clearly illustrate that the greatest limitation is the lack of human intelligence. There are simply too many vulnerabilities that require a certain logic to discover, and this cannot be achieved by web vulnerability scanners at the moment.
When using web vulnerability scanners, it should always be kept in mind that they are unable to find certain types of weaknesses; this follows directly from their lack of contextual logic. Web vulnerability scanners should therefore be used to support a web penetration test instead of being used alone. This is especially useful for larger applications, where a penetration tester cannot cover the entire application by hand. Still, with the technology available today, a web vulnerability scanner is not yet capable of replacing a manual security audit by a professional penetration tester.
Another problem is the interpretation of the report. Customers buy a web vulnerability scanner to test their products and applications, but some of them are not able to interpret the results at all. The scanner may report a “possible SQL injection”; an inexperienced user will probably not recognise this as a problem, whereas an experienced penetration tester can take this information and the corresponding requests and verify whether it really is an exploitable vulnerability. Users of web vulnerability scanners should be aware that, without expertise in this field, it can be very difficult to interpret a scanner’s results and judge whether the findings are problematic or not.
Commercial vs open-source software
The evaluation we carried out internally includes both commercial products and open-source tools. The open-source tool Arachni produced results that are in no way inferior to those of the commercial software. However, the tested commercial software tends to be easier to install and operate, usually offering a well-structured interface, whereas Arachni is less intuitive: its configuration in particular requires some time to get acquainted with. The same goes for the reports: the commercial software scores with detailed and mostly well-comprehensible results, while Arachni presents vulnerabilities in a rather factual manner.
In my opinion, open-source web vulnerability scanners are well suited for experienced testers, for example in a penetration test where the tester also wants to run an automated scanner to achieve high coverage. Inexperienced users, in contrast, should prefer commercial software. I do not want to make specific purchase recommendations here; it is best to simply install and test the vendors’ demos.
At this point, I do not need to worry about my job as a penetration tester. Even after seven years, there are vulnerabilities that web vulnerability scanners cannot find, and I do not expect any huge changes in the near future. For those who want to test the security of their web application, I would still recommend a web vulnerability scanner as an initial check, combined with a manual penetration test.
Man still beats machine.