A question that comes up frequently in my work as a DevOps / Agile Security consultant is: how do we integrate security test automation into our environments? There is a lot of information on automated security testing tools, but unfortunately, the vast majority of it is written by the companies that produce and sell them, and their literature paints a very healthy picture of how the tools integrate. In September 2019, I attended a DevSecOps event in London at which a small number of application security testing vendors were invited to talk about how they secure their own products. In most of the talks, the vendor explained how their products are used to protect their products, painting a very rosy picture of how their development teams use the tools to secure their commercial offerings. Yet life isn't as simple as it was presented to us during the event. This article is part one of a series I've written to try to make sense of the tools out there. I start with Static Application Security Testing (SAST) tools in this post; in future posts, I will cover Dynamic Application Security Testing (DAST), Interactive Application Security Testing (IAST), Software Composition Analysis (SCA) and, finally, Container Scanning tools. Hopefully, these articles will help you make an informed choice about which tools to adopt as part of your DevOps / Agile delivery channels.
Analysing static code entails reviewing the source code of an application in a non-running state. SAST tools are designed to analyse uncompiled or decompiled source code for patterns that may indicate a vulnerability in the application code. Some SAST tools interrogate the source code before it is compiled; others analyse the decompiled code within a compiled library or binary. Put simply, the tools look for patterns in the code that hint at potential vulnerabilities. For example, they could discover evidence that a query string is being built by concatenating input, which may suggest an SQL injection vulnerability. Or they may notice that a string provided by an entity outside the source code file is written to a log file, which could be indicative of a log forging vulnerability.
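To make those two patterns concrete, here is a minimal Python sketch of the kind of code a SAST tool would flag. The function names and table schema are my own illustrative choices, not taken from any particular tool's documentation:

```python
import sqlite3
import logging

logger = logging.getLogger(__name__)

def find_user_unsafe(conn, username):
    # The pattern a SAST tool flags: a query string built by
    # concatenating untrusted input (possible SQL injection).
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterised query: the driver handles the input as data,
    # so the injection pattern no longer appears in the source.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

def log_login(username):
    # The second pattern: input from outside the source file written
    # to a log, which a SAST tool may report as potential log forging.
    logger.info("login attempt: %s", username)
```

Passing `"' OR '1'='1"` as the username to the unsafe variant returns every row in the table, while the parameterised variant matches nothing, which is exactly the behavioural difference behind the pattern match.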
In effect, SAST performs a security code review and outputs its findings so that software engineers can easily locate and fix the potential vulnerability. If the SAST tool is integrated into the Integrated Development Environment (IDE), such as Visual Studio, Eclipse or Atom, it provides an early warning to developers as the code is being written, giving them the opportunity to fix the problem before committing the code to a source code repository. This is often cited as an example of shifting security left: performing security testing as the application is being developed, when fixes are quicker and cheaper. SAST tools can also be easily integrated into the CI/CD pipeline as a check to identify vulnerabilities during the integrated build process.
Vendors of SAST tools extol the benefits of their products, particularly their ability to test early in the development lifecycle and to identify the exact lines of code that need to be fixed. Many SAST tools offer a description of each vulnerability they find and examples of how to fix it. However, the biggest challenge facing engineers using these tools is the proliferation of false positive findings each time a scan is run. A false positive occurs when the scan identifies a potential security defect in the code that is ultimately confirmed not to be a vulnerability. The high number of false positives stems from the lack of context available to each scan. For example, the potential log forging vulnerability mentioned above involves a string generated from user input that could be logged by an application logger. All user input is potentially open to manipulation by a bad actor. However, although the SAST tool correctly identifies this weakness, it doesn't necessarily recognise that the string is handed off to another library containing functions that sanitise the input to prevent string manipulation. In effect, SAST is not great at contextualising the source code, or at understanding the flow of data at runtime. This can cause false positive fatigue: too many findings may lead engineers to ignore the results of the scans, or simply turn them off.
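A short sketch of that false positive scenario, with hypothetical names of my own choosing. In a real codebase the sanitising helper would typically live in a separate utility module, which is precisely why many SAST tools fail to connect it to the log call:

```python
import logging

logger = logging.getLogger(__name__)

def sanitise_for_log(value: str) -> str:
    # Remove the newline characters an attacker would need in order
    # to forge extra log entries. In practice this would sit in a
    # shared utility module rather than next to the caller.
    return value.replace("\r", "").replace("\n", "")

def record_login(username: str) -> str:
    entry = "login attempt: " + sanitise_for_log(username)
    # A SAST tool sees user input flowing into the logger and may
    # still report log forging here, even though the input has
    # already been sanitised above.
    logger.info(entry)
    return entry
```

Feeding the function a username containing a newline shows that the forged second log line never materialises, yet a scan without data-flow context across modules would still raise the finding.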
Static code analysis is designed to scan all the code in the target files, so vendors often cite code coverage as a key benefit of SAST. However, these claims need to be treated with caution. The greater the code coverage, the greater the potential for recording false positives, largely because of the lack of contextualisation mentioned earlier. SAST can cover 100% of the source code files, but it doesn't necessarily follow the data through the entire application, particularly where there are calls to external third-party libraries or services. In some cases, SAST may analyse code that is rarely exercised in a deployed system, or that requires specific user journeys to reach. Code coverage statistics can therefore be a red herring when judging how effective SAST tools are.
Most SAST products are highly configurable. Unfortunately, the level of granularity that can be achieved through multiple configuration settings creates its own challenges. For example, within organisations supporting many different products, scalability is problematic if configuration settings are centralised across all applications. Nevertheless, when SAST is set up correctly, it can be very effective at spotting simple logical errors in the code that lead to vulnerable applications.
Unfortunately, I often see SAST tools being misused by organisations and their output being mismanaged, particularly by the security team. The security defects end up recorded in the vulnerability management database and tracked until each issue is resolved. The problem with this approach is that thousands of defects may be recorded and, as we have seen above, many of them could be false positives. Over time, the number of security defects can become unmanageable. In these situations, I usually suggest that security teams work with the software engineers to identify the most prolific types of vulnerability and focus on educating engineers on how to avoid introducing the issues in the first place. Fixes can be asserted through unit tests written by the engineers, while the prevalence of such issues can be monitored over time within the SAST reporting dashboard.
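Asserting a fix through a unit test might look like the following sketch, reusing the hypothetical log-sanitising helper from earlier. The point is that if anyone later removes the sanitisation, the build fails immediately rather than waiting for the next scan:

```python
import unittest

def sanitise_for_log(value: str) -> str:
    # The fix under test (a hypothetical helper): strip newline
    # characters so attacker-supplied input cannot forge extra
    # log lines.
    return value.replace("\r", "").replace("\n", "")

class TestLogForgingFix(unittest.TestCase):
    # Pins the fix in place: a regression here breaks the build,
    # and the SAST dashboard should show the finding stays resolved.
    def test_newlines_are_stripped(self):
        self.assertEqual(
            sanitise_for_log("bob\r\nFAKE admin login"),
            "bobFAKE admin login",
        )

    def test_clean_input_unchanged(self):
        self.assertEqual(sanitise_for_log("alice"), "alice")
```

Run under any CI system with `python -m unittest`, this turns a one-off remediation into a permanent, cheap check.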
So what is the verdict on SAST? Should you use SAST in your organisation? I think SAST is a great starting point for understanding the scope of vulnerabilities within your applications. It can help you identify families of security defects within your products, allowing you to establish a targeted education campaign to reduce their frequency or eliminate them entirely. Be wary of introducing SAST as a mechanism for recording vulnerabilities in your products: the high number of false positives, combined with the lack of contextualisation, can lead to the tools being misused. But introduced correctly, they act as a great code review aid, helping engineers identify potential risks within their code and address them during development.
There are other tools that complement SAST, and in part two of this series I will focus on Dynamic Application Security Testing (DAST).