Whether they come from security tooling or from manual penetration testing, issues can be reported that are either so minor they pose no real risk, or simply incorrect, meaning no actual vulnerability exists. False positives are cited as one of the major annoyances, and deciding factors, for teams purchasing security tooling.
You can get into heated debates about the topic of false positives, and there’s a great in-depth article over at https://dzone.com/articles/the-curious-case-of-false-positives-in-application that gets mathematical on the subject. However, in every security program, dealing with false positives becomes part of scaling security overall.
Before we move on, there’s another aspect to false positives that we at Uleska term ‘Regulatory False Positives’: real, exploitable issues, but ones that do not affect personal, financial, or other sensitive data types, and thus should not be reported with as high a priority as others. This matters when you’re racing through hundreds or thousands of issues.
With DevOps Security and DevSecOps growing in popularity, the need to wrap automated security tooling into the SDLC is clear. However, that raises the question of how we deal with false positives in an automated cycle. Stopping builds or releases because false-positive issues were flagged will not win anyone any friends, yet we still need to be alerted to the real issues as soon as possible.
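To make the tension concrete, here is a minimal sketch of a CI gate over scanner output. The findings list, its field names (`title`, `severity`, `false_positive`), and the severity values are all assumptions for illustration, not any specific tool’s real report schema:

```python
# Sketch of a CI security gate. In a real pipeline, the findings would
# be parsed from the JSON/XML report a scanner wrote earlier in the job;
# here they are inlined with hypothetical field names.
def gate(findings, fail_severities=("High", "Critical")):
    """Return findings severe enough to fail the build, ignoring
    anything already marked as a false positive."""
    return [
        f for f in findings
        if f.get("severity") in fail_severities
        and not f.get("false_positive", False)
    ]

findings = [
    {"title": "SQL injection in /login", "severity": "High"},
    {"title": "Missing X-Frame-Options header", "severity": "Low"},
    {"title": "XSS in /search", "severity": "High", "false_positive": True},
]

blockers = gate(findings)
for f in blockers:
    print(f"BLOCKER: {f['title']}")

# In a real CI job, a non-zero exit code fails the stage:
# sys.exit(1 if blockers else 0)
```

Without the `false_positive` flag being tracked somewhere, the XSS finding above would fail every build, which is exactly the problem described.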
It really depends on the type of security tooling you’re going to use. There are three broad types of security tool. First, there are the really good commercial tools out there. They have SAST, DAST, or IAST scanning engines that find issues, and typically a user interface where reports can be viewed and issues can be edited. This editing can, though not always, include commenting on an issue, deleting it, marking it as a false positive, generating a report, and so on. This is great for a one-off security test.
However, in the world of DevSecOps, security testing is continual: perhaps every build, every day, or every week. Having to re-classify the same issues as false positives over and over again becomes a major blocker to an efficient security program. Some tools support this, some won’t. And with many companies moving to multiple commercial tools, teams end up setting different false positives, in different tools, in different ways.
The second type is the vast range of open source security tools, including the OWASP tools, Kali Linux, the mass of great stuff on GitHub, and so on. These tools may include a decent user interface, though typically not to the same extent as the commercial tools, and often they are basic: command-line tools that pump issues to stdout. They typically have no way of handling false positives. They do their (great) job of finding issues and reporting them, and that’s their job done. Wrapped into a DevSecOps environment, they will report the same issues over and over again. When valid issues are fixed, they drop out of the results; but since false positives, by definition, will not be fixed, they will continue to be reported.
So why use open source tools rather than commercial ones? Apart from the obvious budget point, the breadth of open source tools can simply find more issues than commercial tools alone. All scanning engines are different, and handle different languages, frameworks, and so on differently. The coverage needed in today’s security environments means multiple tools are required to cover all areas of software: not just the range of AppSec issues, but also cloud, container, network, microservices, API, and other types of issues.
The last group of test tools are the custom, bespoke ones: the scripts that test something an ‘off-the-shelf’ tool would never check for. Usually this is business logic specific to one piece of software, for example, can the US team view or edit the European accounts? It could also come from a company’s own security program, such as checking whether passwords shorter than 10 characters can be set, or whether port 4545 is open on any of the boxes.
By their nature, these custom scripts are really just coded security checks that find an issue, or conduct a check, and typically return a result. They will not have a fancy UI or a false-positive interface. They are extremely important in a DevSecOps environment, but can add to the noise.
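A bespoke check of this kind can be very small indeed. The sketch below implements the port example from above (port 4545 is just the number used in the text); like most custom scripts, it returns a simple pass/fail with no UI and no false-positive handling:

```python
import socket

def port_is_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, unreachable, etc.
        return False

if __name__ == "__main__":
    # The host and port here are the example values from the text.
    if port_is_open("127.0.0.1", 4545):
        print("FAIL: port 4545 is open")
    else:
        print("PASS: port 4545 is closed")
```

Run on every build, a script like this will print the same FAIL line forever if the finding is accepted risk, which is the noise problem described.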
The Uleska Platform is a centralised orchestration and automation platform that fits into CI/CD & DevOps to run the required security tooling. It integrates with commercial, open-source, and custom security tools and brings all the issues back into a centralised database. Each run of the security testing (typically for each build/deploy) is stored separately, yet it keeps a memory of what has happened before.
Therefore, if you run a set of tools, say commercial tools X and Y, SQLMap, OWASP ZAP, Clair, Nmap, and OWASP Dependency-Check, for the first time against an application, they will return all their issues into one place. Some of these will be real issues, some will be false positives.
The Uleska UI allows you to quickly mark any issue as a false positive. It is then no longer reported in the (main) UI or PDF reports, nor counted against the risk of an application. Marked issues are still viewable in the UI, should anyone wish to reinstate them as real issues.
Better yet, because those issues will not be fixed, they will show up again in the next (and future) runs of those same security tools. Now, however, the Uleska Platform automatically remembers them as false positives and automatically leaves them out of the results.
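One way such memory can work (an illustrative sketch, not Uleska’s actual implementation) is to fingerprint each issue on its stable fields and keep a persistent set of suppressed fingerprints that is checked on every run:

```python
import hashlib

# Fingerprint an issue on fields that stay stable across runs. The
# field names match the hypothetical normalised issue shape used above.
def fingerprint(issue):
    key = "|".join([issue["tool"], issue["title"], issue["location"]])
    return hashlib.sha256(key.encode("utf-8")).hexdigest()

def filter_known_fps(issues, suppressed):
    """Drop any issue whose fingerprint was previously marked a false positive."""
    return [i for i in issues if fingerprint(i) not in suppressed]

run1 = [
    {"tool": "ZAP", "title": "Password autocomplete", "location": "/login"},
    {"tool": "ZAP", "title": "SQL injection", "location": "/search"},
]

# An analyst marks the autocomplete finding as a false positive once...
suppressed = {fingerprint(run1[0])}

# ...so when the next run reports the same issues, it is filtered out
# automatically, with no re-triage needed.
run2 = list(run1)
remaining = filter_known_fps(run2, suppressed)
print([i["title"] for i in remaining])  # → ['SQL injection']
```

In practice the suppressed set would live in a database keyed per application, so the decision survives across builds and across teams.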
This means the Uleska Platform acts as a false-positive remover for every tool you use, whether commercial, open source, or custom tools your own teams have created. This greatly reduces the manual overhead of dealing with false positives from security testing and enables sensible security automation to be wrapped into DevSecOps.
The next step for the Uleska Platform is to automatically flag which issues reported by tools are typically false positives. For example, you may not care about the password autocomplete issues typically returned, or some of the header issues flagged. These will be set as ‘likely false positives’ and not raised until someone actions them, again vastly reducing the manual interaction needed during security testing.