What happened when non-skilled people security tested 1000 projects?

Can DevSecOps Open Security Testing To Everyone?

At Uleska we focus on moving security testing away from experts running manual tests and towards automating security checks into existing processes. We also believe in continually testing ourselves on this mission, so we asked a few people without software or security skills to test 1000 open source projects and feed the results back to the project teams.

Why? Here are a few reasons:

  • Many of our customers have security teams, or developers who are interested in security, working with our product. We want to push that boundary so that anyone can run security tests.
  • There is a trend in many teams for security testing to be more transparent to developers while still producing meaningful results.
  • We wanted to give something back to the open source community by alerting projects to any issues they may need to look at.
  • Many security tools have UI/UX aimed at security professionals, and we wanted to test how easy our UI is to use for people with no security background at all.

The Experiment

To that end, we set up the following experiment:

  • We chose 1000 JavaScript projects on GitHub. Not the most popular ones, as those will already have had plenty of attention. Instead we chose projects that were recently updated and had between 50 and 200 stars (see the sketch after this list for one way such repositories can be found).
  • We went on UpWork.com and paid non-professionals £5/hour to use the Uleska Platform to automatically run three JavaScript source code security testing tools, and to pass the consolidated results generated by the platform back to the project teams. We made sure they did not have security or programming skills.
  • We didn't set up any of our cyber value-at-risk enumerations, as they would apply to how an open source library is used within the end user application (i.e. the types of data processed, the quantity of data, the environment, and so on).
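For anyone curious how a project list like this can be built, the sketch below uses the public GitHub search API to find recently updated JavaScript repositories with between 50 and 200 stars. It is an illustration of the selection criteria only, not the actual script used in the experiment.

```typescript
// Minimal sketch: list recently updated JavaScript repos with 50-200 stars
// via the public GitHub search API. Illustrative only, not the script
// used in the experiment. Requires Node 18+ for the global fetch.
interface RepoSummary {
  full_name: string;
  html_url: string;
  stargazers_count: number;
  pushed_at: string;
}

async function findCandidateRepos(page = 1): Promise<RepoSummary[]> {
  const query = encodeURIComponent("language:javascript stars:50..200");
  const url =
    `https://api.github.com/search/repositories?q=${query}` +
    `&sort=updated&order=desc&per_page=100&page=${page}`;

  const response = await fetch(url, {
    headers: { Accept: "application/vnd.github+json" },
  });
  if (!response.ok) {
    throw new Error(`GitHub search failed: ${response.status}`);
  }

  const body = (await response.json()) as { items: RepoSummary[] };
  return body.items;
}

// Example usage: print the repository URLs that would be handed to the testers.
findCandidateRepos().then((repos) =>
  repos.forEach((r) =>
    console.log(`${r.html_url} (${r.stargazers_count} stars, updated ${r.pushed_at})`),
  ),
);
```

Note that the GitHub search API caps results at 1,000 repositories per query, which conveniently matches the scale of this experiment.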

Also, in this version of the experiment we chose not to triage the results. Why? The Uleska Platform has functionality to automatically remove false positives and duplicates, and we see this used heavily by our customers. Since this experiment covered open source projects rather than enterprise software teams, we wanted to see how much of a pain (or not) reports containing a mix of real and false positive issues would be for open source teams.

Figure: some of the vulnerabilities found, split by user and application (project).

The Outcomes

The experiment was great and gave everyone insights, including some improvements we can make to our own DevSecOps orchestration. We ran into a few issues on GitHub (which we'll cover later). Some of the main learnings were:

  1. With simple instructions, people without security or programming skills were able to on-board hundreds of projects and run a number of security tools easily, mainly due to the abstraction the Uleska Platform provides. Instead of running command line tools or setting up scan profiles, testing was just the click of a button. This on-boarding and execution took only a few days.
  2. By running such operations frequently, we discovered ways to speed up our own UI/UX, as well as our API, to make this even simpler, aiming for 2-3 clicks to set up a project/application test. We got feedback from the group as well. In this experiment the team used the Uleska Platform UI to kick off the testing, rather than triggers from Git, DevOps tools, or continuous integration, though the effect would be the same (see the sketch after this list).
  3. There were over 35,000 issues registered by the tooling, some of which were false positives, while others were acknowledged by the project teams as issues to be fixed. Around 10% of the projects tested didn't return any issues at all. As mentioned, our next iteration of this experiment will automatically remove the false positives. We want to thank the projects for their kind words when issues were raised with them.
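To give a feel for what a CI-driven trigger would look like instead of the UI, here is a hedged sketch of a script a pipeline step could run after each push. The endpoint, payload, and token variable are placeholders for illustration only; they are not the actual Uleska API.

```typescript
// Hypothetical sketch of kicking off a scan from CI instead of the UI.
// The endpoint, payload shape, and SCAN_API_TOKEN variable are
// placeholders, not the real Uleska API.
async function triggerScan(repoUrl: string): Promise<void> {
  const response = await fetch("https://platform.example.com/api/scans", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.SCAN_API_TOKEN}`,
    },
    body: JSON.stringify({ repository: repoUrl }),
  });
  if (!response.ok) {
    throw new Error(`Failed to trigger scan: ${response.status}`);
  }
}

// A continuous integration job would call this on every push, so results
// land in the platform without anyone needing to touch the UI.
triggerScan("https://github.com/example-org/example-repo");
```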

Challenges with this Experiment

We did run into some logistical challenges with this experiment, though. Firstly, we created new GitHub accounts to find and extract the GitHub URL for each project and pass it into the Uleska Platform so the codebase could be tested. These new GitHub accounts were also used to update the projects with the report of security issues. This meant the accounts were not creating GitHub projects or code, and so after a time they were flagged by GitHub and could no longer submit issue reports to projects. For this reason we stopped short of the full 1000 projects, stopping around the 730 project mark.
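For reference, submitting a findings report as an issue is a single call to the documented GitHub REST API. The sketch below uses @octokit/rest; the token, repository names, and report text are placeholders rather than what was actually submitted.

```typescript
// Sketch of filing a findings report as a GitHub issue using the
// documented GitHub REST API (@octokit/rest). Token, repo names, and
// report body here are placeholders.
import { Octokit } from "@octokit/rest";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

async function fileReport(
  owner: string,
  repo: string,
  reportMarkdown: string,
): Promise<void> {
  await octokit.rest.issues.create({
    owner,
    repo,
    title: "Security testing results from an automated scan",
    body: reportMarkdown,
  });
}

// New accounts that only open issues and never push code can trip
// GitHub's anti-spam checks, which is what cut the experiment short.
fileReport("example-org", "example-repo", "## Findings\n\n...");
```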

We also had a few projects react negatively to being sent security reports out of the blue. Sometimes this was because the false positives weren't removed; other times our reports were perceived as spam. We're sorry to any projects that felt that way, and it definitely wasn't our intention. In our next iteration of this experiment we'll remove the likely false positives using the Uleska technology, and we look forward to helping more open source projects stay secure.
