It follows that the faster you can measure, the faster you can improve, and when the aspect you're seeking to improve is as important to the business as cyber security, you want to get ahead of the game.
In cyber security, those measures show the issues and risks present, as well as how effective security programs are at removing them. Aspects we want to measure include:
• Raw numbers of issues, typically recorded at different severity levels, such as how many High vs Medium.
• Risk or impact of those issues.
• How many of those issues are introduced, or fixed (removed) in each cycle.
• Mean time to fix, i.e. how long it typically takes to resolve the most important issues.
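The metrics above can be derived mechanically from issue-tracking data. As a minimal sketch, assuming hypothetical issue records with invented field names (real tooling will differ), the severity counts and mean time to fix might be computed like this:

```python
from collections import Counter
from datetime import date
from statistics import mean

# Hypothetical issue records; all field names and dates are illustrative only.
issues = [
    {"severity": "High",   "opened": date(2021, 3, 1),  "fixed": date(2021, 3, 8)},
    {"severity": "High",   "opened": date(2021, 3, 5),  "fixed": None},
    {"severity": "Medium", "opened": date(2021, 3, 10), "fixed": date(2021, 3, 24)},
    {"severity": "Medium", "opened": date(2021, 3, 12), "fixed": None},
]

# Count of unresolved issues per severity level (High vs Medium, etc.).
open_by_severity = Counter(i["severity"] for i in issues if i["fixed"] is None)

# Mean time to fix, in days, over the issues that have been resolved.
fix_times = [(i["fixed"] - i["opened"]).days for i in issues if i["fixed"]]
mean_time_to_fix = mean(fix_times) if fix_times else None

print(open_by_severity)   # open issue counts keyed by severity
print(mean_time_to_fix)   # average days from open to fix
```

In practice these figures would be pulled automatically from scanners or trackers each cycle, rather than assembled by hand.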
The act of identifying these metrics, across many software systems in an organisation, can itself be time-consuming if not automatically tracked by existing tooling. However, the frequency and consistency of testing directly determines the validity of those metrics. If these metrics come from manual penetration tests executed every 6 months, it can take 12-18 months to identify any trends that will then inform changes to security strategy.
As we have seen in the cyber security industry, 12-18 months can be a very long time. If systemic problems in a cyber security program run for this long, the result is a large backlog of issues and risks residing in the organisation. Removing that backlog then requires extra resources and programs, all while the source problems continue to exist.
Wrapping security into DevOps and Agile environments gives much more frequent touch points. Security issues can be found, or shown to have been fixed, in daily or weekly cycles. This has the obvious advantage of showing the current (to the day or week) state of security, and it also allows quick, agile decisions to be made in a security program.
Combining this frequent measurement, as part of the act of security testing, with simple and efficient reporting gives stakeholders information that is easy to use. Auditors can see regulatory compliance and the progress of security programs without becoming information security experts. C-level executives can be given reports showing week-by-week progress on issue numbers, risk levels, and other performance metrics.
When you introduce new training, tooling, procedures, and so on, you get feedback within weeks on the effect they are having, allowing you to evaluate efficiently whether and how they are working. For example, if you are trialling a new tool, you can know within 4-5 weeks whether it will produce a return on investment.
The analysis firm Forrester suggests application security metrics should also inform strategy. For example, if the last 2 months have shown a growth in SQL Injection issues, training, tooling, or software-level protections can be introduced to protect the organisation swiftly.
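Spotting such a trend can be as simple as comparing recent cycle counts against earlier ones. The sketch below is illustrative only: the weekly counts are invented, and the trend check is a deliberately crude moving-average comparison, not any particular product's algorithm:

```python
# Weekly counts of one issue category (e.g. SQL Injection) from recent scans;
# numbers are invented for illustration.
weekly_sqli_counts = [2, 3, 3, 5, 6, 8, 9, 11]

def is_growing(counts, window=4):
    """Crude trend check: is the mean of the most recent `window` cycles
    higher than the mean of the preceding cycles?"""
    recent, earlier = counts[-window:], counts[:-window]
    return bool(earlier) and sum(recent) / len(recent) > sum(earlier) / len(earlier)

if is_growing(weekly_sqli_counts):
    print("SQL Injection issues trending up: consider targeted training or tooling")
```

With weekly testing, a check like this surfaces a growing issue category in weeks rather than the 12-18 months a twice-yearly penetration test cycle allows.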
The Uleska Platform works within Agile and DevOps software environments to automate and orchestrate technical security testing, whilst recording the metrics to inform on risk and performance of security programs. Charts are automatically generated based on actual results of security testing, and trends can be observed within weeks.
Security performance can be viewed across teams and departments, including external or third-party suppliers, to identify the levels of security issues or risks coming from those sources. This means management can pinpoint which security programs or tools are needed for individual suppliers or teams, and react accordingly.