The cyber security industry has evolved how it measures the impact of technical security issues over the last 20 years. It has needed to: discovering a technical security issue is only half the story, and the impact of that issue drives the priority the business gives to addressing it.
Security issues, just like other technical and software bugs, can vary greatly in their impact. With increasing regulation, and the costs of cyber breaches running into millions, the ability to effectively measure the impact, or cost, a security vulnerability could have on the business has grown in importance.
Let’s take an example. Suppose an organisation has two SQL Injection flaws.
The first is in an internal system holding only public data about meeting rooms, used by just 2-5 non-technical, vetted employees, in an organisation where the ability to coordinate meeting rooms is not critical to the core business. This SQL Injection flaw has a low likelihood of being exploited by the few non-technical employees, and even if it were, there is not much for an attacker to gain, and not much for the organisation to lose.
The second is an external-facing system holding sensitive data within the remit of numerous compliance regulations, potentially open to any user on the internet, in a public organisation where the use of that data is vital to the operation of the business. Here you have a breach similar to the TalkTalk hack, which ended up costing the business over £60m.
These two security issues share exactly the same technical flaw, an SQL Injection; the impact of exploiting each, however, is vastly different.
Common measurements for the impact of security issues include priorities such as Critical, High, Medium, and Low, or colour codes such as Red, Amber, Green. These measurements allow for easy identification and prioritisation of security issues. Such simplicity is typically required for policy implementation, and because of the sheer number of security issues that may be identified.
Prioritisation is important in many organisations, as remediation resources are limited and the volume of identified issues is large. The questions that arise from High, Medium, Low, and Red, Amber, Green classifications are: what do they mean? What is a High-level issue? Who decides whether an issue is Medium or not, and how do they do so consistently?
The security industry then popularised the Common Vulnerability Scoring System (CVSS), which assigns security issues a number between 0.0 and 10.0. This system is popular today and is a common lexicon for describing the impact of a technical vulnerability: an issue scoring 9.5 is far more impactful than one scoring 2.4.
Many organisations then map CVSS scores to High/Medium/Low and Red/Amber/Green classifications. For example, anything over 7 might be a High, and anything over 3 could be a Medium. CVSS itself has some suggestions for this mapping.
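The CVSS v3.x specification publishes qualitative severity bands for exactly this kind of mapping (0.1-3.9 Low, 4.0-6.9 Medium, 7.0-8.9 High, 9.0-10.0 Critical). A minimal sketch of such a lookup:

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.x base score to the qualitative severity
    bands suggested in the CVSS v3.x specification."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

# The two scores from the example above:
print(cvss_severity(9.5))  # Critical
print(cvss_severity(2.4))  # Low
```

Organisations that prefer their own thresholds (such as "anything over 7 is a High") can of course substitute their own band boundaries.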
CVSS brings more aspects of a security issue into its calculation. For the base score (in CVSS v3.x) these include: Attack Vector, Attack Complexity, Privileges Required, User Interaction, Scope, and the Confidentiality, Integrity, and Availability impacts.
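These base metrics feed a formula published in the CVSS v3.1 specification. The sketch below covers only the Scope: Unchanged case, with metric weights taken from the specification; note the real spec defines a precise "Roundup" function to avoid floating-point edge cases, which `math.ceil` merely approximates here:

```python
import math

# CVSS v3.1 metric weights (Scope: Unchanged) from the specification.
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20},  # Attack Vector
    "AC": {"L": 0.77, "H": 0.44},                        # Attack Complexity
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},             # Privileges Required
    "UI": {"N": 0.85, "R": 0.62},                        # User Interaction
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},             # C/I/A impact
}

def base_score(av, ac, pr, ui, c, i, a):
    """CVSS v3.1 base score, Scope: Unchanged only."""
    iss = 1 - (1 - WEIGHTS["CIA"][c]) * (1 - WEIGHTS["CIA"][i]) * (1 - WEIGHTS["CIA"][a])
    impact = 6.42 * iss
    exploitability = 8.22 * (WEIGHTS["AV"][av] * WEIGHTS["AC"][ac]
                             * WEIGHTS["PR"][pr] * WEIGHTS["UI"][ui])
    if impact <= 0:
        return 0.0
    # Round up to one decimal place, capped at 10.
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# A network-reachable flaw with full impact, e.g. an SQL Injection
# with vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H:
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # 9.8
```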
This is useful, as we can now say one security issue has a greater impact than another because, for example, it is easier to attack remotely (and so more likely to be breached), or could leak confidential rather than non-confidential information. Security issues are evaluated by a security professional and assigned a CVSS score. This works well, though it is hard to scale, as a CVSS score must be assigned to every issue before proper prioritisation of fixes can begin.
However, CVSS does not consider some aspects of a technical security issue that are more vital to the actual impact, such as: whether the affected data falls under regulations like GDPR or PCI DSS, the volume of data the issue could expose, and the time and cost of recovery.
These are commonly lacking in cybersecurity tools. For example, how can a tool declare that an issue is subject to GDPR or PCI DSS when it does not know whether the target application contains personal or financial information? How can it declare an issue to be medium level when it does not know whether the issue would affect one piece of sensitive data, or 25 million? How can it declare an issue to be medium level when it does not know how long it would take the controlling organisation to restore the database from backups and investigate?
These declarations of issues as Critical, High, Red, or as affecting regulatory compliance, are what we call ‘Regulatory False Positives’. They mean a security or development team can prioritise fixes incorrectly, leaving other, greater-risk security issues unfixed or delayed. The net result is that an organisation is not effectively reducing its risk, or is allowing changes to go live that present a much greater risk than understood.
This is why further measures of impact have been developed, such as the FAIR Institute risk taxonomy, a global standard contributed to by over 3,000 member companies. The FAIR model brings these other aspects of fines, downtime, and likelihood of exploitation directly into the risk impact measurement.
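At its core, FAIR quantifies risk as loss event frequency multiplied by loss magnitude. A minimal sketch of that idea applied to the two SQL Injection examples above; all frequency and loss figures here are illustrative assumptions, not FAIR Institute data:

```python
def annualised_risk(loss_event_frequency: float, loss_magnitude: float) -> float:
    """FAIR's core relationship: risk = how often a loss event
    occurs per year x how much each event costs."""
    return loss_event_frequency * loss_magnitude

# Illustrative figures only: the internal meeting-room system vs
# the external system holding regulated data.
internal = annualised_risk(0.05, 5_000)       # rare event, cheap recovery
external = annualised_risk(0.5, 60_000_000)   # fines, downtime, reputation

print(f"Internal system: £{internal:,.0f}/yr")
print(f"External system: £{external:,.0f}/yr")
```

In a full FAIR analysis both inputs are decomposed further (threat event frequency, vulnerability, primary and secondary loss) and are usually expressed as ranges rather than point estimates, but even this simple form shows how the same technical flaw yields wildly different risk figures.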
This is the base risk measurement used by the Uleska Platform. Whilst the risk can be presented however the user requires, the FAIR Institute methodology is used as the base calculation, and the Uleska Platform allows that risk to be visualised and represented in several forms.
For more details, view our blog on How the Uleska Platform Automatically Calculates Risk.