Beyond CVSS - how to better evaluate the risk of cyber issues

Why Measure Risk of Technical Issues?

Over the last 20 years, the cyber security industry has evolved how it measures the impact of technical security issues.  It has needed to: discovering a technical security issue is only half the story - the impact of that issue drives the priority the business places on addressing it.

Security issues, just like other technical and software bugs, can vary greatly in their impact.  With increasing regulation, and the costs of cyber breaches running into millions, the ability to effectively measure the impact, or cost, that a security vulnerability could have on the business has grown in importance.


Let’s take an example.  Suppose an organisation has two SQL Injection flaws.

The first example is in an internal meeting-room system holding only public data, used by 2-5 vetted, non-technical employees, in an organisation where the ability to coordinate meeting rooms is not critical to the core business.  This SQL Injection flaw has a low likelihood of being exploited by those few non-technical employees, and even if it were, there is little to gain for the attacker and little to lose for the organisation.

The second example is in an external-facing system holding sensitive data within the remit of numerous compliance regulations, potentially open to any user on the internet, in a public organisation where the use of that data is vital to the operation of the business.  Here you have a breach similar to the TalkTalk hack, which ended up costing the business over £60m.

These two security issues share exactly the same technical flaw - an SQL Injection - yet the impact of exploiting each one differs enormously.
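For context, the technical flaw really is identical in both cases.  A minimal, illustrative sketch in Python (hypothetical table and column names, taken from neither system) shows the classic vulnerable pattern and its fix:

```python
import sqlite3

conn = sqlite3.connect("bookings.db")  # hypothetical database

def find_bookings(room_name: str):
    # Vulnerable: user input is concatenated straight into the SQL,
    # so input such as "x' OR '1'='1" rewrites the query itself.
    query = "SELECT * FROM bookings WHERE room = '" + room_name + "'"
    return conn.execute(query).fetchall()

def find_bookings_safe(room_name: str):
    # Fixed: a parameterised query keeps the input as data, not SQL.
    return conn.execute(
        "SELECT * FROM bookings WHERE room = ?", (room_name,)
    ).fetchall()
```

The code is the same whether it sits in a meeting-room tool or a customer database; only the surrounding business context changes the risk.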



What Measures Exist?

Common measurements for the impact of security issues include priorities such as Critical, High, Medium, and Low, or colour codes such as Red, Amber, Green.  These measurements allow for easy identification and prioritisation of security issues - an ease that is typically needed both for policy implementation and because of the sheer number of security issues that may be identified.

Prioritisation is important in many organisations because:

  1. There are often more issues identified than a business has time to fix.  When faced with 10,000 security issues, a business cannot stop operating until all are fixed.  Instead, issues must be prioritised, and the highest priority addressed first.
  2. Security programs operate in frequently changing environments, and understand that new security issues are likely to be introduced as new features or updates go online.  When new security issues are introduced, a risk-based approach can be taken to determine whether the change can go live, depending on the priority of the risk introduced.  For example, it’s not uncommon to see programs state something like “New changes cannot go online if any Critical security issues are found”, or “High severity issues must be fixed within 30 days, while Medium severity issues must be fixed within 90 days.”  A sketch of such a release gate follows this list.
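As a minimal sketch in Python (with illustrative severity names and SLA windows, not a prescribed policy), a release gate of that kind might look like:

```python
from datetime import date, timedelta

# Illustrative fix-by windows, matching the example policy above.
SLA_DAYS = {"High": 30, "Medium": 90}

def can_go_live(issues: list[dict], today: date) -> bool:
    for issue in issues:
        if issue["severity"] == "Critical":
            return False  # no Critical issues may go online
        sla = SLA_DAYS.get(issue["severity"])
        if sla and today > issue["found"] + timedelta(days=sla):
            return False  # issue has breached its fix-by window
    return True

# A High issue found 19 days ago is still within its 30-day window.
print(can_go_live([{"severity": "High", "found": date(2024, 1, 1)}],
                  today=date(2024, 1, 20)))  # True
```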


The question that arises from High, Medium, Low and Red, Amber, Green classifications is: what do they mean?  What is a High level issue?  Who decides whether an issue is Medium or not, and how do they do so consistently?



Common Vulnerability Scoring System

The security industry then popularised the Common Vulnerability Scoring System (CVSS), which assigns each security issue a number between 0.0 and 10.0.  The system remains widely used today and provides a common lexicon for describing the impact of a technical vulnerability: an issue scoring 9.5 is far more impactful than one scoring 2.4.

Many organisations then map CVSS scores to High/Medium/Low and Red/Amber/Green classifications.  For example, anything over 7.0 might be a High, and anything over 3.0 a Medium; CVSS itself suggests such a mapping.
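As a minimal sketch in Python, using the qualitative severity bands suggested by the CVSS v3.1 specification, that mapping might look like:

```python
def cvss_to_severity(score: float) -> str:
    """Map a CVSS v3.x base score to the qualitative bands
    suggested by the CVSS v3.1 specification."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores run from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_to_severity(9.5))  # Critical
print(cvss_to_severity(2.4))  # Low
```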

CVSS brings more aspects of a security issue into its calculation.  Its Base metrics include the following (a worked vector example follows the list):

  • Attack Vector (can the attack be performed remotely, or does it need physical access?)
  • Attack Complexity (can anyone exploit it, or does it require skill?)
  • Privileges Required (does the attacker need to be logged in?)
  • User Interaction (does the person being attacked need to do anything?)
  • Scope (can the exploit affect other systems?)
  • Confidentiality (of the data involved)
  • Integrity (of the data involved)
  • Availability (of the data involved)
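These metrics are recorded as a vector string alongside the score.  As a small illustration, the well-known v3.1 vector below (which scores 9.8, Critical) can be split back into its Base metrics:

```python
# Splitting a CVSS v3.1 vector string into its Base metrics.
vector = "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"

version, *metrics = vector.split("/")
base = dict(metric.split(":") for metric in metrics)

print(base["AV"])  # 'N' - Attack Vector: Network (remotely exploitable)
print(base["C"])   # 'H' - Confidentiality impact: High
```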


This is useful, as we can now say that one security issue has a greater impact than another - for example, because it is easier to attack remotely, and therefore much more likely to be breached, or because it could leak confidential rather than non-confidential information.  Security issues are evaluated by a security professional and assigned a CVSS number.  This works well, though it is hard to scale, as a CVSS number must be assigned to every issue before proper prioritisation of fixes can begin.

However, CVSS does not consider some aspects of a technical security issue that are vital to the actual business impact, such as:

  1. Could the breach affect data protected under regulatory compliance, which can then lead to large fines?
  2. How many data items are we talking about - 1 passport number, or 25 million passport numbers?
  3. Is the controlling company publicly traded, and what is its turnover?  Fines under certain regulations, such as GDPR, scale with turnover.
  4. Could the breach bring a critical system down, or take time to recover from, resulting in further costs to the core business?  Think of Travelex being down for weeks, and what that cost the business on top of the direct cyber costs.


These considerations are commonly lacking in cyber security tools.  For example, how can a tool declare an issue subject to GDPR or PCI DSS when it doesn’t know whether the target application contains personal or financial information?  How can it declare an issue Medium when it doesn’t know whether that issue would affect 1 piece of sensitive data or 25 million?  How can it declare an issue Medium when it doesn’t know how long it would take the controlling organisation to restore the database from backups and investigate?

These declarations of issues as Critical, High, or Red, or as affecting regulatory compliance, can be what we call ‘Regulatory False Positives’.  They can lead a security or development team to prioritise fixes incorrectly, leaving security issues of greater risk unfixed or delayed.  The net result is that an organisation is not effectively reducing its risk, or is allowing changes to go live that present a much greater risk than understood.



Effectively Calculating Business Risk

This is why further measures of impact have been developed, such as the FAIR Institute risk taxonomy, a global standard contributed to by over 3,000 member companies.  The FAIR model brings these other aspects - fines, downtime, and likelihood of exploitation - directly into the risk impact measurement.
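As a rough, illustrative sketch (with hypothetical figures, not the FAIR Institute’s full taxonomy or any vendor’s actual model), a FAIR-style monetary estimate multiplies how often a loss event is expected by what each event would cost:

```python
# All figures below are hypothetical, for illustration only.
loss_event_frequency = 0.2     # expected breach events per year
records_at_risk = 1_000_000    # sensitive records exposed per event
cost_per_record = 4.0          # response/notification cost per record (£)
regulatory_fine = 500_000.0    # expected fine per event (£)
downtime_cost = 250_000.0      # lost business while systems are down (£)

# Loss magnitude: what a single loss event would cost the business.
loss_magnitude = (records_at_risk * cost_per_record
                  + regulatory_fine + downtime_cost)

# Annualised monetary risk - a simple cyber value-at-risk figure.
annualised_risk = loss_event_frequency * loss_magnitude
print(f"£{annualised_risk:,.0f} per year")  # £950,000 per year
```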

This is the base risk measurement used by the Uleska Platform.  While risk can be presented however the user requires, the FAIR Institute methodology is used as the base calculation.  The Uleska Platform allows risk to be visualised and represented as:

  • Red/Amber/Green
  • Critical/High/Medium/Low
  • “Security Score” (between 1 and 1000)
  • CVSS
  • Cyber Value-at-risk (monetary risk estimate)


For more details, view our blog on How the Uleska Platform Automatically Calculates Risk.

