How the Uleska Platform Automatically Calculates Risk

When measuring cyber risk, it is important that the measure reflects the true impact of a technical flaw or issue on the business.  Based on the experiences of the last 20+ years, better measures of risk impact have been iterated and developed, such as the FAIR Institute risk taxonomy.  The FAIR methodology takes aspects of risk and impact that were not previously included, such as the size and likelihood of regulatory fines, the impact or cost of downtime, and the likelihood of technical exploitation by the user base, and builds them directly into the risk impact measurement.


This is the base risk measurement used by the Uleska Platform.  Whilst the risk can be presented however the user requires, the FAIR Institute methodology is used as the base calculation.  To note, the Uleska Platform allows risk to be visualised and represented as follows:

  • Red/Amber/Green
  • “Security Score” (between 1 and 1000)
  • CVSS
  • Cyber Value-at-risk (monetary risk estimate)
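To make the representations above concrete, here is a minimal sketch of mapping the 1–1000 "Security Score" onto a Red/Amber/Green band. The band thresholds are illustrative assumptions, not the platform's actual boundaries.

```python
def rag_band(security_score: int) -> str:
    """Map a 1-1000 security score to a Red/Amber/Green band.

    The thresholds here are illustrative placeholders; the
    platform's real band boundaries may differ.
    """
    if not 1 <= security_score <= 1000:
        raise ValueError("score must be between 1 and 1000")
    if security_score >= 700:
        return "Green"
    if security_score >= 400:
        return "Amber"
    return "Red"
```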


The Uleska Platform calculates this risk based on the many pieces of information it has access to.  At a high level these can be categorised into:

  1. The technical issue that the application is vulnerable to
  2. The nature of the application that is vulnerable
  3. The organisation that would be breached if the vulnerability is exploited



To go deeper into this information, let’s look at the technical information first.  This is taken from the CVSS score of the vulnerability found.  Many commercial and open source security tools supply a CVSS value in the testing results they feed to the Uleska Platform.  The platform can use this value, or allow operators to set their own.  From the CVSS value, the Uleska Platform can understand how easy or hard the issue would be to exploit, whether exploiting it allows a change of context, and whether user interaction is required.  For tooling that does not provide CVSS numbers, these can be pre-built into the Uleska Platform (when our engineering teams onboard new security tools) or can be set by operators upon the first detection of the security issue.  These then become sticky, meaning they only need to be set or changed once.
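The CVSS attributes mentioned above (attack complexity, user interaction, scope change) live in the standard CVSS v3.x vector string. As a sketch of how such a string can be unpacked, here is a small hand-rolled parser; it is an illustration, not the platform's actual implementation.

```python
def parse_cvss_vector(vector: str) -> dict:
    """Parse a CVSS v3.x vector string into a few exploitability metrics.

    Example vector: "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"
    """
    parts = vector.split("/")
    if not parts[0].startswith("CVSS:3"):
        raise ValueError("only CVSS v3.x vectors are handled in this sketch")
    metrics = dict(p.split(":", 1) for p in parts[1:])
    return {
        "attack_complexity": metrics.get("AC"),   # "L" = low effort to exploit
        "user_interaction": metrics.get("UI"),    # "N" = no user interaction needed
        "scope_change": metrics.get("S") == "C",  # exploit escapes its own context
    }
```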

At the application layer, the Uleska Platform quickly establishes simple but effective attributes of the application during the initial onboarding.  Information such as:

  • How many users of the application
  • Internal or external users
  • Authentication used
  • Downtime cost per day
  • Restoration costs per day
  • Nature of data used in assets of the system, including financial, personal, health, etc.
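The onboarding attributes listed above can be thought of as a simple application profile. The sketch below shows one possible shape for that record; the field names and sample values are assumptions for illustration, not the platform's internal schema.

```python
from dataclasses import dataclass, field

@dataclass
class ApplicationProfile:
    """Onboarding attributes the risk engine draws on (illustrative names)."""
    user_count: int
    external_users: bool            # True if the user base is internet-facing
    authentication: str             # e.g. "none", "single-factor", "multi-factor"
    downtime_cost_per_day: float    # cost of the application being unavailable
    restoration_cost_per_day: float
    data_types: list = field(default_factory=list)  # e.g. ["financial", "personal"]

# A hypothetical externally-facing application holding sensitive data.
app = ApplicationProfile(
    user_count=50_000,
    external_users=True,
    authentication="single-factor",
    downtime_cost_per_day=20_000.0,
    restoration_cost_per_day=5_000.0,
    data_types=["financial", "personal"],
)
```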


Using this application-level information, the Uleska Platform can then match technical vulnerabilities (from the CVSS returned from the security tool) with the nature of data that could be exploited, the size of that data set, the attack vectors, and estimated costs for downtime/restoration.

Finally, the organization level information gives a fuller picture of how exposed the company could be to a cyber attack.  A one-time setup provides information including reputational value costs, response costs, share prices and maximum fine estimates.  This information allows the overall risk values to be completed with estimates of the potential financial impact of this security issue.  If a technical issue might bring an application down for 5 days, instead of 1, then there will be a greater reputational and response cost.



How is the Risk Calculated?

The Uleska Platform then uses this information, which is mostly collected automatically, to immediately estimate the risk of each and every vulnerability as it is returned to the Uleska Platform by the security tool.  This is performed through a calculation engine based on the FAIR Institute methodology for calculating risk from technical issues.  Whilst the FAIR Institute open source downloads provide far more in-depth detail on how this is done, an overview is provided below.


Step 1: Retrieve the technical issues from security tests

The Uleska Platform retrieves the responses from security tools and parses the CVSS string of each issue.  This tells it how difficult the security issue would be to exploit, and the technical impact on data (can it be leaked, modified, etc.).  This provides the technical loss event measurement, and the threat event frequency (how often this technical issue could be attacked).
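As a rough sketch of this step, the CVSS exploitability metrics can be folded into a FAIR-style threat event frequency band. The weighting below is invented for illustration; the platform's real engine is more involved.

```python
def threat_event_frequency(metrics: dict) -> str:
    """Illustrative mapping from CVSS exploitability metrics to a
    threat event frequency band. The scoring weights are assumptions."""
    score = 0
    score += 2 if metrics.get("AV") == "N" else 0  # network-reachable attack vector
    score += 1 if metrics.get("AC") == "L" else 0  # low attack complexity
    score += 1 if metrics.get("UI") == "N" else 0  # no user interaction needed
    if score >= 3:
        return "High"
    if score >= 2:
        return "Medium"
    return "Low"
```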


Step 2: Combine technical information with the nature of the application

The information from step 1 is represented internally and combined with pre-configured application information to understand the size of any affected data set, the threat community size (the number of users), and the nature of those users (exposed to the internet, or internal).  This provides us with the threat community, threat capability, and resistance strength.
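In FAIR terms, threat capability and resistance strength feed the "vulnerability" factor: the probability that a threat event becomes a loss event. The point-estimate comparison below stands in for the distribution-based analysis FAIR actually prescribes, and is an assumption for illustration only.

```python
def vulnerability(threat_capability: float, resistance_strength: float) -> float:
    """Approximate FAIR vulnerability: the probability that the threat's
    capability exceeds the asset's resistance strength.

    Both inputs are percentile ranks (0-100); the linear clamp below is
    a simplification of FAIR's distribution comparison.
    """
    gap = threat_capability - resistance_strength
    return max(0.0, min(1.0, 0.5 + gap / 100.0))
```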


Step 3: Establish the impact related to the sensitive nature of the data

The assets affected by the particular technical vulnerability are then cross-referenced with the sensitive data in the system.  For example, could financial data be leaked or modified, could personal or health care data be exposed, and so on.  This identifies the asset class in play.
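A minimal sketch of this cross-referencing step: pick the most sensitive asset class present in the affected data set. The per-record loss figures are placeholder assumptions; in practice these would come from the platform's configuration.

```python
# Illustrative per-record loss magnitudes by asset class (placeholder values).
ASSET_CLASS_LOSS = {
    "financial": 150.0,
    "health": 250.0,
    "personal": 100.0,
    "other": 10.0,
}

def asset_class(data_types: list) -> str:
    """Return the costliest (most sensitive) asset class present in the data."""
    present = [t for t in data_types if t in ASSET_CLASS_LOSS] or ["other"]
    return max(present, key=lambda t: ASSET_CLASS_LOSS[t])
```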


Step 4: Use the FAIR Institute methodology to calculate the potential loss

Internal calculations then use logic derived from the FAIR Institute to determine the likelihood of the vulnerability being exploited, and the loss event frequency measurement.  This results in a Primary Loss measurement, which is then presented in the formats described earlier, such as monetary values, “Security Score”, High/Medium/Low, Red/Amber/Green, or the base technical CVSS.
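The FAIR-style roll-up in this step can be sketched as: risk equals loss event frequency multiplied by loss magnitude per event. The function below, and its simplified decomposition of loss magnitude into per-record and downtime costs, are illustrative assumptions rather than the platform's actual engine.

```python
def primary_loss(loss_event_frequency: float,
                 records_at_risk: int,
                 loss_per_record: float,
                 downtime_days: float,
                 downtime_cost_per_day: float) -> float:
    """FAIR-style primary loss estimate:
    risk = loss event frequency x loss magnitude per event.

    The split of loss magnitude into data loss plus downtime cost
    is a simplification for illustration.
    """
    loss_magnitude = (records_at_risk * loss_per_record
                      + downtime_days * downtime_cost_per_day)
    return loss_event_frequency * loss_magnitude
```

For example, an issue expected to be exploited once a decade (frequency 0.1/year), exposing 1,000 records at 100 per record and causing 5 days of downtime at 20,000 per day, yields an annualised primary loss of 20,000.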



Configurability

The inputs and considerations used in the Uleska Platform risk calculations are highly configurable.  Whilst defaults exist for every input, many aspects can be configured, including:

  • Maximum fine estimates
  • Response cost estimates
  • Loss event frequency modifiers (for Very High, High, Medium, Low, and Very Low)
  • Numbers of users of an application
  • Downtime and restoration cost estimates
  • Sensitive data types related to regulatory compliance
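A simple way to picture this configurability is a set of defaults merged with per-deployment overrides, as sketched below. The keys and values are illustrative assumptions, not the platform's real settings schema.

```python
# Default risk-calculation settings (placeholder values for illustration).
DEFAULTS = {
    "max_fine_estimate": 500_000.0,
    "response_cost_estimate": 50_000.0,
    "lef_modifiers": {"Very High": 2.0, "High": 1.5, "Medium": 1.0,
                      "Low": 0.5, "Very Low": 0.1},
    "downtime_cost_per_day": 10_000.0,
}

# A deployment that overrides only the maximum fine estimate;
# every unset input falls back to its default.
overrides = {"max_fine_estimate": 2_000_000.0}
settings = {**DEFAULTS, **overrides}
```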


Furthermore, risk scores for individual issues can be modified by the user, either through the user interface or the API.

System-wide, all risk values can be recalculated whenever system configuration changes occur, such as changes to the loss event frequency modifiers, maximum fines, or response costs.

