Risk Management

How does the Uleska Platform's automated risk management work?

The Uleska Platform can automatically calculate the risk of security vulnerabilities using an implementation of the popular FAIR risk methodology, which is used widely in the industry, including by NIST. This page describes how the Uleska Platform estimates risk using FAIR. For more details on the FAIR risk model, see the FAIR Institute at https://www.fairinstitute.org/.

Note: the Uleska Platform can estimate risk based solely on the technical vulnerability and the default application context information supplied. It does this to provide better prioritization out of the box. The documentation below describes (at a high level) how the risk model is applied, along with a number of inputs and configuration options you can use. However, you won't have the chore of setting risk inputs often: think of modifications to the application context and data sensitivity as refinements to make as you mature and improve your security and risk processes.


Why Use a Risk Model?

Risk models help teams focus on what is important, especially when you are working in an environment with more security issues than you have time to fix. These models move beyond the ‘Critical’, ‘High’, ‘Low’, etc. ratings of older approaches that only look at the technical issue (more on this later). There are many advantages to using a risk model in your security program:

  • Many security regulations require you to take a ‘risk-based approach’ to manage vulnerabilities.
  • Risk estimation values for each issue give an implicit prioritization across many issues. A $450,000 risk value issue is a higher priority than a $1,000 risk issue.
  • Risk methodologies take in more information than just the technical bug, such as the environment of the application, number of users, business criticality of the system, and sensitivity of data involved.
  • Risk models can better show that work on security bugs is helping the business. When you have 100s or 1000s of security bugs in your backlog, fixing 20 issues worth $1,000 each will only see the metrics drop a bit. Spending the same time on 20 $500,000 risks will see the metrics drop a lot faster. This helps teams show they’re working on the right issues, leaving lower-risk issues until later, and showing stakeholders how the real risk to the business is dropping.

Risk models allow you to understand more about the impact of a security issue (how much it would impact your bottom line) and the likelihood (probability) of it happening (definitely going to happen, or only if the planets align). You’ll likely have seen this correlation between impact and likelihood in the following type of diagram that’s popular on social channels:


Cartoon by an unknown person describing risk probability and impact.  (if you know who created this, let us know and we can credit them)


Let’s cover a good (if extreme) example. Let’s say we find two SQL injection vulnerabilities; one SQL injection in Project A, and one SQL injection in Project B. Likely a technical tool finding such an issue would list it as ‘High’ or ‘Critical’, but are all SQL injection issues equal?

Let’s say Project A is an internal project, with 2-3 internal users, no sensitive data, and nothing that’s critical to the business. Project A is the web app the cleaners use to map which rooms are going to be cleaned each day (very important to meetings, but not mission critical).

With a small number of users, who are likely not skilled in the dark hacking arts, and an internal system, the likelihood (probability) of this SQL injection being exploited is low.

Given there’s no personal, financial, or other sensitive data to exploit, the impact of the flaw being breached isn’t going to make the front pages.

Now let’s say Project B is the company's main product, with millions of external users, each storing passport numbers, financial details, and personal info. If this system went down for 5 days, it would likely put the company out of business.

The exposure of the system to potential attackers on the internet (anyone can sign up), who may be skilled, and may see you as a juicy target, could increase the chances of being hacked.

And with lots of interesting data for someone to steal, and days of downtime to get the system back up if the SQL injection wrecked the database, the impact of such a breach would be major for the business, not to mention the fines and other costs to customers. This impact would be similar to the TalkTalk hack from years ago.

As we see, two SQL injections (where the technical bug is exactly the same) can have very different likelihoods and impacts based on the surrounding nature of the project. This is why risk models elaborate on the technical information of the issues and use their model to give a better estimation of the associated risk.


Why am I Seeing Dollar Signs?

Varying risk methodologies represent the risk in different ways - some use currency values to represent a potential financial risk, some use ranged numbers between 0 and 1000. While Uleska will support other options in the near future, our platform currently represents risks as dollar values for a few reasons:

  • Risk models can help software teams discuss the relevance of issues, for example in sprint meetings. Instead of discussing abstracts such as XSS or CSRF issues, or 2 ‘Criticals’ and 5 ‘Medium’ issues, the discussion can be around a $150,000 risk to the application/company, and how that relates to the other items being scheduled for the next sprint.
  • Money values can mean very different things to different companies - everyone has their own risk appetite. For a start-up, a $200,000 risk could put them out of business before they even get going - they’d work to fix that ASAP. For a tier-1 bank, a $200,000 risk may not be the biggest risk identified that hour, and would likely get added to the backlog for someone to come along and fix later that year.

How does the Uleska Platform Automatically Calculate Risk?

Let’s get to the meat and bones - how does the Uleska Platform calculate risk? The system combines three main aspects:

  1. The nature of the data processed by the project, such as personal info, financial, healthcare, IP, etc.

  2. The context of the application, such as if it’s run internally or externally, how business-critical it is, downtime costs, etc.

  3. The technical vulnerability returned by the security tools.


The application context and nature of the data (1 & 2 above) can be easily set when adding the application to the Uleska Platform (or in bulk via the API). All of these values and calculations are based on the FAIR methodology.
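As a rough illustration of how these aspects combine: at its core, FAIR multiplies a Loss Event Frequency (how often a loss is expected per year) by a Loss Magnitude (the cost per event). The sketch below is a simplified, hypothetical illustration of that core idea, not the Uleska Platform's actual implementation — all figures and function names are assumptions for demonstration:

```python
# Simplified FAIR-style sketch (illustrative only -- not Uleska's actual algorithm).

def loss_magnitude(data_value, downtime_cost_per_day, downtime_days, restore_cost):
    """Rough loss per event ($): value of data at risk, plus outage and restore costs."""
    return data_value + downtime_cost_per_day * downtime_days + restore_cost

def annualized_risk(loss_event_frequency, loss_magnitude_per_event):
    """FAIR core: risk ($/year) = Loss Event Frequency (events/year) x Loss Magnitude ($/event)."""
    return loss_event_frequency * loss_magnitude_per_event

# Example: an externally exposed issue expected to be exploited once every
# 2 years (0.5 events/year) on a system handling sensitive data.
lm = loss_magnitude(data_value=200_000, downtime_cost_per_day=100_000,
                    downtime_days=1, restore_cost=50_000)
risk = annualized_risk(0.5, lm)   # 175000.0
```

Two issues with the same technical bug but different frequencies or magnitudes (as in the Project A / Project B example above) produce very different dollar values here, which is the whole point of the model.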


1) Data Sensitivity

Your application will work on data, and that can be what attackers are after. For an application, you can list the types of data involved so the FAIR calculation can estimate the risk to data leakage or destruction. If you have 1 million users, and handle financial and personal information, then that’s a lot of data that could be impacted.

When you add a version you can specify some of your web pages or source code files, and the type of data involved. This bit could take some time to fill in so we’re working hard to simplify this input. Our advice is to add resources based on the issues found by your scanning tools - see the ‘Resource’ attribute of the issues.

Also, note that the Uleska Platform will continue to give you a risk rating even without this data sensitivity being set.

Click on the edit button for your version, click on the ‘Web Pages’ menu item (also applies to source code files, containers, etc), and enter a resource:

  • Path - path to a dynamic URL, source code file, library, container, etc. (it can be best to take this from the issues raised)
  • Description - a short description of the resource
  • Affect Assets - use the drop-down to select the type of data affected by this path/application:
    • Public
    • Personal (PII)
    • Finance
    • Healthcare
    • IP (and suggested value of any IP)
  • Click ‘Save WebPage’ to save that resource, and add more if needed.
  • Be sure to click ‘Save’ or ‘Save & Continue’ after adding web page resources to save them to your project.
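For teams adding resources in bulk via the API, a resource entry amounts to a path, a description, and the affected asset types. The snippet below sketches that shape as a plain data structure; the field names and path are hypothetical, so consult the Uleska API documentation for the real schema:

```python
# Hypothetical resource entry (field names and path are illustrative only --
# check the real Uleska API schema before use).
resource = {
    "path": "/api/v1/payments",               # dynamic URL, source file, library, etc.
    "description": "Payment details endpoint",
    "affected_assets": ["Personal (PII)", "Finance"],   # from the asset types above
}

# A quick sanity check that only the asset types listed above are used:
KNOWN_ASSETS = {"Public", "Personal (PII)", "Finance", "Healthcare", "IP"}
assert set(resource["affected_assets"]) <= KNOWN_ASSETS
```

As suggested above, the ‘Resource’ attribute of issues found by your scanning tools is a good source for the `path` values.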


2) Application Context

When you add an application you can set a few items that help the platform estimate your risk. There are defaults already in place to help you, or you can modify them to match your application context.

Just click ‘Add Application’ from the main screen and you can modify:

  • Number of users of the application (even a ballpark figure)
  • Set the application as internal/external or mark the authentication status
  • Cost of the application being down per day, i.e. is this system critical to your business and would cost $100,000 in lost revenue per day if down, or just $5?
  • Cost to restore the application, if needed. As part of the FAIR model calculation: if the entire system needed to be restored from backup, what would be the potential cost in IT resources, plus the risk from lost data?
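To make the downtime and restore inputs concrete, here is a hypothetical worked example of how they could feed the impact side of the calculation (the exact formula used by the platform may differ):

```python
def outage_impact(downtime_cost_per_day, downtime_days, restore_cost):
    """Illustrative impact ($) of an outage: lost revenue plus restoration cost."""
    return downtime_cost_per_day * downtime_days + restore_cost

# A business-critical system losing $100,000/day in revenue for 5 days,
# plus $40,000 in IT resources to restore from backup:
impact = outage_impact(100_000, 5, 40_000)   # 540000
```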


3) Technical Vulnerability

Information about the technical bug comes from the issues returned by the security tools you add to your Toolkits. The Uleska Platform uses the CVSS string to determine important aspects of the vulnerability and applies them to the risk model. You can learn more about CVSS at https://www.first.org/cvss/calculator/3.0.

CVSS strings have great information about the issue, such as:

  • Attack Vector: Can it be attacked from the internet, on the local OS, or does someone need physical access to the computer? This informs part of the likelihood of attack.
  • Attack Complexity: Is it simple to exploit, are there automated scripts to reproduce it, or do you need to be an expert to breach it? Again, this leads to likelihood.
  • Privileges Required: What level of privileges do you need to exploit it?
  • User Interaction: Does another user need to be involved, or be doing something, for it to be exploited?
  • Scope: Does breaching this vulnerability change the scope of what you can attack? I.e. by attacking this vulnerability in one system, does it let you into others? Leans to impact.
  • Confidentiality: Would exploiting the bug leak sensitive information? Leans to impact.
  • Integrity: Would exploiting the bug allow you to change or destroy sensitive information? Leans to impact.
  • Availability: Would exploiting the bug block the application, or deny access to information or functionality? Leans to impact.
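The metrics above are encoded in a compact CVSS v3 vector string such as `CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H` (the standard format from FIRST). A small sketch of splitting such a vector into its components — the vector format is standard, but this parser is just illustrative:

```python
def parse_cvss_vector(vector):
    """Split a CVSS v3 vector string into a dict of metric abbreviation -> value."""
    prefix, *metrics = vector.split("/")
    if not prefix.startswith("CVSS:3"):
        raise ValueError("not a CVSS v3 vector: " + vector)
    return dict(m.split(":", 1) for m in metrics)

metrics = parse_cvss_vector("CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")
# metrics["AV"] == "N"   (Network attack vector -> reachable from the internet)
# metrics["C"]  == "H"   (High confidentiality impact)
```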

There are 3 ways the Uleska Platform can learn the CVSS score of a bug returned by the security tools:

  1. The security tool returns CVSS scores in its responses to the Uleska Platform. In this case we can use the score supplied by the security tool.
  2. Not all security tools return CVSS scores (in fact most don’t). If the security tool returns a CWE category, then Uleska can map that to a CVSS score and use that instead.
  3. Many security tools return neither a CVSS score, nor a CWE. In this case the Uleska Platform can learn the CVSS you want to apply to the issue type, and use it going forward.
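The three-step fallback above can be sketched as a simple resolution chain. The function and field names here are hypothetical, for illustration only — they are not the platform's actual internals:

```python
def resolve_cvss(issue, cwe_to_cvss, learned_scores):
    """Pick a CVSS score: tool-supplied, then CWE-mapped, then previously learned."""
    if issue.get("cvss") is not None:        # 1. the tool returned a CVSS score
        return issue["cvss"]
    if issue.get("cwe") in cwe_to_cvss:      # 2. map the CWE category to a CVSS score
        return cwe_to_cvss[issue["cwe"]]
    # 3. a score previously taught for this issue type; None -> "Not Calculated"
    return learned_scores.get(issue.get("type"))

cwe_map = {"CWE-89": 9.8}                    # e.g. SQL injection
score = resolve_cvss({"cwe": "CWE-89"}, cwe_map, {})   # 9.8
```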

To round off this last point on the Uleska Platform learning your CVSS preferences, let’s look at how this works. If a security issue is found that has neither a CVSS nor a CWE value, its risk will be shown as “Not Calculated”. You can tell the Uleska Platform which CVSS you wish to assign to issues such as this, and it will remember that mapping and assign that CVSS score to all other instances of that issue when they are found (saving you setting it every time).

To set the CVSS for an issue, see the Setting ASVS or CVSS page.


Modifying Risk Likelihood

The Uleska Platform comes with pre-set Value at Risk likelihood settings which should suit most companies. Many will want to adjust these values to match their own risk programs, and the platform lets you set many of them.

Some of these settings are technical to the FAIR algorithms. They can be found under the “Configuration” menu option, in the “Value at Risk” tab. Setting these values will affect all projects on your platform. They may be left at defaults, or set once initially for your security program.

The values that can be set include:

  • Max Fine - sets the maximum fine you believe could be applied to your organization based on your territory, nature of business, and regulations. E.g. what would 4% of turnover be for a max GDPR fine? What is the largest fine for financial institutions similar to yours?
  • Reputation value per user - estimation of the cost each user could suffer if a breach occurred (this is 0 by default).
  • Response cost per day - regardless of the type of breach or bug encountered, how much does it typically cost your business to fix technical bugs (how much do your devs cost)?
  • Single response cost - any breach or bug can result in hours of meetings, PR, social media apologies, etc. What cost do you put on these actions?
  • Range Percentage - many risk professionals prefer risk to be estimated in ranges instead of median values, e.g. this risk is between $10,000 and $100,000. Set this to provide a range around the median value (e.g. 30 for 30%).
  • Loss Event Frequency Modifiers - how high a risk is High? The Risk methodology we use maps issues to a frequency estimation, “Very High” through to “Very Low”, but what exactly does this mean? Is an issue that could occur with a “Very High” frequency going to happen every year, or once every 5 years? Setting the “Very High” to ‘0.2’ (for example) indicates you think an issue that our FAIR algorithm has determined as Very High would happen once every 5 years.
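Two of these settings lend themselves to a quick worked example: Range Percentage widens a median risk value into a low/high band, and a Loss Event Frequency modifier of 0.2 events per year corresponds to one event every 5 years. A sketch, illustrative only:

```python
def risk_range(median, range_percentage):
    """Return (low, high) bounds around a median risk value."""
    delta = median * range_percentage / 100.0
    return median - delta, median + delta

# A $100,000 median risk with Range Percentage set to 30:
low, high = risk_range(100_000, 30)    # (70000.0, 130000.0)

# Loss Event Frequency modifier: events/year -> years between events.
lef_very_high = 0.2
years_between = 1 / lef_very_high      # 5.0 -> "Very High" = once every 5 years
```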

If you modify any of these values, click the ‘Save’ button to apply the new risk values to all new security scans and issues found. If, instead, you would like the new risk values to apply to all existing scan data as well as new scans, click the ‘Save & Recalculate’ button and wait for the green confirmation that all risk has been updated.