The Uleska Platform can automatically calculate the risk of security vulnerabilities using an implementation of the popular FAIR risk methodology, used by many in the industry, including NIST. This page describes how the Uleska Platform estimates risk using FAIR. For more detail on the FAIR risk model, see the FAIR Institute at https://www.fairinstitute.org/.
Note: the Uleska Platform can estimate risk based solely on the technical vulnerability and the default application context, giving you better prioritization out of the box. The documentation below describes (at a high level) how the risk model is applied, along with a number of inputs and configuration options you can use. However, don't feel you'll have the chore of setting risk inputs often: think of modifications to the application context and data sensitivity as refinements to make as you mature and improve your security and risk processes.
Risk models help teams focus on what is important, especially when you are working in an environment with more security issues than you have time to fix. These risk models move beyond the ‘Critical’, ‘High’, ‘Low’, etc. of older models that only look at the technical issue (more on this later). There are many advantages to using a risk model in your security program:
Risk models allow you to understand more about the impact of a security issue (how much it would impact your bottom line) and the likelihood (probability) of it happening (definitely going to happen, or only if the planets align). You’ll likely have seen this correlation between impact and likelihood in the following type of diagram that’s popular on social channels:
Cartoon by an unknown person describing risk probability and impact. (if you know who created this, let us know and we can credit them)
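The likelihood-times-impact idea behind that diagram can be sketched as a small scoring function. This is illustrative only, not the FAIR model the Uleska Platform actually uses (FAIR works with frequencies and loss ranges rather than a fixed grid):

```python
# Illustrative only: a classic 5x5 risk matrix, not the FAIR
# calculation used by the Uleska Platform.

def risk_rating(likelihood: int, impact: int) -> str:
    """Map 1-5 likelihood and impact scores to a qualitative rating."""
    score = likelihood * impact  # 1 (rare/minor) .. 25 (certain/severe)
    if score >= 15:
        return "Critical"
    if score >= 8:
        return "High"
    if score >= 4:
        return "Medium"
    return "Low"

print(risk_rating(5, 5))  # Critical -- certain and severe
print(risk_rating(1, 2))  # Low -- rare and minor
```

Notice how a severe issue that is very unlikely to happen can still rate lower than a moderate issue that is almost certain to occur.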
Let’s cover a good (if extreme) example. Let’s say we find two SQL injection vulnerabilities; one SQL injection in Project A, and one SQL injection in Project B. A technical tool finding such an issue would likely list it as ‘High’ or ‘Critical’, but are all SQL injection issues equal?
Let’s say Project A is an internal project, with 2-3 internal users, no sensitive data, and nothing that’s critical to the business. Project A is the web app the cleaners use to map which rooms are going to be cleaned each day (very important to meetings, but not mission sensitive).
With a small number of users, who are likely not skilled in the dark hacking arts, and an internal system, the likelihood (probability) of this SQL injection being exploited is low.
Given there’s no personal, financial, or other sensitive data to exploit, the impact of the flaw being exploited isn’t going to make the front pages.
Now let’s say Project B is the company's main product, with millions of external users, each storing passport numbers, financial details, personal info. This system going down for 5 days would likely put the company out of business.
The exposure of the system to potential attackers on the internet (anyone can sign up), who may be skilled, and may see you as a juicy target, could increase the chances of being hacked.
And with lots of interesting data for someone to steal, and days of downtime to restore the database if the SQL injection wrecked it, the impact of such a breach would be major to the business, not to mention the fines and other costs to customers. This impact would be similar to the TalkTalk hack from years ago.
As we see, two SQL injections (where the technical bug is exactly the same) carry very different likelihoods and impacts based on the surrounding nature of the project. This is why risk models build on the technical information of an issue to give a better estimation of the risk associated with it.
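The Project A versus Project B contrast can be sketched with FAIR's core idea: risk is how often a loss event happens multiplied by how much it costs. The figures below are invented for the example, and this is a point-value simplification; the real model works with ranges:

```python
# A hedged sketch of the FAIR idea: risk = loss event frequency x
# loss magnitude. All numbers below are invented for the example.

def annualised_risk(loss_event_frequency: float, loss_magnitude: float) -> float:
    """FAIR in one line: how often a loss happens x what it costs."""
    return loss_event_frequency * loss_magnitude

# Project A: internal room-cleaning app, 2-3 users, no sensitive data.
project_a = annualised_risk(loss_event_frequency=0.01, loss_magnitude=5_000)

# Project B: internet-facing flagship product holding financial data.
project_b = annualised_risk(loss_event_frequency=0.5, loss_magnitude=4_000_000)

print(f"Project A: ${project_a:,.0f}/year")  # $50/year
print(f"Project B: ${project_b:,.0f}/year")  # $2,000,000/year
```

Same SQL injection, wildly different risk, purely because of the surrounding context.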
Varying risk methodologies represent risk in different ways: some use currency values to represent potential financial loss, some use ranged numbers between 0 and 1000. While Uleska will support other options in the near future, our platform currently represents risk as dollar values.
Let’s get to the meat and bones: how does the Uleska Platform calculate risk? The system combines three main aspects:
1. The nature of the data processed by the project, such as personal info, financial, healthcare, IP, etc.
2. The context of the application, such as whether it runs internally or externally, how business-critical it is, downtime costs, etc.
3. The technical vulnerability returned by the security tools.
The application context and nature of the data (1 & 2 above) can be easily set when adding the application to the Uleska Platform (or in bulk via the API). All of these values and calculations are based on the FAIR methodology.
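For bulk updates, the shape of such a request might look like the sketch below. The endpoint path and field names here are hypothetical, invented for illustration; consult the platform's API reference for the real schema:

```python
# Hypothetical sketch only: the field names below are illustrative,
# not the documented Uleska API schema.
import json

applications = [
    {"name": "room-cleaning-app", "exposure": "internal",
     "business_criticality": "low", "downtime_cost_per_day": 0},
    {"name": "main-product", "exposure": "external",
     "business_criticality": "high", "downtime_cost_per_day": 500_000},
]

# Build the JSON body you would POST to a bulk-update endpoint.
payload = json.dumps({"applications": applications}, indent=2)
print(payload)
```

The point is simply that context (exposure, criticality, downtime cost) is data you can set per application, once, rather than per issue.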
Your application will work on data, and that can be what attackers are after. For an application, you can list the types of data involved so the FAIR calculation can estimate the risk to data leakage or destruction. If you have 1 million users, and handle financial and personal information, then that’s a lot of data that could be impacted.
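One simple way to picture how data sensitivity feeds impact is records multiplied by a per-record cost for each data type. The costs below are invented for illustration and not the platform's actual figures; tune any real values to your own risk program:

```python
# Illustrative only: rough per-record breach costs by data type.
# These numbers are invented for the example.
PER_RECORD_COST = {
    "personal": 150.0,
    "financial": 210.0,
    "healthcare": 430.0,
}

def data_impact(records: int, data_types: list[str]) -> float:
    """Estimate breach impact as records x summed per-record costs."""
    return records * sum(PER_RECORD_COST[t] for t in data_types)

# 1 million users holding financial and personal information:
print(f"${data_impact(1_000_000, ['personal', 'financial']):,.0f}")
# $360,000,000
```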
When you add a version you can specify some of your web pages or source code files, and the type of data involved. This bit could take some time to fill in, so we’re working hard to simplify this input. Our advice is to add resources based on the issues found by your scanning tools - see the ‘Resource’ attribute of the issues.
Also, note that the Uleska Platform will continue to give you a risk rating even without this data sensitivity being set.
Click on the edit button for your version, click on the ‘Web Pages’ menu item (also applies to source code files, containers, etc), and enter a resource:
When you add an application you can set a few items that help the platform estimate your risk. Defaults are already in place to get you started, or you can modify them to match your application context.
Just click ‘Add Application’ from the main screen and you can modify:
Information about the technical bug comes from the issues returned by the security tools you add to your Toolkits. The Uleska Platform uses the CVSS string to determine important aspects of the vulnerability and applies them to the risk model. You can learn more about CVSS at https://www.first.org/cvss/calculator/3.0 .
CVSS strings carry a lot of useful information about an issue, such as the attack vector, attack complexity, privileges required, and the impact on confidentiality, integrity, and availability.
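A CVSS v3 vector string is just a slash-separated list of metric/value pairs, so extracting those aspects is straightforward. A minimal parser, assuming the standard `CVSS:3.x/AV:N/AC:L/...` format from the FIRST specification:

```python
# A minimal CVSS v3 vector parser, assuming the standard
# "CVSS:3.1/AV:N/AC:L/..." format from the FIRST specification.
def parse_cvss_vector(vector: str) -> dict:
    """Split a CVSS v3 vector string into its metric/value pairs."""
    parts = vector.split("/")
    if not parts[0].startswith("CVSS:3"):
        raise ValueError(f"not a CVSS v3 vector: {vector!r}")
    return dict(part.split(":", 1) for part in parts[1:])

metrics = parse_cvss_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")
print(metrics["AV"])  # N  -> network attack vector
print(metrics["C"])   # H  -> high confidentiality impact
```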
There are three ways the Uleska Platform can learn the CVSS score of a bug returned by the security tools:
To round off this last point on the Uleska Platform learning your CVSS preferences, let’s look at how this works. If a security issue is found that has no CVSS or CWE value, its risk will be shown as “Not Calculated”. You can tell the Uleska Platform which CVSS you wish to assign to such an issue, and it will remember that mapping and assign the same CVSS score to all other instances of that issue when they are found (saving you from setting it every time).
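The remembered-mapping behaviour described above can be sketched as a simple lookup table keyed by issue type. This is an illustration of the idea, not the platform's actual code:

```python
# A sketch of the "remember my CVSS choice" behaviour: once you assign
# a CVSS vector to an issue type, every later instance of that issue
# gets the same score. Illustrative only, not Uleska's implementation.
class CvssMemory:
    def __init__(self):
        self._by_issue: dict[str, str] = {}

    def assign(self, issue_title: str, cvss_vector: str) -> None:
        """Remember the CVSS you chose for this kind of issue."""
        self._by_issue[issue_title] = cvss_vector

    def lookup(self, issue_title: str) -> str:
        """Return the remembered vector, or 'Not Calculated'."""
        return self._by_issue.get(issue_title, "Not Calculated")

memory = CvssMemory()
print(memory.lookup("Hard-coded credential"))  # Not Calculated
memory.assign("Hard-coded credential",
              "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N")
print(memory.lookup("Hard-coded credential"))  # the saved vector
```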
To set the CVSS for an issue, see the Setting ASVS or CVSS page.
The Uleska Platform comes with pre-set Value at Risk likelihood settings which should suit most companies. However, many will want to tune these values to match their own risk programs, and you can set many risk settings in the platform.
Some of these settings are technical inputs to the FAIR algorithms. They can be found under the “Configuration” menu option, in the “Value at Risk” tab. Changing these values affects all projects on your platform. They may be left at their defaults, or set once when establishing your security program.
The values that can be set include:
If you modify any of these values, you can click the ‘Save’ button to apply the new risk values to all new security scans and issues found. If instead you would like the new risk values to apply to all existing scan data as well as new scans, click the ‘Save & Recalculate’ button and wait for the green confirmation that all risk has been updated.