FAIR

Also known as: Factor Analysis of Information Risk

Quantitative risk-analysis methodology that expresses cyber risk as financial loss exposure rather than ordinal severity scores, by decomposing it into loss event frequency and loss magnitude.

Written by Askara Solutions editorial team

FAIR is the framework that broke cyber risk out of red, amber, green. It treats a risk scenario as a probability distribution rather than a colour, and expresses the answer in the same units the rest of the business uses (euros or dollars per year).

The mechanics are deliberately simple. Loss is decomposed into loss event frequency (how often the bad thing happens) and loss magnitude (how much it costs when it does). Each factor is further decomposed: frequency into threat event frequency and vulnerability (the probability that an attempt succeeds); magnitude into primary and secondary loss. Each leaf node is a three-point estimate (minimum, most likely, maximum) supplied by people who know the business, and a Monte Carlo simulation rolls the distributions up.
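The rollup can be sketched in a few lines. This is a minimal illustration, not a full FAIR implementation: it samples each three-point estimate from a triangular distribution (a simple stand-in for the PERT/beta distributions many FAIR tools use), multiplies frequency by magnitude per trial, and the scenario numbers are entirely hypothetical.

```python
import random

def simulate_ale(lef_est, lm_est, trials=10_000, seed=42):
    """Monte Carlo rollup: each trial draws a loss event frequency and a
    loss magnitude from their three-point estimates (min, most likely, max)
    and multiplies them, yielding a distribution of plausible annual
    losses rather than a single number."""
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        # random.triangular takes (low, high, mode)
        lef = rng.triangular(lef_est[0], lef_est[2], lef_est[1])
        lm = rng.triangular(lm_est[0], lm_est[2], lm_est[1])
        losses.append(lef * lm)
    return sorted(losses)

# Hypothetical scenario: ransomware on a mid-sized file server.
losses = simulate_ale(lef_est=(0.1, 0.5, 2.0),           # events per year
                      lm_est=(10_000, 50_000, 250_000))  # euros per event
print(f"median annual loss: {losses[len(losses) // 2]:,.0f}")
print(f"95th percentile:    {losses[int(len(losses) * 0.95)]:,.0f}")
```

Reporting the median alongside a tail percentile is the point of the exercise: the range, not a single expected value, is what goes in front of the board.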

What you get is annual loss expectancy expressed as a range, not a heatmap colour. That changes the conversations you can have. A board can compare risk to control investment; an underwriter can assess your insurance ceiling; a procurement team can build risk-based supplier requirements. None of those conversations works when risk is expressed only as "high" or "medium". The Risk Investigation Agent uses FAIR (specifically the Open FAIR variant) as its quantification engine.

Related terms

  • Open FAIR

    Open standard governed by The Open Group that codifies the FAIR risk-quantification methodology as a reference taxonomy. The Open FAIR Body of Knowledge is the canonical specification.

  • Annual Loss Expectancy

    The expected annual cost of a risk scenario in financial terms, calculated as Loss Event Frequency multiplied by Loss Magnitude, expressed as a probability distribution rather than a point estimate.

  • Threat Event Frequency

    The expected number of times per year that a threat actor will attempt the action that could lead to a loss event, before any consideration of whether the attempt succeeds.

  • Loss Event Frequency

    The expected number of times per year that a threat actor's action will succeed and produce a loss, calculated as Threat Event Frequency multiplied by Vulnerability (the probability of success).

  • Monte Carlo Simulation

    Computational technique that samples input variables from their probability distributions and aggregates the outcomes, producing a distribution of plausible results rather than a point estimate.

  • Risk Register

    The single source of truth recording every identified risk, its assessment, the control treatment chosen, the owner, and the review date.
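The arithmetic linking the terms above is straightforward to state directly. A minimal sketch, with entirely hypothetical numbers:

```python
def loss_event_frequency(tef: float, vulnerability: float) -> float:
    """LEF = TEF x Vulnerability: attempted threat events per year
    times the probability that an attempt succeeds."""
    return tef * vulnerability

# Hypothetical: 12 credential-phishing attempts a year, each with a
# 25% chance of getting past the controls.
lef = loss_event_frequency(tef=12, vulnerability=0.25)
print(lef)  # → 3.0 loss events per year
```

In practice each input is a distribution, not a point value, so this multiplication happens inside the Monte Carlo simulation rather than once.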

Authoritative sources

Where to read more.