Threat Event Frequency

Also known as: TEF

The expected number of times per year that a threat actor will attempt the action that could lead to a loss event, before any consideration of whether the attempt succeeds.

Written by Askara Solutions editorial team · Updated

Threat Event Frequency is the upstream frequency factor in a FAIR analysis. It estimates how often a particular threat actor type tries to do the thing that could cost you money. A privileged-user mistake might happen 50 times a year. A targeted phishing campaign might happen twice. A nation-state wiper attack might happen once every 50 years (a TEF of 0.02).

TEF is intentionally separated from whether the attempt succeeds. That separation is the difference between FAIR and a heatmap. A heatmap collapses "frequent attempts that almost always fail" into the same bucket as "rare attempts that almost always succeed", and the resulting risk score loses the information you need to decide whether to invest in prevention or in resilience.
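The contrast above can be made concrete with some arithmetic. The numbers below are invented for illustration only; the point is that two scenarios a heatmap would likely score the same decompose very differently once attempt frequency and success probability are kept apart:

```python
# Hypothetical illustration: all numbers are made up for the example.
scenarios = {
    # frequent attempts that almost always fail (e.g. commodity phishing)
    "frequent_low_success": {"tef": 50.0, "vulnerability": 0.02},
    # rare attempts that almost always succeed (e.g. targeted insider action)
    "rare_high_success": {"tef": 1.0, "vulnerability": 0.90},
}

for name, s in scenarios.items():
    # Loss Event Frequency = TEF x Vulnerability (probability of success)
    lef = s["tef"] * s["vulnerability"]
    print(f"{name}: LEF = {lef:.2f} loss events/year")
```

The two LEFs come out nearly identical (1.0 and 0.9 loss events per year), yet the first scenario argues for keeping the success rate low while the second argues for resilience and detection, which is exactly the information a single heatmap score discards.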

Estimating TEF in a workshop is one of the steps the Risk Investigation Agent guides analysts through directly. The worst answer is a precise point estimate. The right answer is a three-point estimate (minimum plausible, most likely, maximum plausible) drawn from the people who handle the relevant systems. Industry threat intelligence reports help calibrate, but the estimate should reflect your environment, not a Verizon DBIR average.
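A three-point estimate becomes usable once it is treated as a distribution to sample from rather than three disconnected numbers. A minimal sketch, using a triangular distribution from the Python standard library as a simple stand-in (FAIR tooling often uses a PERT distribution instead), with a hypothetical workshop estimate:

```python
import random
import statistics

random.seed(7)

# Hypothetical workshop estimate for a targeted phishing scenario:
# minimum plausible 1, most likely 2, maximum plausible 6 attempts/year.
tef_min, tef_mode, tef_max = 1.0, 2.0, 6.0

# Sample the three-point estimate as a triangular distribution.
samples = [random.triangular(tef_min, tef_max, tef_mode) for _ in range(100_000)]

print(f"mean TEF ~ {statistics.mean(samples):.2f} attempts/year")
print(f"90th percentile ~ {sorted(samples)[int(0.9 * len(samples))]:.2f}")
```

Note that the mean lands near 3, noticeably above the "most likely" value of 2, because the estimate is skewed toward the high end. That asymmetry is part of what a point estimate throws away.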

Related terms

  • FAIR

    Quantitative risk-analysis methodology that expresses cyber risk as financial loss exposure rather than ordinal severity scores, by decomposing it into loss event frequency and loss magnitude.

  • Loss Event Frequency

    The expected number of times per year that a threat actor's action will succeed and produce a loss, calculated as Threat Event Frequency multiplied by Vulnerability (the probability of success).

  • Annual Loss Expectancy

    The expected annual cost of a risk scenario in financial terms, calculated as Loss Event Frequency multiplied by Loss Magnitude, expressed as a probability distribution rather than a point estimate.

  • Monte Carlo Simulation

    Computational technique that samples input variables from their probability distributions and aggregates the outcomes, producing a distribution of plausible results rather than a point estimate.
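The relationships in the entries above chain together: TEF times Vulnerability gives Loss Event Frequency, and Loss Event Frequency times Loss Magnitude gives Annual Loss Expectancy, evaluated via Monte Carlo. A minimal end-to-end sketch with entirely hypothetical inputs (triangular distributions as a simple stand-in for whatever distributions a real analysis would fit):

```python
import random

random.seed(42)

N = 50_000
ale_samples = []
for _ in range(N):
    tef = random.triangular(1.0, 6.0, 2.0)             # attempts/year (three-point estimate)
    vulnerability = random.triangular(0.1, 0.6, 0.3)   # probability an attempt succeeds
    lef = tef * vulnerability                          # Loss Event Frequency
    loss_magnitude = random.triangular(50_000, 2_000_000, 250_000)  # $ per loss event
    ale_samples.append(lef * loss_magnitude)           # one Annual Loss Expectancy draw

ale_samples.sort()
print(f"median ALE ~ ${ale_samples[N // 2]:,.0f}")
print(f"95th percentile ~ ${ale_samples[int(0.95 * N)]:,.0f}")
```

The output is a distribution of annual loss, reported as percentiles rather than a single number, which is the form the Annual Loss Expectancy entry above describes.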

Authoritative sources

Where to read more.