Vulnerability is the half of the equation that controls can actually move. The frequency of attempted attacks is largely outside the organisation's control; the probability that an attempt succeeds is shaped almost entirely by the controls in place. That is why most security spend ends up reducing vulnerability rather than chasing threat actors, and why the FAIR decomposition makes vulnerability a separate factor in the loss-event-frequency calculation.
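The decomposition itself is a single multiplication, and it is worth keeping in view when reasoning about where a control change lands. The sketch below is illustrative only; the function name and the figures are assumptions, not drawn from any particular analysis.

```python
# Minimal sketch of the FAIR loss-event-frequency decomposition (illustrative values).
def loss_event_frequency(threat_event_frequency: float, vulnerability: float) -> float:
    """LEF = TEF x Vuln: attempted events per year times the probability an attempt succeeds."""
    assert 0.0 <= vulnerability <= 1.0, "vulnerability is a probability"
    return threat_event_frequency * vulnerability

# A control change that halves vulnerability halves LEF, even though TEF is unchanged.
print(loss_event_frequency(threat_event_frequency=12.0, vulnerability=0.25))  # 3.0 loss events/year
```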
The word is overloaded in everyday use. In a technical-engineering context, a vulnerability is a specific weakness in software or configuration, often with a CVE identifier and a CVSS score attached. In a risk-analysis context, vulnerability is the broader susceptibility of an asset or control to a threat action, expressed as a probability between zero and one and informed by the realistic capability of the threat actor. The two senses meet in practice: a hardening decision that closes a CVE feeds into a lower vulnerability estimate, which in turn lowers loss event frequency in the affected scenarios.
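One way to make the risk-analysis sense concrete is the comparison FAIR itself uses: vulnerability is the probability that the threat actor's capability exceeds the resistance the control provides. The sketch below simulates that comparison; the beta distributions and their parameters are illustrative assumptions, not calibrated estimates.

```python
# Hedged sketch: vulnerability as P(threat capability > resistance strength),
# estimated by sampling assumed distributions for each side of the comparison.
import random

def estimate_vulnerability(n_trials: int = 100_000) -> float:
    """Fraction of simulated attempts in which threat capability beats resistance strength."""
    successes = 0
    for _ in range(n_trials):
        threat_capability = random.betavariate(4, 2)    # assumed: a moderately capable actor
        resistance_strength = random.betavariate(5, 3)  # assumed: control strength after hardening
        if threat_capability > resistance_strength:
            successes += 1
    return successes / n_trials

print(round(estimate_vulnerability(), 3))
```

Tightening the resistance-strength distribution, for example after closing a CVE, lowers the estimate that falls out of this comparison, which is the mechanism by which a hardening decision propagates into loss event frequency.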
The discipline of estimating vulnerability is where many risk programmes thin out. The temptation is to default to "we have controls, so vulnerability is low" without testing the assumption. A more honest approach calibrates the estimate against incident history, red-team exercises, and realistic threat capability rather than against the design intent of the controls. The Risk Investigation Agent maintains vulnerability estimates alongside the controls that drive them, so a change in the control regime is reflected in the analysis rather than reasoned about separately.
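That kind of calibration can be made explicit with a simple evidence update. The sketch below assumes a beta-binomial treatment of red-team outcomes; the prior and the counts are invented for illustration and are not figures from the text.

```python
# Hedged sketch of calibrating a vulnerability estimate against observed evidence
# (red-team attempts, incident history) using a beta-binomial update.
def updated_vulnerability(prior_alpha: float, prior_beta: float,
                          successes: int, attempts: int) -> float:
    """Posterior mean of vulnerability after `successes` out of `attempts` succeeded."""
    return (prior_alpha + successes) / (prior_alpha + prior_beta + attempts)

# Design-intent prior says roughly 10% of attempts should succeed; a red-team
# exercise then lands 3 successes in 8 attempts.
print(round(updated_vulnerability(prior_alpha=1, prior_beta=9, successes=3, attempts=8), 3))  # ~0.222
# A posterior well above the prior is the signal that the control regime is
# underperforming its design intent and the vulnerability estimate should move.
```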



