Loss Event Frequency is the answer to "how many times per year do we expect this scenario to actually cost us money?" It is the product of Threat Event Frequency (how often the bad thing is attempted) and Vulnerability (the probability that an attempt succeeds against the controls in place).
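Restated as a formula, LEF = TEF × Vulnerability. A minimal sketch of that calculation in Python, where the scenario numbers are illustrative placeholders rather than figures from any real analysis:

```python
# A minimal sketch of the LEF calculation, using point estimates.
# The inputs (tef, vulnerability) are illustrative, not real scenario data.

def loss_event_frequency(tef: float, vulnerability: float) -> float:
    """LEF = Threat Event Frequency x Vulnerability (probability an attempt succeeds)."""
    return tef * vulnerability

# e.g. 8 credible attempts per year, 5% chance any one attempt gets past controls
lef = loss_event_frequency(tef=8.0, vulnerability=0.05)
print(f"Expected loss events per year: {lef:.2f}")  # 0.40
```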
LEF is where the value of a control investment becomes visible. Spending money to harden a system reduces Vulnerability, and because LEF is the product of TEF and Vulnerability, a 20-point reduction in Vulnerability (say, from 50% to 30%) cuts LEF by 40%, and that reduction feeds directly into Annual Loss Expectancy. That is the chain that lets a CISO say "this 200,000 euro investment cuts the median ALE for our top scenario by 1.4 million" rather than "this investment improves our security posture".
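The chain can be shown end to end with point estimates (a full FAIR analysis would put distributions on these inputs and report a median ALE). In this sketch every figure (TEF, the two Vulnerability values, loss magnitude, control cost) is a made-up placeholder, not the numbers quoted above:

```python
# A hedged sketch of the control-investment chain: Vulnerability -> LEF -> ALE.
# All figures below are hypothetical placeholders for illustration only.

tef = 8.0                   # threat events per year
vuln_before = 0.50          # probability an attempt succeeds today
vuln_after = 0.30           # projected probability after the control is deployed
loss_magnitude = 250_000.0  # expected cost per loss event, in euros
control_cost = 200_000.0    # annualised cost of the control, in euros

ale_before = tef * vuln_before * loss_magnitude
ale_after = tef * vuln_after * loss_magnitude

print(f"ALE before control: {ale_before:,.0f} EUR")  # 1,000,000 EUR
print(f"ALE after control:  {ale_after:,.0f} EUR")   # 600,000 EUR
print(f"ALE reduction: {ale_before - ale_after:,.0f} EUR "
      f"for {control_cost:,.0f} EUR spent")
```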
LEF is also the right answer to "how often should this actually be happening to us?" If the analysis says LEF is 0.4 per year for a given scenario, you should expect a loss event roughly every two and a half years. If the observed rate is materially higher, the model is wrong (and the right thing to do is recalibrate, not abandon it). If it is materially lower, controls are working better than projected and the analysis is conservative.
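One way to make "materially higher" concrete is to treat loss events as a Poisson process with rate LEF and ask how surprising the observed count is under the model. That modelling choice and the counts below are assumptions for illustration:

```python
import math

# Sketch of the "is the observed rate materially higher?" check, treating loss
# events as a Poisson process with rate LEF. Counts and rates are illustrative.

def prob_at_least(k: int, lef: float, years: float) -> float:
    """P(observing >= k loss events over `years`) if the true rate is `lef` per year."""
    lam = lef * years
    p_less = sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))
    return 1.0 - p_less

# Model says LEF = 0.4/year; suppose we have seen 3 loss events in the last 2 years.
p = prob_at_least(3, lef=0.4, years=2.0)
print(f"Chance of >= 3 events in 2 years under the model: {p:.1%}")  # ~4.7%
# A small probability here suggests the model understates LEF and needs recalibration.
```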



