Identity risk is the most common entry point for enterprise breaches, and yet it remains one of the hardest risks to quantify in financial terms. Security leaders know their IGA backlogs are long, their access reviews are incomplete, and their SaaS and cloud estates are sprawling — but when the CFO asks "how much does this actually cost us," the answer is usually a shrug dressed up in a risk matrix.
FAIR® (Factor Analysis of Information Risk) was built to solve exactly this type of problem. It gives security teams a structured, probabilistic framework for translating qualitative risk narratives into financial loss estimates that executives can evaluate, fund, and act on. And when you apply FAIR to identity risk through an identity visibility and intelligence platform (IVIP), something important happens: risks that have been invisible to individual identity tools become modelable and measurable in a budget and business risk conversation.
What Is FAIR, and Why Does Identity Need It?
FAIR is an internationally recognized risk quantification standard maintained by the FAIR Institute. At its core, FAIR breaks risk into two components: Loss Event Frequency (how often a bad thing happens) and Loss Magnitude (how much it costs when it does). From those two variables, FAIR builds a probabilistic model typically expressed as an Annualized Loss Expectancy (ALE) dollar value. The ALE gives decision-makers a financial envelope to work with rather than a color on a heat map.
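The core relationship is simple enough to sketch in a few lines. The figures below are purely illustrative, not drawn from any real assessment:

```python
# Minimal sketch of FAIR's core relationship: risk expressed as
# Loss Event Frequency (events per year) times Loss Magnitude
# (dollars per event). All numbers here are hypothetical.

def annualized_loss_expectancy(loss_event_frequency: float,
                               loss_magnitude: float) -> float:
    """ALE = expected loss events per year * expected loss per event."""
    return loss_event_frequency * loss_magnitude

# A loss event expected once every four years, costing $2M when it occurs:
ale = annualized_loss_expectancy(loss_event_frequency=0.25,
                                 loss_magnitude=2_000_000)
print(f"${ale:,.0f} expected annualized loss")  # $500,000 expected annualized loss
```

In practice FAIR treats both inputs as calibrated ranges rather than point values, which is what makes the output an envelope instead of a single number.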
Most organizations still assess identity risk qualitatively. They assign "High / Medium / Low" ratings based on gut feel, compliance posture, or the most recent audit finding. These ratings are defensible in a checkbox sense but nearly useless for resource allocation. When two risks are both rated "High," you have no basis for choosing which to fund first. When you tell a board that identity risk is "elevated," you haven't said anything they can act on.
FAIR gives you the mechanism to say instead: "Our current identity posture creates an annualized expected loss of between $3.2M and $11.4M, with the primary driver being an overprivileged account takeover enabled by phishable credentials." That is a sentence a CFO can respond to.
What Does FAIR Actually Measure in an Identity Context?
FAIR's model works by decomposing risk into factors that can each be estimated independently. For identity risk, the relevant factors map cleanly onto the failure modes we observe in real-world breaches: orphaned accounts, missing phishing-resistant MFA, excessive privileges, dark-web-exposed passwords, and shadow accounts.
Loss Event Frequency has two sub-components: Threat Event Frequency (how often does an attacker attempt to exploit this?) and Vulnerability (how likely is the attempt to succeed?). In identity terms: how often are your credentials being targeted on dark web marketplaces and credential-stuffing campaigns, and how likely is a compromised credential to result in successful access given your current controls?
Loss Magnitude splits into primary losses (direct costs of the breach: investigation, notification, remediation, ransoms) and secondary losses (regulatory fines, litigation, customer churn, reputational damage). The IBM Cost of a Data Breach report gives you defensible anchor data for both categories — the 2024 global average sits at $4.88 million per incident, with credential-based breaches and those involving identity system compromise trending toward the higher end of the distribution.
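Because every one of these factors is an estimate with uncertainty, FAIR analyses are typically run as Monte Carlo simulations over calibrated ranges. Here is a hedged sketch of that decomposition; every (low, high, mode) range below is a hypothetical estimate, not a benchmark:

```python
import random

# Monte Carlo sketch of the FAIR decomposition:
#   Loss Event Frequency = Threat Event Frequency x Vulnerability
#   Loss Magnitude       = primary losses + secondary losses
# random.triangular takes (low, high, mode). All ranges are illustrative.

def simulate_ale(trials: int = 100_000, seed: int = 42) -> list:
    rng = random.Random(seed)
    annual_losses = []
    for _ in range(trials):
        tef = rng.triangular(10, 60, 25)         # attacker attempts per year
        vuln = rng.triangular(0.01, 0.10, 0.03)  # P(an attempt succeeds)
        loss_events = tef * vuln                 # Loss Event Frequency
        primary = rng.triangular(200_000, 5_000_000, 1_000_000)
        secondary = rng.triangular(0, 8_000_000, 500_000)
        annual_losses.append(loss_events * (primary + secondary))
    return annual_losses

losses = sorted(simulate_ale())
n = len(losses)
print(f"P10 ${losses[n // 10]:,.0f}  "
      f"median ${losses[n // 2]:,.0f}  "
      f"P90 ${losses[9 * n // 10]:,.0f}")
```

The percentile spread, not the mean alone, is what gives an executive the "between $X and $Y" envelope described above.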
Why Is Identity Risk So Hard to Quantify Without an IVIP View?
Here is where an IVIP lens like Axiad Mesh becomes essential, because FAIR requires you to accurately estimate vulnerability — and you cannot accurately estimate vulnerability for risks you cannot see.
Traditional IAM tools each govern a slice of the identity surface. IGA manages provisioned accounts. SSPM audits SaaS configurations. CIEM monitors cloud IAM permissions. The IdP enforces authentication policy. Each tool produces its own risk signals, scoped to its own domain. What none of them produce is a unified view of what a given human identity can access across the full environment — and that unified view is what FAIR actually needs.
Consider a departing employee whose Okta account has been disabled. IGA reports clean. SSPM shows no anomalies. CIEM has a backlog item about an orphaned cross-account IAM role, filed under non-human identity risk. GitHub shows org membership revoked. What no individual tool surfaces: the cross-account role trust was created by this employee, points to a personal AWS account, and still provides access to a production logging environment. The personal account authenticates via Gmail and a default non-corporate MFA. A personal OAuth token for the GitHub org is still active.
If you tried to run a FAIR model on "risk from terminated employee access" using only the signals your individual tools provide, you would dramatically underestimate both frequency and vulnerability — because the access pathways that remain open are invisible to your tooling. Your FAIR model is only as accurate as your visibility, and siloed tools produce siloed visibility.
An IVIP such as Axiad Mesh addresses this by building a correlated identity graph that maps every human identity to every account, risk factor, and access pathway attributable to it, across the full application and infrastructure estate. When you run FAIR against a Mesh-informed identity posture, the vulnerability inputs change materially, because you now know what you didn't know you didn't know.
How Do You Apply FAIR to Specific Identity Risk Scenarios?
Let's walk through three concrete FAIR applications using Mesh-surfaced identity risks as the inputs.
Scenario 1: Orphaned Accounts and Credential Exposure
Axiad Mesh discovers 847 accounts across the SaaS estate attributable to human identities no longer in the HR system: departed employees, contractors whose engagements ended, and M&A integrations where the acquired company's identities were never fully rationalized. Of these, 212 have credentials that appear in dark web breach datasets.
FAIR inputs: Threat Event Frequency is high — credential-stuffing campaigns continuously target exposed credentials found in breach datasets, and automated tooling makes the targeting cost near-zero for attackers. Vulnerability is elevated — 212 accounts have confirmed credential exposure with no MFA enforcement on a subset. Loss Magnitude anchors to the IBM benchmark, adjusted for your industry vertical and the sensitivity of data accessible through those accounts.
Output: An annualized expected loss that gives you the financial justification for investing in mitigation tools with a specific dollar figure attached to the risk of not acting.
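A back-of-the-envelope version of this scenario's model looks like the following. The 212-account count comes from the scenario above; the per-account compromise probabilities and per-event loss figures are hypothetical calibrated ranges, not real Axiad Mesh or IBM outputs:

```python
# Hypothetical FAIR range estimate for the orphaned-account scenario.
# Only the exposed-account count (212) comes from the scenario text;
# all probabilities and dollar figures are illustrative assumptions.

exposed_accounts = 212
p_compromise = (0.005, 0.03)        # annual P(success) per exposed account
loss_per_event = (150_000, 900_000) # loss per compromise, loosely anchored
                                    # to industry breach-cost benchmarks

ale_low = exposed_accounts * p_compromise[0] * loss_per_event[0]
ale_high = exposed_accounts * p_compromise[1] * loss_per_event[1]
print(f"ALE range: ${ale_low:,.0f} - ${ale_high:,.0f}")
# ALE range: $159,000 - $5,724,000
```

Even this crude interval is more actionable than a "High" rating: it bounds the budget conversation in dollars.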
Scenario 2: OAuth Permission Sprawl
Axiad Mesh discovers that your Salesforce environment has 340 active OAuth connections granted by individual users over time, many without IT awareness or approval. Of those, 67 connect to third-party applications that have had publicly disclosed security incidents in the past 18 months. Fourteen grant admin-level permissions to the Salesforce environment.
FAIR inputs: Threat Event Frequency is driven by the supply chain attack pattern where compromise of a third-party integration provider cascades into hundreds of downstream customer environments via OAuth token theft. Vulnerability is a function of whether those 14 admin-scoped tokens, if compromised, would be detected by any monitoring system currently in place. For most organizations, the answer is no. Loss Magnitude includes not just data exposure costs but regulatory exposure under contracts with customers whose data lives in Salesforce.
Output: A financial model that quantifies the risk of ungoverned OAuth sprawl in terms an executive can evaluate against the cost of an investment in OAuth governance. This turns a "we should probably do something about this" conversation into a "here is the risk-adjusted return on this investment" conversation.
Scenario 3: Forgotten Temporary Access and the High-Value Human Identity
Axiad Mesh audits access grants across the GitHub organization and surfaces what the IGA missed entirely: 23 accounts with active repository access were provisioned under temporary or event-based justifications such as hackathons, vendor evaluations, or short-term projects where the business context expired long ago but the accounts still exist. One belongs to a C-suite executive whose GitHub account has no MFA configured and whose credentials appear in a third-party breach dataset, meaning the password used at provisioning is circulating on criminal marketplaces.
FAIR inputs: Threat Event Frequency is elevated because executive identities are specifically targeted; they carry both elevated access and elevated social engineering value, and the exposed credentials in a breach dataset make automated credential stuffing near-certain rather than merely possible. Vulnerability is critically high: with no MFA, the exposed credential is a direct access key, and the forgotten GitHub grant — provisioned outside governed workflows and therefore invisible to IGA — has never been reviewed or flagged.
Loss Magnitude is where this scenario produces its most instructive FAIR output. Beyond primary breach costs, secondary losses for an executive identity include regulatory scrutiny, litigation exposure, and competitive harm from source code or roadmap access. In a breach like this, the Loss Magnitude calculation increases materially.
What Does FAIR Change About How You Prioritize Identity Investments?
The most practical output of FAIR applied to identity risk is not a single number — it's a prioritized investment case. FAIR lets you compare risks that previously lived in separate risk registers maintained by separate teams, using a common financial denominator.
"Orphaned account remediation," "OAuth governance," and "phishable credential remediation" have all been on the backlog for months. With qualitative ratings, they all show up as "High" and compete for the same budget. With FAIR-informed financial estimates, you can rank them by expected loss reduction per dollar invested — and you can do it in language the CISO can take to the CFO with a straight face.
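That ranking is a one-liner once the estimates exist. The ALE-reduction and cost figures below are hypothetical placeholders for illustration:

```python
# Illustrative prioritization: rank backlog items by expected annual
# loss (ALE) reduction per dollar invested. All figures are hypothetical.

initiatives = {
    "orphaned account remediation": {"ale_reduction": 2_400_000, "cost": 300_000},
    "OAuth governance":             {"ale_reduction": 1_100_000, "cost": 250_000},
    "phishing-resistant MFA":       {"ale_reduction": 3_900_000, "cost": 1_200_000},
}

ranked = sorted(initiatives.items(),
                key=lambda kv: kv[1]["ale_reduction"] / kv[1]["cost"],
                reverse=True)
for name, v in ranked:
    ratio = v["ale_reduction"] / v["cost"]
    print(f"{name}: ${ratio:.2f} of risk reduced per $1 invested")
```

Note that the biggest absolute risk reducer (MFA, in this hypothetical) is not the best per-dollar investment — exactly the distinction qualitative "High/High/High" ratings cannot make.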
The Axiad Mesh IVIP is what makes this possible with identity-layer specificity. FAIR applied to a siloed identity program produces estimates contaminated by unknown unknowns, since there will be risks your tooling can't see and therefore your model can't account for. FAIR applied to an IVIP produces estimates grounded in the actual attack surface, including the risks that live in the white space between your existing tools. Axiad Mesh comes with an embedded FAIR calculation and report, licensed by the FAIR Institute.
The question is no longer whether you can afford to address your identity risk. With FAIR, you can quantify what it costs not to.
Get a free FAIR report for your identity attack surface from Axiad Mesh here.
FAIR is maintained by the FAIR Institute (fairinstitute.org). The FAIR model is an ANSI/The Open Group standard (O-RA, O-RT).