Proving who’s real in the age of AI fraud

AI-powered fraud is happening at a rate that would have been hard to predict a few years ago. Deepfakes, fully fleshed-out synthetic identities, and model injection attacks are no longer speculative. They are actively undermining trust in digital systems.
When the UK’s Online Safety Act went live a few months ago, the headlines were full of examples of people bypassing the checks with everything from obviously fake IDs to game mods. Systems that took years and billions to build, defeated by teens in minutes.
For C-level leaders, risk and compliance officers, and identity architects, the defining challenge of this decade is simple to say and hard to solve: how do you prove who is real?
The rise of AI-enabled fraud
Fraud has always followed the money, but artificial intelligence has drastically transformed the economics in the attacker’s favour. Generative models now allow attackers to clone voices in seconds, fabricate documents indistinguishable from the originals, and launch deepfake video calls that fool trained professionals.
In one widely reported case, fraudsters used a video deepfake of a CFO to trick staff into transferring $25 million.
At the same time, synthetic identity fraud (the use of made-up identities, sometimes cobbled together from fragments of real ones) is the fastest-growing financial crime, costing lenders billions annually.
AI is being deployed on the defender’s side too, and that has opened up an entirely new category of threat: AI injection attacks. Where deepfakes mimic humans, injection attacks manipulate the AI models themselves.
Fraudsters try to poison training data, embed hidden instructions in documents or images, or exploit conversational AI to bypass controls. If you rely on AI for liveness detection, risk assessment or any other kind of access control, this kind of attack is going to become very familiar to you.
These threats are systemic, scalable, and can be very hard to detect.
Why human vs machine matters
It’s helpful to break down the challenge into two questions:
- Is this a human? Telling a real person apart from a replay attack or machine-generated content is essential. The biometric liveness check is the standard answer to this question today, but that won’t be true forever.
- Is it the right human? Once you establish that a person is present, you need to bind them to an account or entitlement. Today, this tends to be a picture of a government photo ID.
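The two questions compose: a session should only pass when both are answered yes. Here is a minimal sketch of that gate, with invented names (`VerificationResult`, `verify`) standing in for whatever signals your stack actually produces:

```python
from dataclasses import dataclass

@dataclass
class VerificationResult:
    is_live_human: bool      # "Is this a human?" — e.g. a biometric liveness check
    matches_document: bool   # "Is it the right human?" — e.g. a face match to a photo ID

def verify(result: VerificationResult) -> bool:
    # Both answers must be "yes" before granting access: presence alone
    # is not identity, and a document alone proves nothing about who is
    # actually holding it.
    return result.is_live_human and result.matches_document
```

A stolen photo ID fails the first question; a live person presenting someone else’s document fails the second.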
This matters because identity verification is no longer just something regulated firms need to worry about. It’s increasingly a matter of good practice, regardless of your industry.
One of our customers, Carefree, arranges free hotel breaks for unpaid carers, and uses this kind of verification to control fraud risk in the process.
Regulators like the FCA can fine you, but the bigger point is that fraud losses cut across every sector. Whether you’re moving money or managing benefits, the risk is the same.
Digital identity: progress, but uneven
The ultimate answer to all of this is, of course, digital ID. Even the most sophisticated fakery is no match for cryptographically secure credentials.
Government frameworks such as the UK’s Digital Identity and Attributes Trust Framework (DIATF), or the EU’s eIDAS 2.0, are important steps. They establish assurance levels and shared standards, and they imagine a future where trusted providers can validate credentials for private sector purposes.
Some initiatives are further along than others. In the UK, millions of residents now rely on eVisas instead of physical residence permits. There are four million of them in circulation already, but acceptance is patchy, mostly because until recently there has been no standardised way to validate an eVisa without manually typing numbers into the government website.
Digital ID will solve this challenge, eventually. But there is a decade of work to be done both on the issuance and acceptance side before it can be anyone’s go-to solution.

What can fill the gap today?
If digital ID can’t solve the problem yet, what can? Two options often surface:
- Authoritative sources. Where evidence can be checked against a trusted source like a government or bank database, it should be. For example, Vouchsafe checks every proof of age card with the issuer, and checks eVisas with GOV.UK to cut the risk of accepting fakes to essentially zero. But coverage and latency are uneven, and not every attribute is available via a trusted API.
- Credit bureaus and data brokers. These are helpful for building a case and a risk score: detecting velocity, linkages, and anomalies. But synthetic identities can still build apparently “clean” credit files over time.
Neither alone is enough.
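The fallback these two options imply can be sketched in a few lines. This is illustrative only: the checker functions are hypothetical stand-ins for real issuer APIs (such as the GOV.UK eVisa check), and the bureau threshold is invented:

```python
from typing import Callable

# Hypothetical checkers standing in for real issuer APIs.
def check_evisa(ref: str) -> bool:
    return ref.startswith("EV-")

def check_proof_of_age(ref: str) -> bool:
    return ref.startswith("PA-")

AUTHORITATIVE: dict[str, Callable[[str], bool]] = {
    "evisa": check_evisa,
    "proof_of_age": check_proof_of_age,
}

def verify_evidence(evidence_type: str, ref: str, bureau_score: float) -> str:
    checker = AUTHORITATIVE.get(evidence_type)
    if checker is not None:
        # Confirmed with the issuer: fake-acceptance risk is effectively zero.
        return "verified" if checker(ref) else "rejected"
    # No trusted API for this attribute. A bureau score helps build a case,
    # but an apparently "clean" file alone cannot rule out a synthetic identity.
    return "corroborated" if bureau_score >= 0.7 else "needs_more_evidence"
```

The design choice worth noting: bureau data never upgrades evidence to “verified”, because corroboration is probabilistic where issuer confirmation is definitive.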
A blended, layered model
The most pragmatic answer today is layered assurance:
- Use digital proofs where you can. An eVisa or certified digital identity should be consumed directly.
- Bind other evidence to real humans. Run liveness checks, enforce in-app capture, and apply device attestation so the credential is tied to the session.
- Add trusted data to the case where you can. Supplement authoritative sources with bureau intelligence, sanctions screening, and behavioural analytics. Look for consistency across signals rather than relying on a single truth.
- Sort by risk. Apply friction proportionately. Low-risk interactions may pass with passive signals, while high-value transactions trigger step-ups or human review.
- Mitigate AI injection risks. Regular adversarial testing, provenance checks, and defence-in-depth reduce the chance that a single compromised AI model undermines the entire IAM stack.
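The layers above can be pictured as a score-combining policy with risk-proportionate friction. A minimal sketch, not any particular product’s implementation; every signal name, weight, and threshold here is an illustrative assumption:

```python
from enum import Enum

class Decision(Enum):
    PASS = "pass"
    STEP_UP = "step_up"            # trigger an extra check, e.g. liveness
    HUMAN_REVIEW = "human_review"

def assess(signals: dict[str, float], transaction_value: float) -> Decision:
    """Blend independent signals into one score, then apply friction
    proportionate to what is at stake. Weights are illustrative."""
    weights = {
        "digital_credential": 0.5,   # e.g. a verified eVisa or certified digital ID
        "authoritative_match": 0.3,  # evidence confirmed with the issuer
        "bureau_consistency": 0.1,   # credit bureau / data broker corroboration
        "behavioural": 0.1,          # device and behavioural analytics
    }
    score = sum(weights[k] * signals.get(k, 0.0) for k in weights)

    # Risk appetite tightens as the transaction value rises.
    threshold = 0.6 if transaction_value < 1_000 else 0.85
    if score >= threshold:
        return Decision.PASS
    if score >= 0.4:
        return Decision.STEP_UP
    return Decision.HUMAN_REVIEW
```

Note how the same evidence passes a low-value interaction but triggers a step-up on a high-value one: friction scales with risk, and consistency across independent signals matters more than any single “source of truth”.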
Perfect security, perfect usability
AI fraud has truly changed the balance. Criminals can now scale deception faster than ever before.
Digital identity is the right long-term fix, but adoption remains uneven across sectors and jurisdictions. Until it is universal, the best strategy for enterprises is to combine digital evidence with layers of signals in a risk-adaptive IAM architecture.
If you’re a leader in banking, healthcare, insurance, or critical infrastructure, this is more than a fraud issue; it is a governance and compliance imperative. Regulators expect effective controls. Customers expect seamless access. Perfect security, perfect usability.
The only sustainable answer is a composable IAM strategy that accepts strong credentials when they exist, resists AI injection and deepfake attacks, and falls back gracefully to layered assurance when they do not.
Proving who’s real has never been harder, or more urgent.
For a conversation about practical steps to raise your identity assurance, contact me at jaye@vouchsafe.id.