AI-driven fraud makes “proof” more important than “documents”
For years, remote customer onboarding relied on a straightforward premise: forging identity documents and biometric evidence is difficult and expensive. If a user provided a government-issued ID and a matching live selfie, platforms assumed the person on the other side of the screen was real. Deepfakes don’t break KYC rules—they break that underlying assumption. The uncomfortable truth is that KYC didn’t get weaker — the evidence did.
The rise of generative AI has fundamentally altered the economics of digital forgery. Creating hyper-realistic fake documents or synthetic video no longer requires a state-sponsored lab; generative tools cut the effort needed to produce convincing synthetic media and let criminals scale deception (2). AI-driven fraud is therefore shifting from a theoretical threat to a scalable operational problem. We are entering an era where static identity documents and visual evidence are becoming inherently less trustworthy for financial institutions.
For financial institutions, the issue isn’t theoretical. Identity verification systems built around document verification and selfie checks were designed for a world where forgery was expensive. Generative AI tools and machine learning models have flipped that cost curve. The result is more identity fraud, more identity theft, and more pressure on fraud prevention and detection teams, because the “document = evidence” assumption no longer holds.
This does not mean identity verification is obsolete. But it does mean that relying solely on document verification to prevent identity theft is a losing strategy.
In practice, the biggest failures happen at account opening, where identity verification systems still treat identity documents as ground truth. When AI-generated and forged documents look indistinguishable from genuine documents, the same workflow can enable identity theft and synthetic identity fraud. That’s why modern fraud detection is shifting toward cross-checking multiple data points instead of trusting a single file.
When bad actors can generate pixel-perfect artifacts on demand, the defense has to shift from collecting files to relying on cryptographically secure proof.
The assumption that breaks first: “documents are evidence”
Historically, possessing an identity document was strong evidence of an identity. Now, AI-generated fake IDs and digital forgeries are challenging that baseline.
The threat is not just the fakes themselves, but the phenomenon known as the “liar’s dividend.” When convincing fakes become easier to produce, even genuine documents become easier to dispute, and trust in all visual media degrades (1).
Criminal use of deepfakes and AI-generated fraud is now a recognized part of the online fraud toolbox (2). The very artifacts we used to establish trust are now the primary vectors for exploiting it.
Where AI-generated fraud breaks document verification and selfie checks
To understand the shift, fraud prevention teams need to look at exactly how and where the legacy onboarding stack fails.
Fake documents at scale (the “paper wall” problem)
Identity theft at account opening: why fake IDs still work
This is where document-centric identity verification systems break: they assume a convincing document implies a real person. Document fraud used to open accounts is a well-known attack vector (11), but the effort required to produce convincing forgeries has collapsed. Instead of a criminal ring meticulously forging physical passports, generative AI lets attackers produce forgeries at scale and iterate quickly.
In practice, this isn’t just about “fake documents.” It’s about fake IDs and forged documents that look indistinguishable from genuine documents in a screenshot-based review. That’s why digital forgeries are now a growing operational risk for document-centric identity verification.
This creates a “paper wall” of synthetic identities designed to strain account opening workflows. As criminal tooling improves, document-centric processes face greater operational pressure — which is why institutions need layered controls and stronger governance, not just better image checks (2)(3).
Liveness detection, biometric verification, and presentation attacks
To counter document spoofing, the industry leaned heavily into biometric verification and liveness detection. The logic was sound: require the user to blink, turn their head, or speak to prove they are a live human.
However, presentation attacks, where an attacker uses a mask, a screen replay, or a deepfake to trick the camera, are a recognized and testable threat class (5). Defenses against these attacks exist, but they must be continuously evaluated. Governance expectations for remote customer onboarding require institutions to understand the limitations of their liveness tools and to treat biometric and document checks as risk signals within a controlled process, with documented performance limits and escalation paths (3)(7).
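To make that concrete, here is a minimal sketch of what “documented performance limits and escalation paths” can look like when expressed as reviewable configuration rather than tribal knowledge. The tool name, thresholds, known limits, and retest interval are all invented for illustration.

```python
# Hypothetical control policy for a liveness tool, expressed as data so it
# can be reviewed, versioned, and audited. All names and numbers are invented.
LIVENESS_POLICY = {
    "tool": "vendor-pad-v3",
    "tested_against": "ISO/IEC 30107-3 presentation attack types",
    "known_limits": ["high-resolution screen replays", "silicone masks"],
    "accept_threshold": 0.90,     # below this, never auto-approve
    "escalation": "manual review + secondary channel check",
    "retest_interval_days": 90,   # re-evaluate as attack techniques evolve
}

def apply_liveness(score: float) -> str:
    """Treat the liveness score as one risk signal, never a final verdict."""
    if score >= LIVENESS_POLICY["accept_threshold"]:
        return "signal: pass (still only one input to the risk model)"
    return LIVENESS_POLICY["escalation"]
```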
When the verifier becomes the weak link (workflow + governance)
The real risk of AI-driven fraud is not only the deepfakes themselves; it is over-trusting a single artifact within a complex compliance workflow.
If a platform relies entirely on a visual document check, the verifier becomes the weak link. The response is not to abandon these checks, but to acknowledge that verification must become multi-layered and auditable. Regulatory guidelines emphasize that remote customer onboarding requires robust governance, monitoring, and risk-based measures (3). Over-correcting with blunt controls also increases false positives, which hurts legitimate customers, exactly the group onboarding programs are supposed to serve.
The response: shift identity verification systems from documents to proof
To adapt, the architecture of trust must evolve. We have to move away from treating a static document as the ultimate source of truth, and start relying on cryptographic proof.
Proofs are about verification outcomes, not file custody
In an operational context, a proof is a verification outcome you can mathematically audit. The transition to proof-based systems often involves zero-knowledge proofs.
In plain English, a zero-knowledge proof allows one party to prove a statement is true without revealing the underlying data (12). The verifier gets a mathematically verifiable outcome, not a file to warehouse. Over time, that reduces how much raw identity data needs to be copied and retained inside the institution (8).
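As a concrete illustration, the sketch below implements a classic Schnorr proof of knowledge, made non-interactive via the Fiat-Shamir heuristic: the prover demonstrates knowledge of a secret exponent x behind a public value y without revealing x, and the verifier checks a single algebraic identity. This is one textbook instance of a zero-knowledge proof, not the specific scheme any given vendor uses; the tiny parameters are for readability only, and production systems use standardized groups or elliptic curves.

```python
# Minimal non-interactive Schnorr proof (Fiat-Shamir), for illustration only.
# Proves knowledge of a secret x with y = g^x mod p, revealing nothing about x.
import hashlib
import secrets

p = 23          # toy safe prime (p = 2q + 1); real systems use large groups
q = 11          # prime order of the subgroup generated by g
g = 4           # generator of the order-q subgroup mod p

def challenge(*vals: int) -> int:
    """Fiat-Shamir: derive the challenge by hashing the transcript."""
    data = "|".join(str(v) for v in vals).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x: int) -> tuple[int, int, int]:
    """Prover: commit, derive challenge, respond. Returns (y, t, s)."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)       # one-time nonce; never reuse
    t = pow(g, r, p)               # commitment
    c = challenge(g, y, t)
    s = (r + c * x) % q            # response binds nonce, challenge, secret
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Verifier checks g^s == t * y^c mod p without ever seeing x."""
    c = challenge(g, y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

secret_x = 7                        # e.g., a credential-bound secret
assert verify(*prove(secret_x))     # outcome is auditable; x is never revealed
```

The verifier ends up with exactly what the article describes: a mathematically checkable outcome, not a copy of the underlying secret or document.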
Governance and auditability: what regulators actually care about
Switching to new technology does not erase regulatory obligations. Regulatory bodies are less concerned with the novelty of generative AI and more focused on how financial institutions manage the resulting risk. Failure to adapt can lead to regulatory penalties and compliance failures.
AI also introduces governance risks that have nothing to do with deepfakes. Many decision systems behave like a “black box,” which makes it hard to explain why an application was approved or rejected. That’s why explainable AI (XAI), regular audits, and documented controls for bias, fairness, and transparency matter—especially when identity decisions can trigger AML risk, escalation paths, or denial of service. The point isn’t to “make AI decide”—it’s to make AI explainable, auditable, and accountable.
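One way to keep the decision layer out of black-box territory is to make per-feature contributions the explanation. A minimal sketch, assuming a simple linear scorecard with hypothetical features and weights, where the largest contributors double as auditable reason codes:

```python
# Illustrative interpretable scorecard: each feature's contribution to the
# risk score is visible, so every decision can be explained and audited.
# Feature names and weights are invented, not a production model.
WEIGHTS = {"doc_mismatch": 0.35, "device_risk": 0.30,
           "velocity_flag": 0.25, "geo_mismatch": 0.10}

def score_with_reasons(features: dict[str, float]) -> tuple[float, list[str]]:
    contributions = {k: WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS}
    total = sum(contributions.values())
    # Reason codes: the largest positive contributors, recorded per decision.
    ranked = sorted(contributions, key=contributions.get, reverse=True)
    return total, [r for r in ranked if contributions[r] > 0][:2]

risk, why = score_with_reasons({"device_risk": 1.0, "velocity_flag": 1.0})
print(risk, why)   # 0.55 ['device_risk', 'velocity_flag']
```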
Regulators look for three primary things in a remote onboarding program:
Governance and oversight of onboarding solutions
Firms must have clear policies on how onboarding solutions are tested, deployed, and monitored for effectiveness against emerging threats (3).
Risk-based controls and monitoring
Regulators expect dynamic identity assurance models that adjust friction based on the specific risk of the transaction or user profile (4)(7).
Audit evidence expectations
Institutions must have the ability to prove exactly how an identity decision was made, what signals were evaluated, and when, without necessarily storing the raw biometric data indefinitely. That audit trail is also what lets teams test for bias and prove controls are working over time.
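That audit-evidence expectation can be met without warehousing biometrics. Below is a minimal hash-chained log sketch, with illustrative field names: each identity decision records which signals were evaluated and when, links to the previous entry so retroactive edits are detectable, and references raw evidence only by hash.

```python
# Minimal tamper-evident audit log: decisions are chained by hash, so the
# trail can be verified later without retaining raw biometric files.
import hashlib
import json
import time

def _digest(record: dict) -> str:
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class AuditLog:
    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record_decision(self, applicant_id: str, signals: dict, decision: str) -> dict:
        entry = {
            "applicant_id": applicant_id,
            "timestamp": time.time(),
            "signals": signals,          # scores/outcomes only, never raw files
            "decision": decision,
            "prev_hash": self.entries[-1]["hash"] if self.entries else None,
        }
        entry["hash"] = _digest(entry)
        self.entries.append(entry)
        return entry

    def verify_chain(self) -> bool:
        """Recompute every hash; any retroactive edit breaks the chain."""
        prev = None
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev or _digest(body) != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record_decision("app-1042", {"doc_check": "pass", "liveness": 0.93},
                    "approve")
assert log.verify_chain()
```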
Multi-layered identity verification: make fraud more expensive than compliance
If visual evidence is degrading in reliability, the solution is signal diversity. The goal of a modern identity verification system is to layer controls so deeply that the cost of spoofing them exceeds the potential payout.
Layer 1: document + biometric checks (but treated as inputs, not truth)
Document verification systems and biometric verification are still necessary. However, they must be treated as inputs into a broader AI fraud detection risk model, rather than the final arbiter of truth. Platforms must actively evaluate their tools against known presentation attacks and liveness spoofing techniques (5)(7).
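A minimal sketch of “inputs, not truth”: each check contributes a bounded amount to an overall risk score, so a perfect-looking document can never single-handedly approve an application. Weights and thresholds are illustrative assumptions, not recommended values.

```python
# Sketch: verification checks as bounded inputs to one risk model.
# No single passing check can approve an application on its own.
from dataclasses import dataclass

@dataclass
class Signals:
    doc_score: float        # 0..1 from document verification
    liveness_score: float   # 0..1 from liveness / presentation-attack checks
    device_risk: float      # 0..1, higher = riskier (e.g., known fraud infra)

def decide(s: Signals) -> str:
    # Each layer can raise risk; a perfect document cannot erase device risk.
    risk = (1.0 - s.doc_score) * 0.3 \
         + (1.0 - s.liveness_score) * 0.3 \
         + s.device_risk * 0.4
    if risk < 0.2:
        return "approve"
    if risk < 0.5:
        return "step_up"     # escalate: extra checks, manual review
    return "deny"

# A pixel-perfect document from risky infrastructure still escalates:
print(decide(Signals(doc_score=0.99, liveness_score=0.95, device_risk=0.9)))
```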
Layer 2: device fingerprinting, contextual data, and multiple data points
Contextual data often reveals what a deepfake tries to hide. Signals like IP address patterns, device identifiers, geolocation, and session context can support risk-based decisions when visual artifacts are unreliable (6). If a hyper-realistic selfie originates from known fraud infrastructure, the visual fidelity of the image ceases to matter.
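Continuing the sketch above, contextual signals can be folded into a single device/session risk input. The signal list, weights, and example values below are hypothetical.

```python
# Illustrative contextual scoring: independent session signals combine into
# the device_risk input used by decide() above. Weights are invented.
def contextual_risk(ip: str, device_id: str, geo: str, declared_country: str,
                    known_fraud_ips: set[str], seen_devices: dict[str, int]) -> float:
    risk = 0.0
    if ip in known_fraud_ips:
        risk += 0.6                     # visual fidelity ceases to matter here
    if seen_devices.get(device_id, 0) > 3:
        risk += 0.3                     # same device opening many accounts
    if geo != declared_country:
        risk += 0.2                     # mismatch is a signal, not proof
    return min(risk, 1.0)

risk = contextual_risk("203.0.113.9", "dev-7", "RO", "DE",
                       known_fraud_ips={"203.0.113.9"},
                       seen_devices={"dev-7": 5})
print(risk)   # 1.0 (capped); a flawless selfie cannot offset this
```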
Layer 3: behavioral biometrics and pattern detection
Beyond static device data, the industry increasingly uses behavioral biometrics to evaluate how a user interacts with a platform. Abnormal application patterns or typing cadences can flag automated bots and coached fraud operations; behavioral analytics catches automation that looks “real” on camera but behaves like a scripted system. These signals add a dynamic layer of anomaly detection that is extremely difficult for generative AI to simulate accurately at scale.
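A toy example of the idea, assuming inter-keystroke timings in milliseconds: scripted input tends to be both faster and far more uniform than human typing, so either a large deviation from the account’s history or near-zero jitter is worth flagging. Thresholds are illustrative.

```python
# Toy behavioral check: compare this session's typing cadence against the
# account's history. Bots often show metronomic, low-variance timing.
import statistics

def cadence_anomaly(history_ms: list[float], session_ms: list[float]) -> bool:
    mu = statistics.mean(history_ms)
    sigma = statistics.stdev(history_ms)
    z = abs(statistics.mean(session_ms) - mu) / sigma if sigma else float("inf")
    too_uniform = statistics.stdev(session_ms) < 5.0   # near-zero jitter
    return z > 3.0 or too_uniform

human = [110, 95, 140, 120, 160, 90, 130]
bot   = [50, 50, 51, 50, 50, 50, 50]            # scripted, metronomic input
print(cadence_anomaly(human, bot))               # True: flag for review
```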
Layer 4: continuous monitoring (post-onboarding)
Onboarding is not a one-time event. Because threat capabilities evolve rapidly, ongoing monitoring reduces reliance on the initial, one-time check. Digital identity is dynamic, and continuous evaluation ensures that the risk profile remains accurate long after the account is opened (4). This also ties into transaction monitoring: onboarding signals and post-onboarding behavior should feed the same risk engine so fraud patterns are caught early and continuously.
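A compact sketch of that shared risk engine, with invented event types, weights, and alert threshold: each post-onboarding event re-scores the account rather than trusting the onboarding decision forever, and routine behavior is allowed to decay risk over time.

```python
# Sketch: post-onboarding events feed the same risk score that onboarding
# produced, so the profile stays current. All values are illustrative.
EVENT_WEIGHTS = {
    "login_new_device": 0.15,
    "withdrawal_new_address": 0.25,
    "profile_change": 0.10,
    "routine_transaction": -0.05,   # normal behavior decays risk over time
}

def monitor(onboarding_risk: float, events: list[str]) -> list[str]:
    """Replay account events against the shared risk score; return alerts."""
    risk, alerts = onboarding_risk, []
    for event in events:
        risk = min(max(risk + EVENT_WEIGHTS.get(event, 0.0), 0.0), 1.0)
        if risk >= 0.8:
            alerts.append(f"{event} -> risk {risk:.2f}")
    return alerts

print(monitor(0.45, ["login_new_device", "profile_change",
                     "withdrawal_new_address", "withdrawal_new_address"]))
```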
Why proofs reduce exposure (and why that matters under AI-driven fraud)
Stolen identity data can fuel more convincing impersonation and account-opening abuse. The less sensitive user data you collect and retain, the fewer opportunities exist for misuse — and the easier accountability becomes when auditors ask who accessed what, and why (8)(11). Consequently, platforms must rethink how they store sensitive data.
Storing fewer raw PII files means less confidentiality exposure and a reduced control burden (8). If a verifier relies on a cryptographically signed proof rather than a database of passport JPEGs, they significantly minimize the blast radius of data breaches.
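A minimal sketch of that pattern using the widely deployed `cryptography` package: the identity provider signs a compact verification outcome, and the relying party stores only the outcome, the signature, and the issuer’s public key instead of a passport JPEG. Field names are illustrative.

```python
# Sketch: store a signed verification outcome, not the source documents.
# Requires: pip install cryptography
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Identity provider side: sign the outcome, not the documents.
idp_key = Ed25519PrivateKey.generate()
outcome = json.dumps({"subject": "user-77", "check": "kyc_tier1",
                      "result": "pass", "issued_at": "2025-06-01"},
                     sort_keys=True).encode()
signature = idp_key.sign(outcome)

# Relying party side: keep only (outcome, signature, issuer public key).
public_key = idp_key.public_key()
try:
    public_key.verify(signature, outcome)   # raises if tampered with
    print("verified outcome:", outcome.decode())
except InvalidSignature:
    print("reject: evidence does not verify")
```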
Furthermore, shifting away from centralized honeypots of sensitive data makes the institution a less lucrative target for cybercriminals. While decentralized identity systems are not a cure-all, avoiding massive concentration risk is a structural necessity in a world where stolen data directly fuels the next wave of generative AI attacks (10).
Conclusion
AI fundamentally shifts the battleground of digital identity from collecting visual artifacts to demanding cryptographic assurance. As generative AI makes forged documents and deepfaked biometrics easier to produce at scale, the value of a digital identity wallet holding reusable, verifiable credentials becomes clear.
In practice, surviving this shift requires a combination of zero knowledge proofs, strict governance, and layered contextual signals.
The goal isn’t to add impossible friction to the user journey. The goal is to keep identity verification trustworthy in an environment where seeing is no longer believing. In AI-driven fraud, trust moves from what you can see to what you can verify.
Footnotes
(1) https://www.californialawreview.org/print/deep-fakes-a-looming-challenge-for-privacy-democracy-and-national-security
(2) https://www.europol.europa.eu/cms/sites/default/files/documents/Internet%20Organised%20Crime%20Threat%20Assessment%20IOCTA%202024.pdf
(3) https://www.eba.europa.eu/sites/default/files/document_library/Publications/Guidelines/2022/EBA-GL-2022-15%20GL%20on%20remote%20customer%20onboarding/1043884/Guidelines%20on%20the%20use%20of%20Remote%20Customer%20Onboarding%20Solutions.pdf
(4) https://www.fatf-gafi.org/content/dam/fatf-gafi/guidance/Guidance-on-Digital-Identity-report.pdf
(5) https://www.iso.org/standard/67381.html
(6) https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-63B-4.pdf
(7) https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=959881
(8) https://nvlpubs.nist.gov/nistpubs/legacy/sp/nistspecialpublication800-122.pdf
(9) https://www.w3.org/TR/vc-data-model-2.0/
(10) https://www.enisa.europa.eu/sites/default/files/publications/ENISA%20Report%20-%20Digital%20Identity%20-%20Leveraging%20the%20SSI%20Concept%20to%20Build%20Trust.pdf
(11) https://www.ukfinance.org.uk/system/files/2025-05/UK%20Finance%20Annual%20Fraud%20report%202025.pdf
(12) https://csrc.nist.gov/glossary/term/zero_knowledge_proof