
When the Liveness Check Fails: AI-Generated Biometric Fraud at 69% — What Nigerian Fintechs Must Do Now

69% of confirmed biometric fraud in Africa now involves AI-generated manipulation. Nigerian fintechs face a critical liability gap: CBN mandates liveness detection but has not defined what an adequate liveness system looks like in 2026.

The Attack Has Moved Inside the System

Smile ID’s methodology matters here. The company’s 2026 Digital Identity Fraud in Africa Report draws its findings from over 200 million identity verification checks across 35 countries and 37 industries. This is not a survey or a model — it is observed production data.

What it shows is a fundamental shift in fraud architecture. Authentication fraud now occurs five times more often than onboarding fraud. Fraudsters are not trying to break in as fake people — they are operating as verified people. In a single month in 2025, one fraud syndicate used 200 stolen facial identities to launch over 160,000 verification attacks. Smile ID also detected over 100,000 injection-style attacks monthly — attacks that bypass the device camera entirely, feeding synthetic images directly into verification pipelines through emulators.

For Nigerian fintechs, this structural shift has an uncomfortable implication: the verification system you passed your CBN audit with may be the system currently being exploited.


The Liability Gap in Nigeria’s KYC Framework

The Central Bank of Nigeria’s 2023 KYC Guidelines mandate biometric verification — including liveness detection — as a core component of Customer Due Diligence. The Nigeria Data Protection Act (NDPA) 2023, which superseded the earlier NDPR, adds obligations around accountability and security of personal data used in automated processes.

Neither framework explicitly addresses what happens when AI-generated fraud defeats a mandated liveness check.

The CBN’s May 2025 AML circular introduced a clear liability hierarchy: principals are explicitly accountable for illicit transactions conducted through their agents. Non-compliance carries sanctions of up to ₦10 million per violation, plus potential licence revocation. The FCCPC’s digital lending rules add a parallel exposure — up to ₦100 million or 1% of annual turnover for consumer protection failures.

But these frameworks assume that KYC failure reflects negligence or non-implementation. They do not account for the scenario Smile ID’s report describes: a fintech that has correctly implemented a CBN-compliant liveness check, which is then defeated by a deepfake or injection attack that was sophisticated enough to bypass the current verification standard.

This is the liability gap. Compliance with the prescribed process does not guarantee immunity when the process is no longer technically adequate. And the burden of proving adequacy — of demonstrating that your liveness detection met a reasonable standard for the threat environment — sits with the institution, not the regulator.


Is the Regulator Equipped to Respond?

CBN has moved quickly on AML infrastructure — real-time alerts, agent banking accountability, mandatory BVN and NIN cross-referencing. These are meaningful steps. But they are fraud-response tools, not fraud-prevention standards calibrated to generative AI.

NITDA’s mandate under the NDPA includes oversight of automated decision-making systems that use personal data. AI-powered biometric verification clearly falls within that scope. Yet NITDA has not published specific technical standards for liveness detection in financial services — no threshold for passive vs. active liveness, no requirements around injection attack detection, no mandatory model update cadence.

The FCCPC’s enforcement remit is primarily consumer protection after harm has occurred — not proactive technical standard-setting.

The result is a regulatory space where the obligation to verify is clear, the technical bar for verification is not, and the liability when verification fails is unresolved.


What the Nigeria AI Bill Would Change — and What It Would Not

Nigeria’s forthcoming AI Bill, expected before the end of March, proposes a risk-based licensing regime administered by NITDA. AI systems used in financial services, credit scoring, and automated decision-making are expected to qualify as high-risk, requiring licences and annual impact assessments. Fines under the Bill are proposed at up to ₦10 million or 2% of annual gross revenue.

The 69% figure in Smile ID’s report does not weaken the case for risk-based AI regulation — if anything, it strengthens it. But the Bill’s architecture needs to address a critical distinction that current drafts have not resolved: the regulatory treatment of AI deployed by fintechs in their compliance products is entirely different from the regulatory treatment of AI deployed against fintechs by fraudsters.

A fintech’s biometric verification system is a high-risk AI system under any credible risk-based framework. But it is also the mechanism a fintech uses to comply with CBN KYC mandates. If the AI Bill imposes licensing and impact assessment requirements on biometric verification AI, without simultaneously defining what constitutes an adequate standard for that AI, it creates a compliance theatre problem: institutions can be licensed, audited, and still be running systems that the Smile ID data shows are being defeated at scale.

The technical distinction that matters here: active liveness detection, which asks users to blink or turn their head, is no longer sufficient against injection attacks that feed synthetic images directly into verification pipelines, bypassing the device camera entirely. The difference between active and passive liveness, and between camera-based and injection-resistant verification, is precisely the kind of specification a credible licensing regime must encode. Without it, a licence is a process audit, not a security standard.
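To make the distinction concrete, here is a minimal Python sketch of why a pipeline that only evaluates the liveness challenge is defeated by injection, while one that checks capture provenance first is not. All names and signals are hypothetical illustrations, not any vendor's actual API:

```python
from dataclasses import dataclass

@dataclass
class CaptureContext:
    """Signals about HOW a frame reached the pipeline (hypothetical)."""
    from_hardware_camera: bool  # frame originated from a physical sensor
    emulator_detected: bool     # device fingerprint matches a known emulator
    challenge_passed: bool      # user appeared to complete a blink/turn check

def challenge_only_liveness(ctx: CaptureContext) -> bool:
    # Camera-based check: implicitly trusts the input channel.
    # An injected synthetic stream can replay a "blink" and pass.
    return ctx.challenge_passed

def injection_resistant_liveness(ctx: CaptureContext) -> bool:
    # Verifies capture provenance BEFORE evaluating the challenge.
    if not ctx.from_hardware_camera or ctx.emulator_detected:
        return False
    return ctx.challenge_passed

# An injection attack: synthetic frames from an emulator that mimic a blink.
attack = CaptureContext(from_hardware_camera=False,
                        emulator_detected=True,
                        challenge_passed=True)

print(challenge_only_liveness(attack))       # True: the mandated check is defeated
print(injection_resistant_liveness(attack))  # False: provenance check blocks it
```

The point of the sketch is that the two functions can be indistinguishable to a process audit (both "implement liveness detection") while only one addresses the attack class the Smile ID data describes.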

The Bill needs a technical standards pillar — one that moves with the threat environment.


What Fintechs Should Be Doing Now

The Smile ID report’s operational recommendation is unambiguous: authentication controls need to match the sophistication of authentication-stage attacks. Device signals — hardware fingerprints, emulator detection, camera authenticity checks — now account for nearly 90% of fraud blocked in Smile ID’s own network.
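As an illustration of how such device signals are typically combined, the sketch below scores a verification attempt against a blocking threshold. The signal names, weights, and threshold are invented for the example and are not Smile ID's model:

```python
# Toy weights over device signals; values are illustrative only.
SIGNAL_WEIGHTS = {
    "emulator_detected": 0.5,            # device fingerprint matches an emulator
    "virtual_camera_feed": 0.3,          # frames not from a hardware sensor
    "hardware_fingerprint_mismatch": 0.2,  # device identity inconsistent over time
}

def device_risk_score(signals: dict) -> float:
    """Sum the weights of every signal that fired for this attempt."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

def should_block(signals: dict, threshold: float = 0.3) -> bool:
    """Block the attempt when the accumulated risk meets the threshold."""
    return device_risk_score(signals) >= threshold

print(should_block({"emulator_detected": True}))  # True: blocked
print(should_block({}))                           # False: no signals fired
```

A real deployment would use learned weights and far richer signals, but the structure — provenance and device checks gating the biometric decision — is the shift the report's numbers argue for.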

For Nigerian fintechs, this is not just a product decision. It is a compliance risk decision. Operating a liveness detection system that lacks injection attack defence, in an environment where over 100,000 such attacks occur monthly continent-wide, creates a material liability exposure if a fraud loss occurs and a regulator or court asks whether the institution’s controls were adequate for the threat environment at the time.

The CBN mandates liveness detection. It does not yet define what an adequate liveness detection system looks like in 2026. That gap is the fintech sector’s problem until NITDA or CBN closes it — and given the pace of both AI fraud and Nigerian AI regulation, closing it sooner is in the industry’s direct interest.


Sources: Smile ID 2026 Digital Identity Fraud in Africa Report; CBN KYC Guidelines 2023; CBN AML Circular May 2025; Nigeria Data Protection Act 2023; FCCPC Digital Lending Rules 2025.
