Nigeria’s AI Bill Has an Electoral Blind Spot — and the 2027 Campaign Has Already Started

Nigeria is preparing to pass Africa’s most consequential AI legislation. The bill classifies public administration AI as high-risk, mandates annual audits, and positions NITDA as Africa’s most powerful AI regulator. But there is a gap in the framework large enough to drive an election through: the bill says almost nothing about electoral AI — and the 2027 presidential campaign has already started.

Presidential elections are scheduled for 16 January 2027. In the twelve months between now and polling day, Nigeria’s political ecosystem will be saturated with AI-generated content — deepfake videos, cloned audio in Yoruba, Igbo and Hausa, and coordinated synthetic propaganda distributed via WhatsApp at a scale that no existing institutional framework is equipped to handle. The question is not whether this will happen. It is already happening. A deepfake video of President Bola Tinubu responding to a fictitious war threat from Donald Trump circulated widely on X in early 2026; fact-checkers at Dubawa confirmed it was fabricated, identified by “unnatural body movements” that most viewers would not notice or pause to question.

Nigeria is passing a serious AI law. It is just not clear that the law covers the threat that matters most in the next twelve months.

What the Bill Actually Says About High-Risk AI

Nigeria’s National Digital Economy and E-Governance Bill 2025 — commonly referred to as the AI Bill — creates Africa’s first binding, risk-tiered AI governance framework, enforced through the National Information Technology Development Agency (NITDA). The bill’s highest-risk category covers AI systems that make or support decisions affecting individual rights, economic access, or safety. Explicitly named: credit scoring, biometric identity verification, clinical diagnosis, and AI used in employment screening.

The bill also lists “public administration and automated decision-making” as a high-risk category. This is the provision that, in theory, could extend to electoral systems. INEC runs voter registration platforms, biometric verification systems, and result collation infrastructure. An expansive reading of the bill’s public administration clause would bring all of these within NITDA’s licensing and audit regime.

The problem is that no such reading has been articulated, much less enforced. The bill contains no explicit reference to elections, INEC, electoral AI, or synthetic political media. There is no provision requiring platforms to label AI-generated political content. There is no mandate requiring political parties to disclose AI tools used in campaign targeting. There is no liability framework for the creation or distribution of deepfakes designed to influence electoral outcomes. These are not minor oversights. They are the specific threat vectors that will be exploited in January 2027.

INEC’s AI Division: Real Progress, No Framework

In May 2025, INEC established a dedicated Artificial Intelligence Division — a genuine institutional step, and more than most African electoral commissions have done. The division’s stated mandate includes real-time monitoring and content verification, automating voter services, and using geospatial intelligence to improve polling unit allocation. These are meaningful operational improvements.

But the AI Division has no published operational framework. No disclosed budget. No documented partnerships with platform companies or fact-checking organisations. No protocol for what INEC will do when a deepfake of a candidate goes viral twelve hours before polling opens. The division was created before the regulatory infrastructure it needs to operate within exists.

“Before we talk about using AI in elections, we must clearly define what it will actually do,” Gbenga Sesan, executive director of Lagos-based digital rights organisation Paradigm Initiative, said in a public comment on electoral AI governance. Sesan’s concern is structural: when only political operatives understand and can deploy AI at scale, ordinary voters lose democratic agency — not through any single fraudulent act, but through the slow erosion of their ability to distinguish what is real.

The Kenya Lesson Nigeria Has Not Absorbed

The best-documented case study of electoral deepfakes on the African continent is Kenya’s August 2022 general election — the contest between William Ruto and Raila Odinga that produced one of the continent’s most bitterly disputed results. Deepfake videos impersonating both candidates were shared across Facebook, X, and WhatsApp. Fabricated audio of phone calls purporting to document vote-rigging plans circulated widely. Synthetic images of artificially large rally crowds appeared on TikTok. Researchers estimated that deepfake videos reached approximately 4.3 million viewers, with over a third of surveyed voters reporting changed perceptions of a candidate based on what was later identified as manipulated content.

Kenya’s regulatory response was effectively zero. The Independent Electoral and Boundaries Commission (IEBC) produced no specific deepfakes framework. Kenya’s cybercrime legislation contained no provisions specifically addressing synthetic media creation or distribution. A 2022 academic analysis directly asked whether Kenyan law could combat deepfake harm and concluded the answer was no. That gap has not been substantially closed in the four years since.

Nigeria is aware of Kenya’s experience. Nigerian civil society groups have cited it repeatedly in public commentary. The Techpoint Africa analysis published this week — which broke the straight-news version of this story — directly references the Kenyan precedent. What has not happened is the translation of that awareness into binding legal architecture. The AI Bill’s passage, absent specific electoral provisions, will not change that.

The WhatsApp Vector That Nobody Has Answered

The highest-risk AI disinformation channel for Nigeria’s 2027 election is not X or Facebook. It is WhatsApp, and specifically WhatsApp voice notes in local languages.

AI can now automatically translate and synthesize speech in Yoruba, Igbo, Hausa, and Nigerian Pidgin with sufficient fidelity to produce convincing fake audio that sounds like a respected local political figure, community leader, or religious authority. Unlike a deepfake video, which requires production resources and carries visual tells, a synthetic voice note can be produced in minutes, is extremely difficult to verify without forensic tools most Nigerian users do not have access to, and travels at speed through private encrypted channels that no platform moderation system monitors in real time.

The AI Bill’s transparency and disclosure requirements — model cards, user-facing warnings for synthetic content — are designed for public-facing AI deployments with identifiable operators. They cannot reach a voice note fabricated on a private laptop and dropped into a WhatsApp group in Kano. No provision in the bill addresses this. No continental governance framework addresses it. It is the threat vector most likely to determine outcomes in contested constituencies — and it is currently completely unregulated.

What a Coherent Electoral AI Framework Would Require

The good news is that the architecture for a response exists within the bill’s existing structure. It would require three additions, none of which would require the AI Bill itself to be reopened:

First, NITDA should issue a sectoral guidance note explicitly classifying electoral systems — INEC’s voter registration, biometric verification, and result collation infrastructure — as high-risk AI systems subject to annual audit under the bill’s existing provisions. This requires no new law. It requires a ministerial determination.

Second, INEC’s AI Division should publish a binding electoral integrity protocol before the end of 2026: a publicly available code of conduct requiring all registered political parties to disclose AI tools used in campaign targeting, prohibiting synthetic media impersonation of candidates and electoral officials, and establishing a rapid-response escalation pathway to NITDA for verified deepfake incidents.

Third, Nigeria’s Cybercrimes (Prohibition, Prevention, etc.) Act should be amended — a process already under discussion — to explicitly criminalise the creation and distribution of AI-generated content designed to deceive voters, with graduated penalties that account for electoral timing.

None of these changes are technically complex. They are politically difficult, because they require the current government to establish accountability mechanisms that would equally bind its own campaign apparatus in 2026 and 2027.

The Regulatory Advantage That Is Slipping Away

Nigeria holds a structural advantage that no other African country currently has: a comprehensive AI governance bill, an established regulator with clear enforcement powers, and a detailed sectoral knowledge base built up through the bill’s multi-year consultation process. The continental AI governance architecture — the AU’s Continental AI Strategy, the African Declaration on AI adopted in Kigali in April 2025, the nascent Africa AI Council — is all still in formation. Nigeria could act faster and with more specificity than any continental framework currently allows.

That window will not stay open. The 2027 election campaign is already running. The deepfakes have already started. The WhatsApp voice notes will come. If Nigeria does not extend the logic of its AI governance framework — risk-tiering, mandatory audits, explainability obligations — to the electoral domain before the campaign intensifies in late 2026, the bill’s strongest provisions will govern credit scoring models in Lagos while synthetic audio in Kano decides who governs the country.

Nigeria is passing the right kind of law. It is just not yet passing enough of it.

— Technology Desk, BETAR.africa
