South Africa draft national AI policy 2026 consultation — BETAR.africa

South Africa’s Draft National AI Policy 2026: Who It Protects and Who It Misses

South Africa has published its draft National AI Policy for public consultation. A detailed analysis of what it regulates, what it leaves out, and what it means for the tech sector.



South Africa has chosen a different path from Nigeria on AI regulation — and that choice is about to have real consequences for every financial services firm, health technology company, and telecoms operator doing business in Africa’s most developed economy. A 60-day public consultation on the country’s Draft National AI Policy is expected to open in March 2026. What comes out of it will define AI governance in South Africa for a generation.

On February 24, 2026, the Department of Communications and Digital Technologies (DCDT) briefed Parliament on the draft policy’s progress. Cabinet approval and formal gazetting are expected within weeks. When the consultation window opens, it will be the most consequential regulatory process in South Africa’s digital economy since the commencement of POPIA in 2021 — and, unlike that process, businesses will have a defined window in which to shape the outcome rather than simply comply with it.

The Architecture: No Single AI Authority

The central design decision in South Africa’s Draft National AI Policy is the one that will define every compliance conversation in the months ahead: there will be no dedicated AI regulator. South Africa has explicitly chosen a sector-specific, multi-regulator model in which AI governance is embedded within the existing mandates of established regulatory bodies.

Under this framework, the Financial Sector Conduct Authority (FSCA) and the Prudential Authority will govern AI applications in financial services. The South African Health Products Regulatory Authority (SAHPRA) and the Health Professions Council will oversee AI in clinical settings. The Independent Communications Authority of South Africa (ICASA) will carry responsibility for AI in telecoms and broadcasting. The Information Regulator — already the country’s data protection enforcement body under POPIA — will oversee AI systems that process personal information, effectively becoming the closest thing the policy creates to a horizontal AI oversight function.

This is a deliberate departure from the European model, where the EU AI Act creates a tiered horizontal framework applicable across all sectors. It is also a departure from the direction Nigeria has taken: the National Digital Economy and E-Governance Bill 2025, which has cleared both chambers of the National Assembly and awaits presidential assent, designates NITDA as a single AI enforcement authority with cross-sectoral powers. South Africa is betting instead on regulatory depth over regulatory breadth — on sector experts applying AI principles within their existing domains rather than a new agency building AI expertise from scratch.

The logic is defensible. The FSCA has spent three years building supervisory capacity over crypto assets and is familiar with the mechanics of technology risk in financial services. SAHPRA understands the clinical validation standards that make AI-assisted diagnosis governance tractable. These bodies have enforcement relationships, institutional knowledge, and existing legal frameworks that a new AI authority would spend years developing. The risk is coherence: when an AI system operates across sector boundaries — a health insurer using AI to price risk, or a telecoms operator using AI for credit scoring — jurisdiction will be genuinely ambiguous, and businesses will face compliance obligations to multiple regulators simultaneously.

What It Means for Financial Services

For financial services firms, the multi-regulator model means the FSCA becomes the primary AI interlocutor — and the FSCA’s track record on technology regulation is instructive. The authority’s CASP licensing regime for crypto assets, launched in June 2023, delivered 300 approvals from 512 applications by December 2025: a pace and rigour that made South Africa the continent’s most credible digital asset market. That institutional confidence will inform how the FSCA approaches AI oversight.

Under the draft policy, financial services AI systems that make or influence credit decisions, risk assessments, fraud determinations, or investment recommendations will likely fall into higher-risk tiers requiring documented impact assessments, explainability standards, and ongoing monitoring obligations. The draft’s language on accountability — which tracks the principle that AI decision-making in regulated activities must be attributable to a licensed entity — will create pressure on third-party model vendors as well as the institutions that deploy them. Banks and insurers that purchase AI decisioning infrastructure from fintechs or technology vendors will need to understand what obligations flow to them as deployers.

The interaction with POPIA adds complexity. South Africa already has one of the continent’s most developed data protection regimes, and the Information Regulator has demonstrated willingness to enforce it. The draft policy explicitly builds on POPIA’s accountability and purpose limitation principles, treating them as foundational to AI governance. Any financial services firm that has completed POPIA compliance work has a head start — the conceptual apparatus of data impact assessments, data minimisation, and individual rights is directly applicable to the AI policy’s framework. The firms that will struggle are those that treated POPIA compliance as a documentation exercise rather than a genuine operational change.

Health Technology: The Clinical AI Governance Question

For health technology companies, the policy’s implications are acute. AI-assisted diagnostic tools, clinical decision support systems, and patient risk stratification platforms already operate in South African hospitals and clinics — largely under voluntary guidelines from the Health Professions Council and the South African Medical Association rather than binding regulation. The draft policy will change that.

SAHPRA’s mandate covers medical devices, and the authority has been moving incrementally toward treating software-as-medical-device (SaMD) under its existing device approval framework. The AI policy is expected to accelerate and formalise that process. Health AI companies that have been operating under the assumption that regulatory approval is a future concern — not a present one — should treat the consultation period as the moment to engage before compliance obligations are set rather than after.

The key governance question for clinical AI is validation: how a model’s performance must be documented, tested on local population data, and monitored for drift after deployment. South Africa’s patient demographics differ materially from the datasets on which most commercial AI diagnostic tools were trained, which draw predominantly on European and North American cohorts. The consultation is an opportunity for local health technology companies to shape validation standards that are practically achievable and scientifically credible, rather than inheriting standards designed for foreign regulatory contexts.

South Africa vs. Nigeria: Two Philosophies, One Continent

Placed side by side, South Africa’s and Nigeria’s AI governance approaches reflect the two dominant models now competing for influence across African regulatory policy.

Nigeria’s approach — binding legislation, a single enforcement authority with inspection powers, a risk-tier framework, and financial penalties scaled to revenue — is comprehensive, fast-moving, and architecturally sophisticated. Its weakness is proportionality: the National Digital Economy Bill contains no startup-friendly provisions, no sandbox mechanisms, and no phased implementation that gives early-stage companies time to build compliant systems. The compliance cost burden falls uniformly across the ecosystem, and the penalty structure was explicitly designed to be punishing at enterprise scale.

South Africa’s approach — embedded in existing regulatory mandates, built on POPIA’s data protection foundation, developed through public consultation rather than parliamentary fast-track — is slower and more deliberate. Its strength is legitimacy and sector depth. Its risk is the jurisdictional fragmentation that comes with five or six regulators each interpreting AI governance through different sector lenses, and the compliance complexity that imposes on businesses operating across those sectors.

For the broader African regulatory environment, the comparative experiment matters. If South Africa’s multi-regulator model produces clear, actionable compliance pathways with proportionate treatment of startups, it may become the template for jurisdictions including Kenya, Ghana, and Rwanda, which are still in early-stage AI policy development. If Nigeria’s centralised model delivers fast licensing and consistent enforcement, it will demonstrate that a single-authority approach is viable at scale in a developing economy. The two largest African economies are running different experiments, and the continent’s regulators are watching.

The 60-Day Window: What Businesses Must Do

When the consultation notice is gazetted, eligible parties — including companies, industry associations, civil society organisations, and academic institutions — will have 60 days to submit written comments to the DCDT. That window, expected to run approximately from late March to late May 2026, is the last substantive opportunity to shape AI liability standards, compliance thresholds, and sector-specific implementation guidance before policy becomes regulation.

Three things matter most for business submissions:

Proportionality provisions for startups and SMEs. The draft policy in its current form contains no small-business carve-out. Companies with under 50 employees or below a defined revenue threshold should advocate explicitly for lighter-touch obligations — not exemptions, but phased timelines and simplified compliance pathways that allow innovation without disregarding risk.

Clarity on cross-sector jurisdiction. Any business that uses AI across sector boundaries needs to push for a clear lead-regulator designation model — a mechanism by which a company with obligations to both the FSCA and the Information Regulator, for example, can identify a primary compliance interlocutor rather than managing parallel regulatory relationships.

Local validation standards for AI models trained on non-African data. This applies particularly to health technology and financial services AI, where models trained on foreign datasets may not perform consistently on South African populations. The consultation is the right place to propose standards that require, and support, local validation testing rather than simply accepting international certification at face value.

Baker McKenzie’s South Africa practice and Fasken Martineau’s Johannesburg office have both published client briefings flagging the consultation as a priority engagement. Industry associations including the South African Institute of Chartered Accountants (SAICA) and the South African Venture Capital and Private Equity Association (SAVCA) are likely to coordinate sector submissions. Companies are best placed either to contribute to these industry submissions or to file independently if their exposure is specific enough to warrant it.

The 60 days will pass quickly. For financial services firms, health technology companies, and AI-native startups operating in South Africa, the calculation is straightforward: engage now and shape the rules, or comply later with rules you did not write.
