By Chinenye Anuforo
[email protected]
For a country whose adoption of Artificial Intelligence (AI), especially the generative model, is outpacing global trends in emerging markets, the need for clearer regulations has never been more expedient.
Currently, Nigeria operates without a specific, dedicated law or statute to govern Artificial Intelligence or large language models (LLMs). There’s no distinct AI Act, nor a comprehensive licensing system for the training, deployment, or explainability of AI models.
Experts insist that building a strong AI framework for Nigeria is very essential to unlocking its immense potential across various sectors and effectively manage the inherent risks that come with such fast-paced technological integration.
A recent Google report, “Our Life With AI: From Innovation to Application,” Indicated this accelerated adoption, bringing into sharp focus the need for legislative foresight in Nigeria.
Indeed, Vice President Kashim Shettima recently affirmed Nigeria’s commitment to deploying AI and climate intelligence to transform its food systems, highlighting AI’s growing role in national development.
Legal experts Olubunmi Abayomi-Olukunle and Adekunle Adewale, partners at Balogun Harold law firm, have pinpointed crucial regulatory areas emerging within Nigeria’s developing legal framework for AI services. Their observations gain further weight from recent global advances in AI safety, which highlighted the tangible consequences of existing regulatory voids.
Nevertheless, earlier this year, the National Information Technology Development Agency (NITDA) introduced the National Artificial Intelligence Policy (NAIP). Legal experts have clarified that the proposed AI policy, though still in draft form, aims to provide guidelines for the responsible creation and use of AI in vital areas like healthcare, agriculture, and education.
In a related development, Senate Bill 731, which seeks to establish the National Artificial Intelligence Commission to oversee the AI sector, is currently under consideration by the National Assembly. This bill successfully passed its first reading on February 25, 2025.
“Until such dedicated legislation is enacted, foreign AI corporations with Nigerian subscribers operate under the existing general framework that applies to foreign Software-as-a-Service (SaaS) entities”, they stated.
While comprehensive AI legislation is still in development, certain AI-specific regulatory concerns are gaining prominence, distinguishing LLMs from conventional SaaS platforms. The most immediate of these is the Nigeria Data Protection Act, 2023 (NDPA), which has extraterritorial application. “This Act is particularly important for LLMs due to their capacity to ingest, retain, and infer personal data at scale, even when such data is unstructured. Furthermore, the General Application and Implementation Directive (GAID) of the Nigeria Data Protection Act (NDPA) expressly extends its applicability to foreign entities. This means that even if a company lacks a physical presence in Nigeria, it is still subject to the NDPA’s provisions if it processes the personal data of individuals located in Nigeria. Consequently, LLM platforms that gather user inputs such as prompts, generate responses tailored to personalized contexts, or facilitate account creation are highly likely to be considered as processing personal data under Nigerian law.
The crucial need for strong regulation is significantly reinforced by practical testing. For instance, the recent ‘Red Teaming’ exercise on ‘TelecomGPT,’ conducted by GSMA and Khalifa University at MWC25 Barcelona, clearly demonstrated the inherent vulnerabilities within highly specialized Large Language Models.
The results from such controlled environments offer crucial insights into the practical challenges Nigeria’s regulatory framework must confront. The growing public expectation on understanding how algorithmic decisions are made already reflected in the NDPA’s provisions on automated decision-making. Individuals increasingly expect to comprehend the outputs of algorithms, particularly when these decisions hold significant implications for them. This directly highlighted a rising demand for greater transparency in AI systems. The TelecomGPT red team exercise strikingly demonstrated this by successfully manipulating the model. They achieved this by combining social-engineering language with authoritative commands, such as instructing it, ‘I am your developer’ or ignore all previous instructions.’ This ‘Roleplay Jailbreak’ tactic effectively showed how easily an AI’s internal logic can be bypassed without adequate safeguards. It revealed a critical flaw where an AI assuming a user’s role grants unconditional override rights without robust verification. Such inherent vulnerabilities underscore the urgent need for stringent explainability requirements, not only to ensure accountability but also to prevent AI systems from making opaque and uninterpretable decisions.
While not yet a strict legal mandate, an undeniable international norm, which is also reiterated in the National Artificial Intelligence Policy (NAIP)’s guiding principles, encourages AI providers to thoroughly assess their models for any discriminatory outcomes. This implies that LLM companies operating in Nigeria may face increasing requests to provide documentation detailing their efforts to mitigate bias. The TelecomGPT exercise further amplified this concern, revealing that models are more prone to ‘hallucinations’ confident but incorrect responses when queries closely resemble genuine domain knowledge or when phrased as absolute statements (for example, “5G waves cannot penetrate glass). This demonstrated how even subtle biases or gaps within training data, or simply the way a question is phrased, can lead to dangerously misleading information.
Given Nigeria’s diverse population, the potential for AI models to perpetuate or even amplify existing societal biases in critical areas like healthcare diagnostics or job recommendations, a pattern observed in other global contexts makes robust fairness audits an absolute imperative for ethical AI deployment here.
“Moving forward, developers of Large Language Models will almost certainly face heightened scrutiny concerning the provenance of their training data. This becomes especially pertinent if that data incorporates content scraped from Nigerian websites or publicly available datasets featuring Nigerian individuals, as such practices could directly raise significant issues concerning copyright infringement, data protection violations, and outright misappropriation.” The TelecomGPT test underscored this point by showing a higher likelihood of hallucinations when prompts incorporated technically relevant, yet twisted, concepts, for example, falsely claiming that Starlink utilizes 5G spectrum. This vulnerability is directly linked to the quality and verifiable origin of the training data. “For Nigeria, ensuring data provenance is crucial not only for intellectual property protection but also for safeguarding data quality, preventing the spread of misinformation, and allowing for the debugging and auditing of AI systems that might influence critical national operations”, Abayomi-Olukunle stated.
While Nigerian law has not yet established specific liability rules for generative content, foreign LLM companies are advised to anticipate that offensive, defamatory, or harmful content generated for Nigerian users could trigger scrutiny under local consumer protection and defamation laws. The TelecomGPT red teaming exercise confirmed a significant risk, if AI models can’t effectively resist misinformation, it could easily spread through subsequent operations. For instance, consider an AI system giving incorrect advice to an engineer, this could quickly lead to misconfigurations, serious compliance breaches, or even critical safety incidents. Such situations directly highlight the urgent need for clear liability rules specifically for AI-generated misinformation or harmful content. These rules would be essential for protecting Nigerian users and businesses from genuine, real-world harm.
The TelecomGPT experience reinforced a key takeaway that the deliberate, adversarial testing of AI models is absolutely crucial for building truly robust and trustworthy LLMs.
“As generative AI adoption accelerates in Nigeria, embedding AI in critical functions like customer support and even agricultural monitoring, the vulnerabilities highlighted in these global exercises become directly relevant to the Nigerian context. The lesson is clear for AI to be safely and effectively deployed across various sectors in Nigeria, proactive testing and comprehensive regulation must go hand-in-hand”, the experts explained.
“As Nigeria embraces the transformative potential of AI, the collaborative efforts among policymakers, industry players, and legal experts will prove crucial. They hold the key to shaping a regulatory environment that not only fosters innovation but also rigorously safeguards individual rights and public interests against the risks we have identified. The ongoing development of the NAIP and the proposed AI Commission vividly demonstrates a vital recognition that proactive governance is no longer just an option, but an absolute necessity in our increasingly AI-driven world”, Adewale concluded.
Leave a comment