Kabir Adeyemo

There is a growing unease in Nigeria’s digital space, and few have captured it as sharply as tech lawyer Kabir Adeyemo, who warns that the speed of artificial intelligence development is far ahead of the legal brakes meant to keep society safe. “AI is no longer a distant future; it is the system making decisions around you before you even realize you are part of its data,” he cautions, setting the tone for a conversation that feels both urgent and unsettling. His words land with a sense that something deeper lies beneath, something that demands closer attention as Nigeria moves into an era shaped by algorithms rather than policies.

As the discussion unfolds, Kabir starts by drawing a line between innovation and regulation, a gap that has become more visible with high-risk AI applications emerging across finance, education, identity verification, and political communication. He notes that Nigeria’s legal structure, though evolving, still leans on broader instruments such as the Nigerian Data Protection Act 2023, which governs how personal data is processed by automated systems. But he stresses that while this law offers a foundation, it does not fully anticipate the complex ways AI can distort truth, power, and rights. This shift creates a subtle tension, and it is this tension that pushes him to examine the legal risks behind AI’s rising influence.

From here, his attention turns to the growing problem of AI-manipulated content. Deepfakes, he explains, present a unique danger because they not only mislead the public but also threaten democratic order. He references how global frameworks like the International Covenant on Civil and Political Rights impose limits on expressions that can disrupt public order. Yet, he observes that Nigeria lacks explicit rules mandating the labeling of AI-generated media. This gap worries him because such content can spread during elections, targeting citizens in ways that are hard to trace. The thought lingers, creating a sense that a larger threat is emerging, one that could escalate if left unchecked.

Just as the conversation begins to settle, Kabir shifts to a different but connected problem: accountability. AI systems, he explains, are not just tools; they have become actors capable of harming individuals and institutions. He highlights cases in global markets where major tech players like Meta, Google, and OpenAI face scrutiny over algorithmic harm. This international pressure, he says, is a reminder that Nigeria must introduce rules that make digital platforms responsible when their models enable fraud, discrimination, or political interference. His tone sharpens as he points out that without a clear duty of care, platforms may continue to push risks downward to ordinary Nigerians who have little power to defend themselves.

The Push for Stronger Accountability Mechanisms

This focus on responsibility leads him to examine the fragile enforcement system. Nigeria’s regulators, from NITDA to the NCC, are adapting, but Kabir stresses that enforcement cannot exist only within the country. Cybercrime today moves across borders with the same speed as shared data, and AI only amplifies this. He explains that frameworks like the Budapest Convention on Cybercrime offer cooperation tools that Nigeria increasingly relies on, especially in tracking AI-assisted fraud, identity theft, and coordinated digital attacks. The way he frames it makes it clear that AI does not respect boundaries, and neither should the laws designed to manage it.

From cross-border enforcement, he moves naturally into the subject of governance, noting that the world now watches how regulators respond to the actions of dominant AI developers. He mentions how the European Union, United States, and China have taken different approaches to AI governance, with the EU’s AI Act standing out as a comprehensive blueprint. He suggests that Nigeria must extract lessons from these global regulations, not by copying them, but by adapting their core principles (transparency, risk classification, and user protection) to fit Nigeria’s unique digital environment. His argument builds, shaping a picture of a country standing at a crossroads.

A Call for Nigerian-Rooted AI Governance Regulation

Then comes his stronger point: the need for AI laws that reflect Nigeria’s realities. Kabir explains that Nigeria’s digital ecosystem is marked by young innovators, weak enforcement capacity, and fast-moving informal markets. Any governance model that does not consider these realities will fail. He proposes clearer standards for testing high-risk AI, stronger disclosure requirements for automated decisions, and a monitoring structure that ensures companies do not hide behind “black-box” algorithms. His view makes it clear that AI governance must be proactive, not reactive.

As the discussion deepens, he brings up recent global incidents: AI tools generating harmful political content, biometric systems misidentifying individuals, and automated systems enabling financial scams. He uses these examples to show that Nigeria is not insulated from global AI failures. Kabir’s voice grows firmer as he explains that what happens elsewhere today will happen here tomorrow if the right governance tools are not in place. The implication is stark, and it pushes the narrative toward a natural conclusion.

With the issues laid bare, Kabir returns to his central message: the future of AI in Nigeria depends on striking a balance between innovation and public safety. He insists that AI must grow, but it must grow responsibly. He emphasizes that Nigeria’s legal system must evolve quickly, drawing from global best practices while remaining tailored to the country’s legal traditions, technological realities, and socio-political context. He ends by urging policymakers, companies, and citizens to treat AI governance not as a distant policy debate but as an immediate national priority.

In the final moment, there is a quiet but powerful sense that the message is not just about law but about the direction Nigeria chooses to take in a world increasingly shaped by intelligent machines. Kabir’s insights do not merely warn; they guide, reminding the country that while AI may be rewriting the digital world at breathtaking speed, the law must be ready to stand firm where human rights, democratic trust, and public safety are at stake.
