Sodiq Yusuf: Welcome to the World of AI Deepfakes and Digital Manipulation
Some years ago, if someone told you they saw a video of a prominent politician admitting to election rigging or a celebrity confessing to a crime, your first reaction might have been shock. Today, your first instinct should be suspicion. Why? Because in the world we now live in, truth is no longer sacred. It’s editable. It’s downloadable. And sometimes, it’s entirely fictional.
This is, sadly, our reality, one where Artificial Intelligence (AI) is used not only to solve problems, but also to create new ones. Chief among them? Deepfakes.
How Deepfakes Came About
Deepfakes began as a source of amusement, often appearing as memes or parodies. People used apps like Reface to swap faces with movie characters or to insert their faces into viral skits. At that stage, it seemed harmless and even entertaining. However, things have taken a turn. Deepfakes have transformed from innocent entertainment into powerful tools of manipulation.
In Nigeria, for instance, imagine a fabricated video of a high-profile presidential candidate endorsing violence or rejecting election results. In a country already rife with ethnic tension, such a video wouldn’t be a simple prank; it could ignite serious conflict in a highly volatile environment.
The greatest danger of deepfakes lies beyond the deception itself. Worse, they destroy our ability to trust anything. A few years ago, if a video surfaced showing a leader abusing power, we’d demand accountability. Now? We hesitate. We question its authenticity. We wait for “fact-checkers.” We ask, “Is this real?”
This scepticism, while healthy in moderation, can be exploited. In politics, a real scandal can now be dismissed as a deepfake. “That’s not me, that’s AI,” becomes a new kind of alibi. As a consequence, real crimes might go unpunished. Real truths might get buried.
Deepfakes don’t only make lies look like truth; ironically, they also make truth look like lies.
Manipulating Memory
Here’s the scariest part: our brains aren’t built for this world. When we see a video, especially one with familiar faces and voices, we believe it. Psychologists call this the “truth bias.” We’re naturally inclined to accept what we see and hear as real, especially if it confirms what we already want to believe.
That’s why deepfakes are so effective. They fool our eyes, and our emotions too. And once something is believed emotionally, it becomes part of memory. Even after it’s debunked, the residue remains. Doubt lingers.
You may know the video was fake, but somewhere in your mind, you’re still wondering, “But what if it wasn’t?” That’s how deepfakes manipulate memory, disrupting both public perception and personal recollection.
What Can Be Done?
There are things we can do. But we must move quickly and think wisely.
Digital Literacy Must Go Mainstream
We need to start teaching digital scepticism the way we teach basic hygiene. From secondary schools to NYSC camps, we must educate people on how to question what they see online. A digital detox is not enough. We need digital discernment.
Technology vs. Technology
The same AI that creates deepfakes can also detect them. Companies like Microsoft and Deeptrace are building forensic tools to analyse videos and verify authenticity. But here’s the catch: these tools are not yet widely accessible, especially in places like Nigeria. That needs to change.
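To make the idea concrete, here is a minimal sketch, in Python, of how a frame-level screening tool might work: sample frames from a clip, score each one, and flag the video if the average “synthetic” score is too high. Everything here is illustrative; score_frame is a stub standing in for a trained forensic model, clip.mp4 is a hypothetical filename, and this is not how Microsoft’s or Deeptrace’s actual tools are implemented.

```python
import cv2  # OpenCV: pip install opencv-python

def score_frame(frame) -> float:
    # Placeholder detector. A real forensic tool would run a trained
    # neural network here and return the probability that this frame
    # is synthetic. This stub always answers 0.0.
    return 0.0

def screen_video(path: str, every_n: int = 30, threshold: float = 0.5) -> bool:
    # Sample one frame out of every `every_n`, score each sample, and
    # flag the clip if the average "synthetic" score crosses the threshold.
    cap = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n == 0:
            scores.append(score_frame(frame))
        index += 1
    cap.release()
    return bool(scores) and (sum(scores) / len(scores)) > threshold

print("Likely deepfake:", screen_video("clip.mp4"))
```

The point of sampling frames rather than analysing every one is speed: it is the kind of cheap, real-time screening that platforms would need before a fabricated clip goes viral.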
Platform Responsibility
Social media platforms must do more than slap “misleading” tags. They need real-time detection systems and stronger consequences for malicious actors. If someone shares a deepfake that incites violence or damages reputations, there should be accountability.
Legal Frameworks
Nigeria needs robust legislation to tackle digital manipulation. Current laws are too outdated to handle this emerging threat. If someone creates a fake video of you and posts it online, what recourse do you really have? The law must evolve as fast as the tech does.
Cultural Immunity
Perhaps the most powerful defence is cultural. We must build a society that doesn’t run on rumours. We must normalise patience before outrage. Verification before vitriol. Because even if technology fails us, character must not.
A World Without Anchors
The danger of deepfakes goes beyond technology. It’s about our identity. In a world where anyone can be impersonated, what does authenticity even mean? When your voice and face can be cloned, when your memories can be rewritten by code, what makes you… you?
If we’re not careful, we will enter an era where reality itself becomes negotiable. Where truth becomes a matter of opinion. And where the line between what happened and what was manufactured becomes permanently blurred.
In Yoruba folklore, there is a saying: “Ti a bá fi ojú kan wo àgbàlagbà, a máa rí ìtàn,” which loosely translates to “When you look closely at an elder, you see a story.” However, in today’s world, when you examine a face on your screen, you may not uncover a story at all. Instead, you might encounter a deception disguised in pixels.
We have always said, “Seeing is believing.” But that is no longer the case. Today, seeing is just the starting point for questioning what we encounter. In a world where lies can take on human appearances and fiction can sound like fact, our best chance is to sharpen our critical thinking skills and protect our mental well-being. If we lose our ability to recognise the truth, we do not just lose a tool; we lose our sense of direction.
Without a compass, every path may seem correct.