How to spot deepfakes ahead of Ghana’s 2024 election
Artificial intelligence (AI) is improving rapidly, and this constant improvement is blurring the line between what is authentic and what is fake online. Ghana’s 2024 general election is only days away, and malicious actors have already begun using generative AI to run misinformation and disinformation campaigns. Because of this, countering deepfakes and the people who deploy them has become more challenging than ever.
It is important to recognise that the devil here may not be the deepfakes themselves but rather the narrative surrounding them, which can undermine the integrity of Ghana’s upcoming election.
Deepfakes are artificial intelligence-generated images and videos that change people’s faces and bodies, mimic their voices, and make them say or do things they never did or said. The U.S.-based Poynter Institute has explained that these AI-generated images or voices are usually added to a video to distort the truth. Experts have identified three types of deepfakes: face swaps, lip syncs, and puppet masters.
A face-swap deepfake projects the face of one person onto a video of another person, often using just a single image of the person being impersonated; several apps make it easy to ‘face-swap’ in this way. A lip-sync deepfake uses AI to adjust the mouth movements in a genuine video of someone so they appear to be saying something else, usually with the help of AI-generated audio or an impersonator. Puppet-master deepfakes, on the other hand, use AI to animate a person’s entire head.
These manipulated videos and images are rampant on social media. A Facebook video published on April 23, 2022, appeared to show Ukrainian President Volodymyr Zelenskyy with a white substance, purported to be cocaine, on his desk. The video was later found to have been altered.
Similarly, another clip appeared to show U.S. President Joe Biden railing against a transgender woman, saying, “You will never be a real woman.” It circulated in an Instagram post made on February 5, 2023. No evidence of Biden making the remark was found; on the contrary, there is ample evidence of President Biden supporting transgender people.
Here in Ghana, when a video emerged on social media showing Vice President Dr Mahamudu Bawumia singing a Chinese song, many Ghanaians believed it, and only a few questioned its authenticity. Similarly, a video of ex-President John Mahama calling Ghanaians “small-mind” went viral on X (formerly Twitter) and was later found to have been manipulated.
None of these events actually happened, yet many who saw the videos on social media believed them. Below, I share a few clues to help you detect deepfakes as Ghanaians prepare to elect their next President and 275 lawmakers on December 7, 2024.
#1: Look for clues indicating when something is off
Some experts have pointed out that audio synchronisation, or the lack of it, is a significant clue for detecting a deepfake video. Check whether the audio syncs with the person’s mouth movements; if it does not, you have a possible clue. I fact-checked one such video in which Nigerian singer Damini Ebunoluwa Ogulu, popularly called Burna Boy, encouraged Liberians to attend a particular event in their country. The audio and the movement of the musician’s mouth were not in sync. A further investigation I conducted using Google Reverse Image Search showed the video was a manipulated version of a 2022 lifestyle interview the singer granted to English comedian Amelia Dimoldenberg as part of her series Chicken Shop Date. The verification tool InVID later confirmed that the movement of the singer’s mouth and the voice lacked coordination. I suggest you also watch the movement of other body parts, such as the eyes, head, and hands, for possible clues that a video has been manipulated.
#2: Conduct a lateral search to confirm or debunk a video’s accuracy
Searching for cues and clues to detect manipulated videos or images is never enough. As a serious fact-checker, you have to conduct a “lateral search”, connecting to other credible sites to verify what you have discovered instead of getting stuck on the site where you found the video. Explaining the need for a lateral search, a Pressbooks guide for student fact-checkers notes that if the “Site is untrustworthy, then what the site says about itself is most likely untrustworthy, as well. And, even if the site is generally trustworthy, it is inclined to paint the most favourable picture of its expertise and credibility possible.” This is understandable because, as a professional fact-checker, whenever I have to evaluate a website’s content, I don’t spend much time on the site itself. I search online using reverse-image search tools, such as Google Reverse Image Search, TinEye, or Yandex, and read what other authoritative sources say about the site or the content being evaluated. The guide says professional fact-checkers “…open up many tabs in their browser, piecing together the site they’re investigating. Many of the questions they ask are the same as the vertical readers scrolling up and down the pages of the sources they are evaluating. But unlike those readers, they realise that the truth is more likely to be found in the network of links to (and commentaries about) the site than in the site itself.” In short, check what other credible sources say about the site or the content you are evaluating.
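For readers comfortable with a little scripting, the reverse-image-search step can be automated. The minimal Python sketch below builds lookup URLs for the three engines named above from a frame you have saved from a suspicious video; the engine endpoints are publicly documented patterns that were correct at the time of writing but may change, and the image URL is hypothetical.

```python
from urllib.parse import quote

def reverse_image_search_urls(image_url: str) -> dict:
    """Return lookup URLs for an image on several reverse-image-search
    engines. Endpoints are correct as of writing but may change."""
    encoded = quote(image_url, safe="")  # percent-encode the whole URL
    return {
        "google_lens": f"https://lens.google.com/uploadbyurl?url={encoded}",
        "tineye": f"https://tineye.com/search?url={encoded}",
        "yandex": f"https://yandex.com/images/search?rpt=imageview&url={encoded}",
    }

# Example: a frame captured from a suspicious video (hypothetical URL)
urls = reverse_image_search_urls("https://example.com/suspect-frame.jpg")
for engine, url in urls.items():
    print(engine, url)
```

Opening each printed URL in a browser performs the lateral search: you see where else the image appears and what other sources say about it, rather than relying on the page where you first found it.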
#3: Look out for visual glitches
Glitches are often visible in AI-generated videos because the technology can slip up when the person moves parts of their body. However, you may not see these cues when you play the video at normal speed; you will need to slow the footage down before the glitches show. On YouTube, the playback settings allow users to reduce the speed to 0.25x. Experts identified a slight glitch in a clip of the deepfake Tom Cruise when he turned back around to face the camera; the glitch is not observable at normal speed, but it appears once the video is slowed down. Sometimes visual glitches can be seen when the person moves their head or rolls their eyes. In one video, actor Steve Buscemi’s face was projected onto fellow actor Jennifer Lawrence’s, a characteristic of a face-swap deepfake. Watching the video closely, you can see glitches when the actor moves her head.
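If the video is a file you have downloaded rather than a YouTube clip, the same slow-motion trick can be done with the free ffmpeg tool. The Python sketch below only constructs and prints the ffmpeg command for a quarter-speed copy of a clip; it assumes ffmpeg is installed on your machine, and the file names are hypothetical.

```python
# Sketch: build an ffmpeg command that slows a clip to quarter speed,
# making frame-level glitches easier to spot. We only construct and
# print the command here; running it requires ffmpeg to be installed.
def slowdown_command(src: str, dst: str) -> list:
    factor = 0.25  # target playback speed (like YouTube's 0.25x setting)
    return [
        "ffmpeg", "-i", src,
        # setpts stretches the video timestamps (1/0.25 = 4x longer)
        "-filter:v", f"setpts={1 / factor}*PTS",
        # atempo accepts only 0.5-2.0 per stage, so chain two stages
        "-filter:a", "atempo=0.5,atempo=0.5",
        dst,
    ]

cmd = slowdown_command("suspect_clip.mp4", "suspect_clip_slow.mp4")
print(" ".join(cmd))  # run with subprocess.run(cmd, check=True)
```

Stepping through the slowed copy frame by frame makes warping around the jawline, teeth, or hairline far easier to catch than watching at full speed.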
#4: Unusual lighting or shadows
Another indication of manipulated content is unusual lighting or shadows in an image or video. Genuine videos and images usually have consistent, moderate lighting, the main exception being amateur footage. Monitor the shadow(s) around the person in the video or image and look for anything unusual or distorted. Does the shadow match the person in the video or image? Do the shadows match the objects visible in the frame? If you notice anything odd, that should prompt you to widen your research. Unnatural motion in a video is another of the telltale patterns and subtle inconsistencies characteristic of deepfake images and videos.
#5: Mistrust online content
Spotting deepfakes on social media is one thing; believing the content is another. What makes the difference is a discerning social media user or voter who will not believe deepfake content until it has been verified against a credible, authentic source. The job of Ghanaian policymakers and technologists is to stay ahead of malicious actors during this election period and to encourage the electorate to treat online content with scepticism. The Senior Vice-President at KnowBe4 Africa, Anna Maria Collard, has argued that encouraging the public to mistrust online content is important to combating manipulated content. She said, “Fostering a culture of zero-trust mindset through cybersecurity mindfulness programs (CMP) helps to equip users to deal with deepfake and other AI-powered cyber threats that are difficult to defend against with technology alone.” But I see a challenge here: this suggestion might advance politicians’ broader strategy of creating widespread cynicism about the institutions charged with unearthing the truth and telling it as it is. The media therefore have a significant role to play in preserving trust in, and the integrity of, this year’s and future Ghanaian elections. Nicos Vekiarides, the Chief Executive Officer at Attestiv, was emphatic that the “Best defense against deepfakes is education and using available detection tools.” The Ghanaian Government, the media, and independent fact-checking organisations should provide digital education so the electorate can watch out for deepfakes and other manipulated images in traditional and social media. Vekiarides added, “Just as consumers have learned to spot phishing attempts, voters must learn to spot disinformation.”
Conclusion
The Ghanaian Government needs to act swiftly to tackle deepfakes because events worldwide show that the phenomenon does not affect only politicians and celebrities. An emerging type, deepfake pornography, in which a person’s image is manipulated to make it appear that they are engaging in a sexual act, is fast becoming a major crisis.
Already, South Korea is experiencing what the BBC has reported as “a digital sex crime epidemic.” Reuters reported on August 30, 2024, that police in South Korea have dealt with 297 deepfake sex crimes, with the majority of the victims and perpetrators being teenagers.
******
The author, A. Kwabena Brakopowers, is a private legal practitioner, a researcher on AI and synthetic media, a journalist, and a development communication practitioner who draws on his deep expertise in fact-checking and researching deepfakes and generative AI to help organisations navigate the opportunities and risks these game-changing technologies present. You can reach him at Brakomen@outlook.com