AI, journalism and fact-checking in Ghana: navigating the maze
The deployment of artificial intelligence (AI) in the news industry will transform newsroom practice, scale the work of media outlets, improve the efficiency of journalists, and shape the quality of the work the media produce.
Already, artificial intelligence is being used across the news production process, from story discovery and story production to story distribution. Newsrooms are using machine learning to analyse massive datasets and discover patterns, and journalists are creating templates so that computers can write data-driven stories, freeing reporters from routine assignments to attend to larger and more complex projects. Elsewhere, newsrooms are using AI to personalise story recommendations for their audiences.
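As a simple illustration of the personalisation idea, the Python sketch below ranks a handful of candidate stories by how often a reader has engaged with each topic. The headlines, topics, and scoring rule are invented for this example; production recommendation systems learn reader preferences from far richer behavioural data.

```python
# A toy content-based recommender; all topics, headlines and weights are invented.
from collections import Counter

reading_history = ["economy", "economy", "elections", "sports", "economy"]

candidate_stories = {
    "Cedi stabilises after central bank intervention": "economy",
    "Black Stars name squad for qualifier": "sports",
    "EC outlines timetable for voter registration": "elections",
}

# Count how often the reader has opened stories on each topic,
# then rank the candidate stories by that interest score.
interest = Counter(reading_history)
ranked = sorted(candidate_stories.items(),
                key=lambda item: interest[item[1]],
                reverse=True)

for headline, topic in ranked:
    print(f"[{topic}] {headline}")
```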
Research has shown that media outlets have adopted AI owing to factors including “recent technological advancements, market pressures partially from the industry’s financial challenges, competitive dynamics with a focus on innovation, and the pervasive sense of uncertainty, hype, and hope surrounding AI.” However, the potential to increase newsroom efficiency has been identified as the central motivator for the adoption of AI.
The American computer and cognitive scientist John McCarthy is credited with coining the term “artificial intelligence”, which he used in a 1955 proposal for the research workshop held at Dartmouth College the following year. According to him, AI is “the science and engineering of making intelligent machines, especially intelligent computer programs.” Since then, other scientists have proposed various ways of making machines display human-like intelligence.
However, the English mathematician Alan Turing is said to have anticipated the notion of AI when, in 1950, he proposed “The Imitation Game” as the ultimate test of whether a machine was intelligent: whether it could imitate a human being well enough that its answers to questions were indistinguishable from a person’s. The phrase “the Turing Test” now refers to this proposal, which Turing offered as a practical way of addressing the question of whether machines can think.
“I believe that in about fifty years’ time it will be possible to program computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning. …I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machine thinking without expecting to be contradicted,” Turing, referred to as the father of modern computer science, wrote.
It is clear today that so much has changed since 1950, when Alan Turing proposed the “Imitation Game” test. There is currently a flurry of excitement everywhere following the introduction of OpenAI’s ChatGPT and the GPT-3 and GPT-4 models behind it, all generative language systems built for conversation and content creation. Two broad classifications are commonly distinguished under AI: Generative AI and Machine Learning. While the two share a common foundation, they differ significantly in their applications, methodologies, and outcomes.
Machine Learning focuses on learning from data to make predictions or decisions. In other words, Machine Learning refers to the changes in systems that enable them to perform tasks associated with artificial intelligence, such as recognition, diagnosis, planning, robot control, and prediction. Generative AI, on the other hand, uses algorithms and models that generate new content, such as text, photos, code, videos, 3D renderings, and music, mimicking human-like creative processes. The major distinction between the two is that Generative AI algorithms are designed to create new data, while Machine Learning algorithms analyse existing data.
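To make the distinction concrete, the toy sketch below (assuming scikit-learn is installed, and using invented headlines) first analyses existing labelled data to predict a category, which is the machine-learning pattern, and then stitches new text together from the same corpus, a crude stand-in for what generative models do at vastly greater scale.

```python
# Illustrative contrast only; the headlines and labels are invented.
import random
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# --- Machine Learning: analyse existing data to make a prediction ---
headlines = ["team wins league title", "cedi falls against the dollar",
             "striker scores late winner", "central bank raises policy rate"]
labels = ["sport", "business", "sport", "business"]

vectoriser = CountVectorizer()
features = vectoriser.fit_transform(headlines)
classifier = MultinomialNB().fit(features, labels)
# A new, unseen headline is classified (most likely as "sport").
print(classifier.predict(vectoriser.transform(["striker wins title for team"])))

# --- Generative AI (toy analogue): create new content from learned patterns ---
words = " ".join(headlines).split()
bigrams = {}
for current, nxt in zip(words, words[1:]):
    bigrams.setdefault(current, []).append(nxt)

word = random.choice(words)
generated = [word]
for _ in range(6):
    word = random.choice(bigrams.get(word, words))  # fall back to any word
    generated.append(word)
print("generated text:", " ".join(generated))
```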
It is worth noting that an artificial intelligence system learns from experience, uses that learning to reason, recognises images, solves complex problems, understands language and its nuances, and creates perspectives, among other capabilities. As is evident across the world, every aspect of news production, from story discovery through story production to story distribution, can be affected by machine learning. The adoption of AI in Ghana’s news industry will benefit journalists and media outlets on three primary levels: (a) production of texts, (b) interaction with the audience, and (c) performance of mundane tasks, including the writing of press releases. In automatic content production, for instance, AI algorithms can convert structured data such as sports results and weather forecasts into informative, narrative texts, producing stories with or without the intervention of journalists.
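A minimal sketch of that kind of template-driven automation, assuming a hypothetical structured feed of football results, might look like the following. The fixture data is invented and real systems are considerably more sophisticated, but the principle of filling a narrative template from data is the same.

```python
# A toy "robot journalism" template; the fixture data below is invented.
match = {
    "home": "Asante Kotoko", "away": "Hearts of Oak",
    "home_goals": 2, "away_goals": 1,
    "venue": "Baba Yara Stadium", "day": "Sunday",
}

def write_report(m):
    """Turn one row of structured results data into a short narrative sentence."""
    if m["home_goals"] > m["away_goals"]:
        outcome = f'{m["home"]} beat {m["away"]} {m["home_goals"]}-{m["away_goals"]}'
    elif m["home_goals"] < m["away_goals"]:
        outcome = f'{m["away"]} won {m["away_goals"]}-{m["home_goals"]} away to {m["home"]}'
    else:
        outcome = f'{m["home"]} and {m["away"]} drew {m["home_goals"]}-{m["home_goals"]}'
    return f'{outcome} at {m["venue"]} on {m["day"]}.'

print(write_report(match))
```

Because the narrative is derived entirely from the structured fields, an editor can review or publish it with little or no rewriting.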
The Paris Charter on AI and Journalism, adopted on November 10, 2023, captured the essence of the age when it said: “AI, spanning from basic automation to analytical and creative systems, introduces a new category of technologies with an unparalleled capacity to intersect with human thought, knowledge, and creativity. It represents a considerable shift in information gathering, truth-seeking, storytelling, and the dissemination of ideas. As such, it will profoundly alter the technical, economic and social conditions of journalism and editorial practice.”
Similarly, in the area of fact-checking, AI tools can help Ghanaian fact-checking bodies and media outlets with the detection of trending topics for media and information literacy (MIL) interventions, comment moderation, the collection of information, the identification and verification of mis- and disinformation content, and the automatic translation of text and audio. When effectively employed, AI can make Ghanaian journalists and researchers more efficient and give them the space to pursue life-changing stories and interviews that cannot be generated by these AI-powered tools.
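As an illustration of one of these uses, the hypothetical sketch below (again assuming scikit-learn) matches an incoming post against claims a fact-checking desk has already reviewed and routes anything without a close match to a human. The claims, the threshold, and the workflow are invented for this example and do not describe any particular tool.

```python
# Toy claim-matching sketch; the claims and threshold are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

checked_claims = [
    "government has built 100 new hospitals this year",
    "the new vaccine causes infertility in women",
    "the electoral commission deleted names from the voters register",
]
incoming = "viral post claims the vaccine causes infertility"

vectoriser = TfidfVectorizer().fit(checked_claims + [incoming])
scores = cosine_similarity(vectoriser.transform([incoming]),
                           vectoriser.transform(checked_claims))[0]

best = scores.argmax()
if scores[best] > 0.3:  # threshold chosen purely for illustration
    print(f"possible match with previously checked claim: {checked_claims[best]!r}")
else:
    print("no close match; route the post to a human fact-checker")
```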
There are concerns across the world that AI will replace news workers or much of the work they do. At the moment, AI aids journalists, but no one can guarantee that this will remain the state of affairs in the coming years. What is true is that AI has matured sufficiently to replace the practice described as “armchair journalism”, whereby journalists produce reports from the newsroom without going into the field to interview sources. It is these routine tasks, such as the writing of press releases and other stories devoid of human emotion, that AI is likely to take over, because these tools have been trained to perform them, often better.
Although AI tools can write routine stories well, some tasks remain best suited to humans, including situations where complex communication or expert thinking is required. Ghanaian journalists should therefore sharpen their skills, learn how the available AI platforms operate, and move beyond “he said, she said” reporting. Putting the all-important human touch into their reports is what will distinguish the work of journalists from that of AI tools.
Today, better-resourced media organisations in Europe, America and Asia, such as the Associated Press and Bloomberg, employ AI, including automated content generation, in their story production to improve both the quality and the speed of their output. However, the majority of media outlets, especially smaller ones, make extensive use of AI products and infrastructure developed by major tech companies like Google, Amazon, and Microsoft. Newsrooms in Ghana can equally use third-party solutions from these platform companies as story discovery and reporting tools.
Again, media outlets, including The New York Times, have invested in artificial intelligence by hiring people specialised in artificial intelligence, machine learning, data science and mobile engineering. In a 2016 memo, Bloomberg Editor-in-Chief John Micklethwait told his staff: “automated journalism has the potential to make all our jobs more interesting…The time spent laboriously trying to chase down facts can be spent trying to explain them. We can impose order, transparency and rigour in a field which is something of a wild west at the moment.” Ghanaian journalists, mindful of the need to protect the country’s democratic structures by promoting public accountability and good governance, should take this advice to heart.
While encouraging the use of AI in the news production process in Ghana, we need to acknowledge that there is an army of people scheming right now to deploy the same tools to cause havoc around the world: to destabilise peaceful countries, defame political opponents, misinform the public, remove democratically elected governments, twist the arms of voters, and influence opinion in favour of their paymasters. Such abuses should be expected, and we will see more of them in the coming years. Notwithstanding the narrow nature of today’s artificial intelligence, there is a need for an ethical framework to guide AI use in journalism. The framework should focus on steps to evaluate the quality of data and algorithms, analyse potential bias in models, and ensure transparency in the use of AI-based tools.
Despite its benefits, there are grave concerns and questions about the quality of the outputs created by these AI tools, the erosion of ethical principles and core values of journalism, and the challenge to the right to information. The Paris Charter on AI and Journalism noted that: “AI systems have the potential, depending on their design, governance and application, to revolutionise the global information landscape. However, they also present a structural challenge to the right to information. The right to information flows from the freedom to seek, receive and access reliable information. It is rooted in the international legal framework, including the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights, and the International Partnership for Information and Democracy. This right underpins the fundamental freedoms of opinion and expression.”
These ethical concerns can be effectively addressed by programmers of AI tools, Ghanaian media outlets, and policymakers, backed by an appropriate legal infrastructure. Ghanaian media outlets need to be deliberate about the way they use platform AI to promote the public good. Additionally, media outlets and journalists must be transparent with their readers whenever they use AI to produce or distribute content. Media outlets need to draw a clear line between authentic content and synthetic content generated with the help of AI tools. In other words, local media outlets should refrain from using AI-generated content that mimics real-world captures and recordings to mislead the public.
As part of efforts to develop an ethical framework, the Paris Charter on AI and Journalism outlines ten core principles that can guide Ghanaian media outlets in their interaction with AI. These include: (a) journalism ethics guide the way media outlets and journalists use technology, (b) media outlets prioritise human agency, (c) AI systems used in journalism undergo prior, independent evaluation, (d) media outlets are always accountable for the content they publish, and (e) media outlets maintain transparency in their use of AI systems. It is further suggested that developers of AI tools should credit the sources of their information, compensate Ghanaian authors whose content is used to train their tools, and respect the intellectual property rights of rights holders.
To participate effectively in this age of AI, Ghanaian media outlets need to invest in artificial intelligence: hire experts in data science and machine learning, enter into partnerships with tech companies that have developed AI products, build in-house AI products where resources allow, and train their journalists in the use of AI platforms developed by third parties. We need to remember that a functioning democracy requires an informed public, and Ghanaian journalists must help their audiences to participate fully in public life without fear or favour. As always, quality journalism will be the tool for achieving this objective in the country.
I have no doubt in my mind that AI will enhance the capabilities of Ghanaian journalists, save them time to attend to serious and complex topics, improve their overall efficiency, and increase the productivity of the country’s mass media industry. However, Ghanaian journalists, media outlets, programmers, and policymakers must play an active role in the governance of AI systems to ensure ethical compliance, respect for copyright, compensation for authors whose works are used to train AI models, and the promotion of the public good, and to guard against rent extraction by platform companies.
*****
The author, A. Kwabena Brakopowers, is a private legal practitioner, journalist, development communication practitioner, and fact-checking consultant who has written extensively on topics concerning information disorder, law, quality journalism, development communication, and international politics. You can reach him at Brakomen@outlook.com