By Kemol King
Vice President Bharrat Jagdeo alleged during a press conference in December that some editorials published by Stabroek News appear to be heavily generated by OpenAI’s artificial intelligence product, ChatGPT. To be sure, it would be a shame for a newspaper to try to pass off an entirely artificially generated body of work as its own intellectual property.
Still, the online tools often used to detect whether a body of writing is artificially generated are not foolproof. When tested, they can flag more AI generation than is actually there. They are trained to detect patterns, but they lack human ingenuity.
This is not a commentary on the meat of Jagdeo’s accusation against Stabroek News. Rather, this editorial seeks to inject nuance into the discourse about the use of AI products like ChatGPT by Guyanese media professionals.
It is also a call to local media professionals to engage in clear and fruitful dialogue on the need to set boundaries on their use of ChatGPT and similar AI products, collectively referred to as Generative AI. These products are known for their ability to take human prompts and create new content in the form of text, code, and imagery.
ChatGPT and similar products that generate text by predicting sequences of words based on context are called large language models (LLMs).
Let’s be honest. Journalists, like everyone else, are using Generative AI. The use of these products in journalism is not inherently problematic. On the contrary, when used responsibly, AI can be a revolutionary tool that improves quality and efficiency.
AI-powered proofreading tools can streamline the editing process by flagging grammatical errors and subpar syntax. Journalists can also face writer’s block, particularly when writing feature stories. AI tools can provide a starting point for a writer by generating outlines or suggesting structures for articles. AI tools are excellent timesavers for transcribing and preparing summaries from interviews and speeches.
Long-established software like Google Sheets and Microsoft Excel aided journalists with data analysis before ChatGPT ever existed. Newer AI technologies, however, can take this process to new heights by identifying patterns and generating insights with remarkable efficiency. This saves journalists time, so they can focus on investigative reporting and storytelling.
ChatGPT and its competitors can assist in fact-checking by cross-referencing information. It is important to note, however, that these tools cannot replace human verification; they are a supplement, not a substitute.
It is crucial, however, to draw a clear line between using AI to enrich the reporting process and allowing it to undermine the spirit of journalism. The media’s primary responsibility is to serve the public with accurate, credible, and thoughtful reporting. This duty cannot be fulfilled by AI alone; it requires human input.
AI tools like ChatGPT tend to lack cultural nuance and emotional intelligence, both of which are vital in storytelling and investigative journalism. Moreover, these tools cannot conduct interviews, establish relationships with sources, or hold a politician to account at a press briefing. Think of the role journalists played in covering the 2020 elections, a tragic period in this country’s history. During a pandemic that claimed over a thousand lives locally, political operatives repeatedly tried to promote unlawful election declarations and spread disinformation about the electoral process. The contributions of journalists in this period were necessary to bring Guyana’s electoral democracy back up for air after a rigging machinery had dragged its head underwater. AI could not substitute for what journalists did in those five months.
Journalists need not worry about being replaced or rendered redundant. Instead, they can use AI tools to improve the efficiency and quality of journalistic work. What readers must rebuke is not AI use, but intellectual dishonesty. In this regard, serious thought must be dedicated to how media houses can use Generative AI with sophistication and responsibility.
Right now, lazily generated work is appearing all over the news, in letter pages and editorials. The most basic AI writing tools are often distinctive and predictable in their vocabularies and writing styles. Their output is poetic drivel stretched across paragraph after paragraph that, after 15 minutes of reading, teaches readers nothing. What is going on is apparent to discerning readers, and it is an insult to the intelligence of the people who rely on the media for accurate and thoughtful reporting.
Media houses should establish guidelines governing the use of AI, principally to avoid scenarios where the technology is employed to create entire articles without human input. For example, guidelines could endorse the use of AI for tasks like data analysis, transcription, and grammar checks, while ensuring that artificially generated content and findings are always reviewed and approved by a human being. Investigative reporting, opinion pieces, and feature writing could remain primarily the domain of human journalists, as these pieces require critical thinking, empathy, and an understanding of cultural context.
In the long run, media houses could also consider training journalists to use AI tools effectively. Workshops and training sessions can help journalists understand the capabilities and limitations of AI. Such endeavors can enhance efficiency while ensuring that journalists remain at the forefront of content creation.
It is not a crime to use the modern age’s tools and technologies to do a better job. However, journalists must harness the potential of Generative AI responsibly.