Generative AI is the ultimate disinformation amplifier | DW | 26.03.2024

Generative artificial intelligence (GAI) adds a new dimension to the problem of disinformation. Freely available and largely unregulated tools make it possible for anyone to generate false information and fake content in vast quantities. This includes imitating the voices of real people and creating images and videos that are indistinguishable from real ones.

But there is also a positive side. Used wisely, GAI can provide a greater number of content consumers with trustworthy information, thereby counteracting disinformation.

To understand the positives and negatives of GAI, it's first necessary to know what AI is and what's so special about generative AI.

What do machine learning, AI and generative AI mean?

Artificial intelligence refers to a collection of ideas, technologies and techniques that relate to a computer system's capacity to perform tasks that normally require human intelligence. When we talk about AI in the context of journalism, we usually mean machine learning (ML), a subfield of AI.

In basic terms, machine learning is the process of training a piece of software, called a model, to make useful predictions or generate content from data. The roots of machine learning lie in statistics, which can be thought of as the art of extracting knowledge from data. What machine learning does is use data to answer questions. More formally, it refers to the use of algorithms that learn patterns from data and can perform tasks without being explicitly programmed to do so. In other words: they learn.
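To make this concrete, here is a minimal sketch of that idea in Python (the article names no tools; the scikit-learn library and the toy data are assumptions for illustration). The model is never told the rule behind the data; it infers the pattern from examples:

```python
from sklearn.linear_model import LinearRegression

# toy training data following the hidden rule y = 2x + 1;
# the rule itself is never written into the program
X = [[1], [2], [3], [4], [5]]
y = [3, 5, 7, 9, 11]

model = LinearRegression()
model.fit(X, y)              # "training": the model fits its parameters to the data

print(model.predict([[6]]))  # roughly [13.0], a useful prediction for unseen input
```

The useful behavior (predicting about 13 for an input of 6) comes from the data, not from explicit programming, which is the defining trait of machine learning.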

A language model (LM) is a machine learning model that aims to predict and generate plausible language (natural or human-like language). To put it very simply, it is basically a probability model that, using a data set and an algorithm, predicts the next word in a sentence based on the previous words.
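As a toy illustration only (this is a so-called bigram model, far simpler than anything behind a modern chatbot, and the corpus is invented), a few lines of Python show the core idea of predicting the next word from counts over previous words:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# count how often each word follows each other word in the training text
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probabilities(word):
    counts = following[word]
    total = sum(counts.values())
    # probability of each candidate next word, given the previous word
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

Real language models follow the same logic, but learn the probabilities with neural networks trained on vast corpora instead of a simple count table.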

Such models are called generative models, or generative AI, because they create new and original content and data. Traditional AI, on the other hand, focuses on performing preset tasks using preset algorithms and does not create new content.

When models are trained on enormous amounts of data, their complexity and efficacy increase. Early language models could predict the probability of a single word, while modern large language models (LLMs) can predict the probability of sentences, paragraphs and even entire documents based on patterns seen in the past.
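Formally (a standard probability identity, not spelled out in the article), the probability a language model assigns to a whole sequence of words is the product of the probability of each word given all the words before it:

```latex
P(w_1, w_2, \dots, w_n) = \prod_{i=1}^{n} P(w_i \mid w_1, \dots, w_{i-1})
```

Early models truncated the conditioning history to the one or two most recent words; large language models condition on very long histories, which is what lets them score and generate coherent paragraphs and documents.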

A key development in language modeling was the introduction in 2017 of Transformers, a deep learning architecture designed around the idea of attention mechanisms. This innovation allows the model to selectively focus on the most important parts of the input when making a prediction, boosting its ability to capture crucial information. The computer science portal GeeksforGeeks gives Google Street View's house-number identification as an example of an attention mechanism in computer vision that enables models to systematically identify certain parts of an image for processing.
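At its core, the attention mechanism of the 2017 Transformer paper ("Attention Is All You Need") is a weighted average: each position in the input scores every other position for relevance, then mixes their representations accordingly. Here is a minimal NumPy sketch of that scaled dot-product attention (the random toy inputs are an assumption for illustration):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for numerical stability
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # how relevant each position is to each query
    weights = softmax(scores, axis=-1)  # relevance scores turned into probabilities
    return weights @ V                  # weighted mix of the value vectors

# toy example: 4 token positions with 8-dimensional representations
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```

The attention weights are what let the model "selectively focus": positions with high scores contribute more to the output, and unimportant ones are effectively ignored.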

Attention mechanisms also made it possible to process longer sequences by solving memory problems encountered in earlier models. Transformers are the state-of-the-art architecture for a wide variety of language model applications, such as translators and chatbots.

ChatGPT, the best-known chatbot, is based on a language model developed by OpenAI. It is built on the GPT (Generative Pre-trained Transformer) model architecture and is known for its natural language processing capabilities.

Large language models are the algorithmic basis for chatbots like OpenAI’s ChatGPT

What does generative AI mean for disinformation?

Generative AI is the first technology to enter an area that was previously reserved for humans: the autonomous production of content in any form, and the understanding and creation of language and meaning.

And this is precisely what links generative AI to the topic of disinformation — the fact that, today, it is often impossible to tell if content originates from a human or a machine, and if we can trust what we read, see or hear.

Media users are beginning to sense that something is broken in their relationship with the media and are confused. “Some of the signs that we have traditionally used to decide we should trust a piece of information have become distorted,” Vinton G. Cerf, known as one of the “fathers of the internet,” said in a 2024 video podcast by the international law firm Freshfields Bruckhaus Deringer.

Generative tools are different: they bypass many of the traditional principles of journalistic work, such as reliance on trusted sources. We have to say goodbye to the idea that there is an author behind every text or a creator behind every piece of visual content. This connection no longer exists.

What are the risks of ChatGPT and open-source large language models?

Although generative AI tools are still unavailable in some countries because of internet censorship laws and regulations, the launch of ChatGPT by OpenAI in November 2022, and later of its alternatives, was a turning point. Now, a large part of the world's internet users have access to these powerful tools and can use them for their own purposes, whether positive or negative. Widespread use also means the models can continue to learn and become better and even more powerful.

But the underlying LLMs used by ChatGPT and Google's Gemini (formerly Bard) are owned by their respective companies; that is, they are proprietary models. This raises concerns about LLMs' lack of transparency, the use of personal data for training purposes and limited accessibility. There is also significant debate about chatbots' potential to produce disinformation and fake content.

While these two chatbots in particular have garnered significant attention, other powerful open-source large language models, the foundational technology behind these chatbots, are freely available.

Research by Democracy Reporting International, a Berlin-based organization promoting democracy, found these open-source LLMs, when managed by someone with the relevant coding skills, can rival the quality of products like ChatGPT and Gemini.

But, it warned in its December 2023 report, “[u]nlike their more prominent counterparts, … these LLMs frequently lack built-in safeguards, rendering them more prone to misuse in the creation of misinformation or hate speech.”

What concrete negative effects does GAI have on disinformation?

We are seeing a whole range of disinformation created with GAI, from fully AI-generated fake news websites to fake Joe Biden robocalls telling Democrats not to vote.

And with the technology developing so quickly, media systems are having trouble adapting to it, learning how to use it safely and preventing dangers, while researchers are scrambling to identify and analyze the impacts. From the user’s point of view, generative AI is causing a general loss of trust in the media and difficulties in verifying the truthfulness of content, especially around elections. Deep fakes can be used to create non-consensual explicit content using someone’s likeness, leading to severe privacy violations and harm to individuals, particularly women and marginalized communities.

Problem 1: Volume, automation and amplification

With GAI, the volume of disinformation potentially becomes infinite, rendering fact-checking an insufficient tool. The marginal cost of producing disinformation is falling toward zero, and thanks to social media, the cost of dissemination is nearly zero as well.

On top of this, individuals can now use user-friendly apps to easily and quickly generate sophisticated and convincing GAI content such as deep fake videos and voice clones – content that previously required entire teams of tech-savvy individuals to produce. This democratization of deep fake technology lowers the barrier to entry for creating and disseminating false narratives and misleading content online.

Malign actors can easily leverage chatbots to spread falsehoods across the internet at record speed, regardless of the language. Text-to-text chatbots such as ChatGPT or Gemini, and image generators such as Midjourney, DALL-E or Stable Diffusion, can be used to create massive amounts of text as well as highly realistic fake audio, images and videos to spread misinformation and disinformation. This can lead to false narratives, country-specific misinformation, manipulation of public opinion and even harm to individuals or organizations.

In a 2023 study, researchers at the University of Zurich in Switzerland found that generative AI can produce accurate information that is easier to understand, but it can also produce more compelling disinformation. Participants also failed to distinguish between posts on X, formerly Twitter, written by GPT-3 and those written by real people.

GAI applications can be combined to automate the whole process of content production, distribution and amplification. Fully synthetic visual material can be produced from a text prompt, and websites can be programmed automatically.

Problem 2: Disinformation and the public arena’s structural transformation

Digitization has been transforming the public sphere for some time now. Generative AI is yet another element fueling this transformation, but it shouldn't be viewed in isolation: the structural shifts stem mainly from digital media, economic pressures on traditional media organizations and the reconfiguration of attention allocation and information flows. The increase in the volume of AI-generated content, coupled with the difficulty of recognizing that content as AI-generated, is an additional factor in the public sphere's transformation.

Information pollution has causes beyond deliberately generated disinformation. Emily M. Bender, a linguistics professor at the University of Washington, addressed this problem in testimony before the US House Committee on Science, Space and Technology.

Issues

  1. Some reputable media houses are quietly posting synthetic text as if it were real reporting (venerable tech outlet CNET was one of them, although it says it has paused the practice for now after an outcry). Such content can be biased or inaccurate if algorithms aren't designed properly or if the training data sets are inherently biased.
  2. GAI can hallucinate. That means it can produce content that isn't based on existing data or examples provided during the training process but is simply made up. In one infamous example, in its very first demonstration, Google's Bard chatbot (as Gemini was called at the time) claimed that the James Webb Space Telescope had captured the first images of a planet outside our solar system, which wasn't factually true.
  3. GAI has turbocharged plagiarism. NewsGuard, a service that rates the trustworthiness of news websites, was the first to identify the emergence of content farms using AI to copy and rewrite content from mainstream sources without credit. NewsGuard has since identified hundreds of additional unreliable AI-generated websites.
  4. Trust in democratic processes and institutions is eroding. The more polluted our information ecosystem becomes with synthetic text, the harder it will be to find trustworthy sources of information, and the harder it will be to trust them when we’ve found them. UN Secretary General Antonio Guterres sees this as an “existential danger to humanity.”

Problem 3: Authoritarian regimes benefit

ChatGPT reproduces harmful narratives propagated by authoritarian regimes when given hypothetical prompts, finds research by Democracy Reporting International. In one case study, researchers prompted ChatGPT to emulate a reporter from Russia Today, a state-controlled news organization. By doing this, they were able to get ChatGPT to circumvent its safeguards and produce problematic output, such as the claim that Russia “must de-nazify Ukraine,” a common narrative used to justify its 2022 invasion. The research demonstrated the relative ease with which AI chatbots can be co-opted by malicious actors to produce misleading or false information, regardless of the language used.

As such, generative AI models developed in authoritarian countries, possibly with state involvement, have implications that extend beyond the confines of these states. “The world's most technically advanced authoritarian governments have responded to innovations in AI chatbot technology, attempting to ensure that the applications comply with or strengthen their censorship systems. Legal frameworks in at least 21 countries mandate or incentivize digital platforms to deploy machine learning to remove disfavored political, social, and religious speech,” Democracy Reporting International finds. “With user-friendly online tools powered by these models, they are becoming increasingly accessible globally. This ensures that the biases and propaganda originating from these models' home countries will proliferate far beyond their borders.”

Problem 4: GAI could negatively impact elections

Elections and generative AI have a special connection. This is because the actors involved in elections always pursue specific goals: to win power for themselves or their allies, or to influence a foreign country's political landscape. GAI enables such actors to create “unreality,” and it is becoming a weapon in information warfare and influence operations. Such campaigns are mostly coordinated, concerted, evaluated, measured and funded by political or foreign actors.

“These actors see information as a theater of war,” says Carl Miller, the founder of the UK-based Centre for the Analysis of Social Media, in a recent podcast.

There are fears generative AI could undermine democratic processes like elections

Research by the International Center for Journalists found that election disinformation follows common and cyclical patterns regardless of the country examined. For example, the narrative that votes were cast in the name of deceased people, or disinformation about what documents were needed to vote, appeared in a range of nations.

Generative AI is a perfect tool for creating such campaigns. In January 2024, the attorney general of the US state of New Hampshire said his office was investigating an apparent robocall that used artificial intelligence to mimic US President Joe Biden's voice and discourage people from voting in the state's primary election.

Companies like OpenAI are rushing to develop safeguards to ensure GAI is not used in ways that could undermine the election process.

This article is part of Tackling Disinformation: A Learning Guide produced by DW Akademie.

The Learning Guide contains explainers, videos and articles aimed at helping those already working in the field or directly impacted by the issues, such as media professionals, civil society actors, DW Akademie partners and experts.

It offers insights for evaluating media development activities and rethinking approaches to disinformation, alongside practical solutions and expert advice, with a focus on the Global South and Eastern Europe.