
OpenAI Disrupts Disinformation Campaigns From Russia, China That Used Its Systems To Influence Public Opinion


Topline

OpenAI identified and disrupted five influence operations involving users from Russia, China, Iran and Israel that were using its AI technology, including ChatGPT, to generate content aimed at deceptively influencing public opinion and political discourse.

Key Facts

Those behind the operations used AI to generate content shared across a variety of platforms, to write code for bots and to create a false appearance of engagement by posting disingenuous replies to social media posts, the company said.

Two operations originated from Russia—one involving the well-established Doppelganger campaign, which used AI to generate false content and comments, and the other involving a “previously unreported” operation the company calls Bad Grammar, which OpenAI said used its models to code a bot that posted short, political comments on Telegram.

OpenAI said the Chinese operation known as Spamouflage, notorious for its influence efforts across Facebook and Instagram, used its models to research social media activity and generate text-based content in multiple languages and across multiple platforms.

OpenAI linked one campaign to an Israeli political campaign firm called Stoic, which used AI to generate posts about Gaza across Instagram, Facebook and X, formerly known as Twitter, targeting audiences in Canada, the U.S. and Israel.

The operations also generated content on Russia’s invasion of Ukraine, Indian elections, Western politics and criticisms of the Chinese government, OpenAI said in its first-ever report on the subject.

Contra

OpenAI concluded AI did little to enhance the reach of these influence operations. Citing the Brookings Institution’s “Breakout Scale,” which measures the impact of influence operations, OpenAI said none of the identified operations scored higher than a two out of five, meaning the generated content never broke out among authentic communities of users.

Key Background

Influence operations across social media platforms have captured the attention of the tech world since 2016, when Russian-sponsored entities were identified trying to sow discord ahead of the presidential election in favor of then-candidate Donald Trump. Since then, tech companies have been trying to monitor similar activities, often summarizing their efforts in regular reports. Last month, Microsoft found evidence that a new Russian-backed election influence campaign was underway ahead of the 2024 election.

Tangent

Meta also released a report this week linking influence operations to Stoic, saying it removed hundreds of fake Facebook and Instagram accounts tied to the firm. The Meta report likewise identified AI as a source of some of the content posted by those accounts.
