SAN FRANCISCO — OpenAI, the maker of ChatGPT, said Thursday it had caught groups in Russia, China, Iran and Israel trying to influence political discourse around the world using its technology, highlighting concerns that generative artificial intelligence is making it easier for state actors to run covert propaganda campaigns as the 2024 presidential election approaches.

OpenAI removed accounts associated with well-known propaganda operations in Russia, China and Iran; an Israeli political campaign firm; and a previously unknown group originating in Russia that the company's researchers dubbed "Bad Grammar." The groups used OpenAI's technology to write posts, translate them into various languages and build software that helped them post automatically to social media.

None of the groups managed to gain much traction; the social media accounts associated with them reached few users and had only a handful of followers, said Ben Nimmo, principal analyst on OpenAI's intelligence and investigations team. Still, OpenAI's report shows that propagandists who have been active on social media for years are using AI technology to boost their campaigns.

“We’ve seen them generate text at a higher volume and with fewer errors than these operations have traditionally managed,” Nimmo, who previously worked at Meta tracking influence operations, said in a briefing with reporters. Nimmo added that other groups may be using OpenAI’s tools without the company’s knowledge.

“This is not a time for complacency,” he said. “History shows that influence operations that have gone nowhere for years can suddenly break out if nobody is looking for them.”

Governments, political parties and activist groups have used social media to try to influence politics for years. After concerns about Russian influence in the 2016 presidential election, social media platforms began paying closer attention to how they were being used to sway voters. The companies generally prohibit governments and political groups from concealing coordinated efforts to influence users, and they require political ads to disclose who paid for them.


Disinformation researchers have raised concerns that as AI tools capable of generating realistic text, images and video become more widely available, it will become harder to detect and respond to misinformation or covert influence operations online. Millions of people are voting in elections around the world this year, and AI-generated deepfakes have already proliferated.

OpenAI, Google and other AI companies are working on technology to identify deepfakes made with their own tools, but such technology remains unproven. Some AI experts doubt that deepfake detectors will ever be fully effective.

Earlier this year, a group affiliated with the Chinese Communist Party allegedly posted AI-generated audio of one candidate in Taiwan's election appearing to endorse another. The politician, Foxconn founder Terry Gou, had not endorsed the other candidate.

In January, voters in New Hampshire's primary received a robocall that purported to be from President Biden but was quickly identified as AI-generated. Last week, a Democratic operative who said he commissioned the robocall was indicted on charges of voter suppression and candidate impersonation.

OpenAI’s report detailed how the five groups used the company’s technology in their attempted influence operations. Spamouflage, a previously known group originating in China, used OpenAI’s technology to research activity on social media and write posts in Chinese, Korean, Japanese and English, the company said. An Iranian group known as the International Union of Virtual Media also used OpenAI’s technology to create articles that it published on its site.

Bad Grammar, the previously unknown group, used OpenAI’s technology to help build a program that could post automatically to the messaging app Telegram. It then used OpenAI’s technology to generate posts and comments in Russian and English arguing that the United States should not support Ukraine, according to the report.


The report also found that an Israeli political campaign firm called Stoic used OpenAI to generate pro-Israel posts about the Gaza war and target them at people in Canada, the United States and Israel. On Wednesday, Facebook owner Meta also called out Stoic’s work, saying it had removed 510 Facebook accounts and 32 Instagram accounts used by the group. Some of the accounts had been hacked, while others were accounts of fictional people, the company told reporters.

The accounts in question, posing as pro-Israel American college students, African Americans and others, often commented on the pages of well-known individuals or media organizations. The comments supported the Israeli military and warned Canadians that “radical Islam” threatened liberal values there, Meta said.

AI played a role in wording some of the comments, which struck real Facebook users as odd and out of context. The operation performed poorly, the company said, attracting only about 2,600 legitimate followers.

Meta acted after the Atlantic Council’s Digital Forensic Research Lab discovered the network while following up on similar operations identified by other researchers and publications.

Over the past year, disinformation researchers have suggested that AI chatbots could be used to hold long, detailed conversations with specific people online, trying to sway them in a particular direction. AI tools could also ingest large amounts of data about individuals and tailor messages directly to them.

OpenAI has not yet seen those more sophisticated uses of AI, Nimmo said. “It’s more of an evolution than a revolution,” he said. “That’s not to say we won’t see it in the future.”

Joseph Menn contributed to this report.
