Emerging Concerns: State Actors Use AI for Covert Propaganda Campaigns

OpenAI, the company behind ChatGPT, has disclosed that groups based in Russia, China, Iran, and Israel have been using its technology to try to influence political discourse around the world. The revelation has heightened concerns that generative AI could be exploited for covert propaganda as the 2024 presidential election approaches.

OpenAI removed accounts associated with established propaganda operations in Russia, China, and Iran, along with an Israeli political campaign firm and a previously unidentified Russian group referred to as “Bad Grammar.” These groups used OpenAI’s tools to write posts, translate them into multiple languages, and build software that automated posting to social media platforms.

The campaigns appear to have had limited impact: their social media accounts reached only a small number of users and attracted very few followers. Even so, Ben Nimmo, principal investigator on OpenAI’s intelligence and investigations team, noted that AI allowed the groups to produce text at far higher volume, and with fewer errors, than traditional methods. He also cautioned that other groups may be using OpenAI’s tools without the company’s knowledge.

Nimmo stressed the need for continued vigilance, noting that an influence operation that initially struggles to gain attention can suddenly break through if it goes undetected.

Using social media for political influence is nothing new; governments, political parties, and activist groups have long engaged in such activity. Concerns have escalated, however, as AI tools capable of generating realistic text, images, and video have raised fears that false information and covert influence operations will proliferate online.

As worries mount over generative AI’s potential impact on elections, researchers and companies are working on technology to identify AI-generated content. Experts, however, doubt that such detection will ever be fully reliable.

OpenAI’s report detailed how the identified groups used the company’s technology to run influence operations targeting social media platforms and political events in several countries. One instance cited involved AI-generated audio deployed in an attempt to sway Taiwan’s elections; the report also described posts generated with OpenAI’s tools to shape public opinion on a range of geopolitical issues.

These revelations make clear that the misuse of AI for covert propaganda can carry serious consequences, and they underscore the need for greater vigilance and accountability, particularly as the 2024 presidential election approaches. Collaboration among technology companies, government agencies, and international bodies will be essential to counter the evolving threats at the intersection of AI and geopolitical influence.