The incorporation of artificial intelligence (AI) in the form of deepfake technology is emerging as a pressing concern for election security. Recent occurrences, such as an AI-generated deepfake robocall impersonating President Biden, have prompted heightened apprehension regarding the potential threats posed by AI in the upcoming elections.
The widespread availability of generative AI (GenAI) platforms such as ChatGPT and Google’s Gemini, along with unrestricted large language models (LLMs) circulating on the Dark Web, could be harnessed to disrupt the democratic process. These tools can be abused in various ways, including mass influence campaigns, automated trolling, and the creation of deepfake content.
FBI Director Christopher Wray has raised concerns about the use of deepfakes for information warfare, with state-backed actors seeking to shift geopolitical balances. Furthermore, GenAI can automate the creation of networks that disseminate disinformation, eroding public trust in the electoral process.
Padraic O’Reilly, chief innovation officer for CyberSaint, underscores the substantial risk connected with the swift evolution of AI technology. He expresses concern that the use of AI-generated content for microtargeting on social media platforms could result in individuals being exposed to highly persuasive, tailored messaging designed to influence their beliefs and votes.
The potential consequences of combining deepfake technology with social media are disconcerting. There exists a risk of heightened polarization among US citizens, potentially resulting in the creation of divergent “bespoke realities” where individuals subscribe to “alternative facts.”
The absence of quality assurance on social media platforms and the lack of regulation leave the door wide open for malicious actors to exploit the technology for their own agendas. Consequently, it is crucial for election officials and campaigns to be cognizant of the risks associated with GenAI and take steps to defend against them.
James Turgal, vice president of cyber-risk at Optiv, underscores the necessity of maintaining vigilance in overseeing content and providing training to staff and volunteers. Training on AI-powered threats, including social engineering and deepfake video, is vital to ensure that individuals are equipped to respond to suspicious activity.
Ultimately, regulation, including watermarking for audio and video deepfakes, will be indispensable in confronting the risks associated with AI. The Federal Communications Commission (FCC) has already acted against fraudulent AI-generated voice calls, declaring robocalls that use AI-cloned voices illegal.
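The watermarking idea mentioned above rests on cryptographic provenance: a publisher attaches a verifiable tag to authentic media so that altered or fabricated copies fail verification. The following is a minimal, purely illustrative sketch of that verify-on-receipt concept using a keyed hash; real schemes (such as C2PA content credentials) are far more elaborate, and the key name and message bytes here are hypothetical placeholders.

```python
import hashlib
import hmac

# Hypothetical signing key held by the original publisher (assumption:
# in practice this would be asymmetric-key signing, not a shared secret).
SIGNING_KEY = b"publisher-secret-key"

def sign_media(media_bytes: bytes) -> str:
    """Produce a provenance tag to distribute alongside the media."""
    return hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Check that the media still matches the tag issued at publication."""
    expected = sign_media(media_bytes)
    return hmac.compare_digest(expected, tag)

# Placeholder standing in for real audio samples.
clip = b"...audio of an official campaign message..."
tag = sign_media(clip)

print(verify_media(clip, tag))                        # authentic copy
print(verify_media(clip + b"spliced content", tag))   # tampered copy
```

An authentic copy verifies as True, while any spliced or regenerated copy fails, which is the property watermarking regulation aims to make routine for audio and video.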
The rapidly progressing nature of AI technology presents challenges in the formulation of effective regulations. As AI continues to advance, it is imperative to establish safeguards to mitigate the potential risks associated with its use.
In conclusion, the utilization of AI in the form of deepfake technology poses significant challenges for election security. It is essential for stakeholders to be proactive in addressing the potential threats posed by AI and to implement measures to safeguard the democratic process from malicious actors.