Emerging Dangers of AI-Driven Political Persuasion in U.S. Elections
In early 2024, residents across New Hampshire received phone calls featuring a voice resembling President Joe Biden, urging Democrats to skip the primary election. While the message sounded genuine, it was entirely fabricated using artificial intelligence technology. This incident marked a new chapter in the evolution of AI-generated political misinformation.
From Simple Deepfakes to Sophisticated Persuasion Machines
What once seemed like a novelty, crude AI-generated voice impersonations, has rapidly advanced into a complex ecosystem capable of producing highly realistic synthetic video and audio within minutes. Tools such as OpenAI’s Sora now enable the creation of convincing fake content involving politicians, celebrities, and even fabricated news segments. The concern that elections could be flooded with such deceptive media has become mainstream, reflecting the growing sophistication of AI tools.
However, the real threat extends beyond mere imitation. Recent peer-reviewed research reveals that AI chatbots can actively influence voter opinions, shifting political attitudes more effectively than traditional campaign advertisements. This transition from passive mimicry to active persuasion represents a profound challenge for democratic processes.
How AI Personalizes and Amplifies Political Influence
Modern AI systems do more than replicate voices or faces: they engage in dynamic conversations, interpret emotional cues, and adjust their messaging style to maximize persuasive impact. These AI agents can coordinate with other specialized models to generate tailored images, videos, and audio content, creating a seamless and highly targeted influence operation without human intervention.
Previously, orchestrating large-scale online influence campaigns required extensive human resources managing fake accounts and meme factories. Today, these efforts can be fully automated, cost-effective, and nearly invisible. The same AI technologies powering customer support and educational tools can be repurposed to subtly sway political opinions or reinforce government narratives. This influence can infiltrate everyday platforms such as social media, language learning apps, dating services, and voice assistants, potentially embedding political persuasion into routine digital interactions.
The Economics of AI-Powered Voter Targeting
The financial barrier to deploying AI-driven personalized political messaging is surprisingly low. For under $1 million, it is feasible to generate customized, conversational content for every registered voter in the United States. Assuming an average of 10 brief exchanges per voter, the cost per interaction remains minimal, making it economically viable to reach the relatively small pool of swing voters who decided pivotal elections such as the 2016 race for just a few thousand dollars.
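The arithmetic behind this claim can be sketched as a back-of-envelope model. The article specifies only the sub-$1 million budget and roughly ten exchanges per voter; the registered-voter count and swing-pool size below are illustrative assumptions, not figures from the text:

```python
# Back-of-envelope cost model for AI-personalized voter outreach.
# All inputs are illustrative assumptions for the sake of the sketch.

TOTAL_BUDGET = 1_000_000         # article's upper bound: under $1 million
REGISTERED_VOTERS = 160_000_000  # assumed approximate U.S. registered-voter count
EXCHANGES_PER_VOTER = 10         # brief conversational exchanges per voter

total_interactions = REGISTERED_VOTERS * EXCHANGES_PER_VOTER
cost_per_interaction = TOTAL_BUDGET / total_interactions
print(f"Cost per interaction: ${cost_per_interaction:.6f}")  # $0.000625

# Narrow the same campaign to a hypothetical pool of decisive swing voters.
SWING_VOTERS = 100_000           # assumed size of a pivotal swing-voter pool
swing_cost = SWING_VOTERS * EXCHANGES_PER_VOTER * cost_per_interaction
print(f"Cost to reach swing pool: ${swing_cost:,.2f}")  # $625.00
```

Even if real per-interaction costs were an order of magnitude higher, targeting a decisive swing-voter pool would still cost only thousands of dollars, which is the economic point the paragraph above makes.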
Given the scale and global attention on U.S. elections, the stakes are exceptionally high. Without proactive measures, the 2026 midterms or the 2028 presidential election could be decisively influenced by whoever masters AI-powered persuasion first.
Understanding the 2028 Election Landscape
While some experts have downplayed AI’s electoral risks, emerging evidence suggests a rapidly shifting reality. Studies demonstrate that advanced models like GPT-4 can outperform human communication experts in crafting persuasive messages on divisive political issues. In controlled experiments across the U.S., Canada, Poland, and the U.K., brief chatbot interactions swayed voter attitudes by up to 10 percentage points, significantly exceeding the impact of traditional political ads from recent election cycles. When AI models were fine-tuned specifically for persuasion, opinion shifts reached an astonishing 25 percentage points.
Access to powerful AI tools is no longer limited to large corporations. Open-source language models, which can be downloaded and customized by anyone with internet access, are closing the performance gap with commercial systems. This democratization of AI technology enables a wide range of actors, from well-funded organizations to grassroots groups, to deploy scalable, politically persuasive AI campaigns.
Internationally, AI-driven influence operations are already underway. In India’s 2024 general election, millions of dollars were reportedly invested in AI tools to segment voters, identify persuadable individuals, and deliver personalized messages via robocalls and chatbots. Similarly, Taiwan has uncovered China-linked campaigns using generative AI to produce disinformation, including deepfakes and biased language model outputs aligned with the Chinese Communist Party’s narratives.
It is only a matter of time before such tactics become prevalent in U.S. elections. Foreign adversaries like China, Russia, and Iran maintain extensive networks of troll farms and covert influence agents, which AI-generated, locally tailored political content can supercharge. With minimal tuning, AI can impersonate community leaders or activists without any human presence on the ground. Political campaigns themselves are poised to adopt these technologies, leveraging AI to generate and test hundreds of tailored messages in real time, optimizing voter persuasion at unprecedented scale and speed.
Policy Gaps and Regulatory Challenges
Despite the mounting risks, U.S. policymakers have largely failed to address the broader threat of AI-driven political persuasion. Legislative efforts have focused narrowly on deepfake content, neglecting the wider ecosystem of AI-enabled influence operations.
In contrast, the European Union’s 2024 AI Act classifies election-related AI persuasion as a “high-risk” application, imposing stringent requirements on systems designed to influence voting behavior. Administrative AI tools used for campaign logistics are exempt, but those aimed at shaping political beliefs face strict oversight.
The United States, however, lacks comprehensive regulations defining political influence operations or establishing standards for AI-generated political content. Existing measures are fragmented: updates to Federal Election Commission fraud provisions, limited Federal Communications Commission disclosure rules for broadcast ads, and state-level deepfake laws together fall far short of covering the vast digital landscape of political messaging.
Responsibility for detecting covert AI-driven campaigns currently falls on private technology companies, each with varying policies and enforcement capabilities. Platforms like Meta and Google require disclosure for AI-generated political ads, while others like X (formerly Twitter) have been less proactive, and TikTok bans paid political advertising altogether. Yet, these rules apply only to paid, public content, leaving private and organic AI persuasion campaigns largely unregulated and invisible.
Some companies have voluntarily published reports exposing inauthentic accounts and influence operations, but these efforts lack independent oversight and do not prevent determined actors from exploiting open-source AI models and off-platform infrastructure.
Building a Robust Defense Against AI-Driven Influence
Banning AI from political discourse is neither feasible nor desirable. AI can enhance democratic engagement: candidate chatbots, for example, can clarify policy positions, answer voter questions, and simplify complex issues. Research also indicates AI’s potential to reduce belief in conspiracy theories.
Nonetheless, the U.S. must implement strategic measures to mitigate AI persuasion risks:
- Screen Foreign AI Technologies: Rigorous evaluation of AI products, especially those originating from geopolitical rivals like China, Russia, and Iran, is essential. This includes assessing apps, platforms, and games for embedded political messaging or biased recommendation algorithms before widespread adoption. Coordination among intelligence agencies, regulators, and tech platforms is critical to identify and manage these threats.
- Establish Clear AI Governance: The U.S. should lead in defining technical standards and regulatory frameworks governing AI systems capable of generating political content. This includes restricting access to high-performance computing resources that enable large-scale foreign persuasion campaigns and developing disclosure requirements that balance transparency with First Amendment protections.
- Develop International Norms and Enforcement: Foreign adversaries will attempt to circumvent safeguards using offshore servers, open-source models, and third-party intermediaries. Multilateral agreements should establish norms condemning AI-enabled election interference, backed by coordinated sanctions and public accountability. Shared monitoring infrastructure and aligned provenance standards will enhance detection and response capabilities.
Addressing AI-driven political persuasion requires global cooperation. Influence campaigns transcend borders, and open-source AI tools will persist. The objective is to increase the cost and reduce the window of opportunity for malicious actors to manipulate democratic processes undetected.
Preparing for the Next Era of Political Influence
The era of AI-powered persuasion is arriving, and adversaries are already poised to exploit it. Meanwhile, U.S. legal frameworks remain outdated, regulatory guardrails are insufficient, and oversight is largely voluntary. If the past decade was defined by viral misinformation and doctored media, the coming years will be shaped by subtle, personalized messages crafted to sway opinions with uncanny precision.
For nations like China, Russia, and Iran, exploiting America’s open information environment is a strategic priority. The United States must respond with a comprehensive strategy that recognizes AI persuasion as an immediate reality, not a distant threat. This entails rigorous risk assessment, robust standards, and a resilient technical and legal infrastructure to safeguard democratic discourse. Delaying action until manipulation becomes visible will mean losing critical ground in protecting electoral integrity.
