With artificial intelligence getting smarter every day, cybersecurity experts, election officials and voters have been fretting about the possibility that malicious actors — at home or abroad — might use these automated tools to plunge the 2024 U.S. election into chaos.
Intelligence officials recently warned lawmakers that Russia and China are using AI to sow division in the U.S. Nearly half of Americans believe that AI-generated content will interfere with this year’s election process. And as residents of more than 50 countries head to the polls this year, elections across Europe and Asia have already been rocked by AI.
There are multiple scenarios that worry election security experts, including artificially generated voice or video messages, fake documents sprinkled into real leaks, and massive AI-powered bot armies encouraging Americans to distrust their institutions.
“What AI does is, it just makes it easier to scale,” said Kim Wyman, a former Washington secretary of state and federal election security adviser. “Ultimately, this is all stacked in the bad actors’ favor.”
While these forms of election meddling aren’t entirely new, political and security experts fear that AI could make them more potent. But not every threat is equally likely or equally serious. Election security experts who spoke to The Cipher Brief described very different prospects for three of the most commonly cited threats.
High likelihood, high impact: Online propaganda armies
The use of AI for election interference that most worries security experts involves a tried-and-true method of meddling: setting up fake social media accounts and using them to spread divisive messages.
It’s happened before. In 2016, the Russian government created an army of phony American personas that pushed incendiary arguments about race and other controversial topics online. Russia followed a similar playbook in 2018, 2020 and 2022, although coordination between U.S. government agencies and tech companies to take down the Kremlin’s accounts blunted their impact.
Today, Russia and other foreign powers remain intent on sowing chaos in U.S. elections, and officials believe they may use artificial intelligence to amplify their impact. According to a U.S. intelligence community assessment issued earlier this month, Russian operatives might tap AI “to improve their capabilities and reach into Western audiences.”
Using AI, bad actors can quickly create hundreds of fake social media accounts. AI can also furnish these accounts with realistic-looking profile photos and halfway-decent websites to boost their legitimacy, something that experts have seen in action since at least 2019. The upgrades since then mean that AI text-generation services like ChatGPT can now produce more natural-sounding English-language posts than most foreign operatives could write on their own. Russia hired hundreds of workers for its “troll factory” in 2016; today, it would only need a handful of operatives using AI to churn out the same volume of inauthentic accounts and divisive content.
“There is absolutely an opportunity here for foreign adversaries to get themselves further into the mix,” David Levine, a senior election integrity fellow at the German Marshall Fund’s Alliance for Securing Democracy, told The Cipher Brief. (The fact that there are fellowships for “election integrity” suggests the gravity of the problem.)
China and Iran stand to gain the most from AI-fueled social media stunts, Levine said, because the technology can help them match Russia’s painstakingly developed prowess for online influence campaigns.
Influence operations represent adversaries’ best chance to use AI for election interference, because social media companies have scaled back their efforts to neutralize such interference, while a federal court case and right-wing backlash have effectively destroyed efforts by the FBI and DHS’s Cybersecurity and Infrastructure Security Agency (CISA) to help the companies with this work. Government agencies say they have only flagged foreign misinformation for social media companies and haven’t pressured companies to take down content, but prominent conservatives have framed the partnership as a form of state censorship, and this message has resonated in the courts and on Capitol Hill. The FBI is only now tentatively resuming some of these conversations.
AI firms weren’t caught up in that firestorm, which focused on social media content moderation, but they may shy away from similar content moderation policies after seeing how those policies have burned the social media companies.
Bottom line: Officials believe these scenarios are both highly likely and quite dangerous.
Moderate-to-high likelihood, low impact: Phony media
The threat of AI-generated media tricking voters and stirring up controversy is no longer theoretical.
In January, some New Hampshire voters received what many took to be a robocall from President Joe Biden instructing them not to vote in their state’s presidential primary. But the voice wasn’t Biden’s; it was an automated impersonation of the president, a creation of AI. Security experts think that robocall was only the beginning of a wave of synthetic mimicries — or “deep fakes” — that will target voters.
“Expect to see a lot more of that,” said Lee Foster, the co-founder of AI security firm Aspect Labs and former director of information operations intelligence analysis for the security firm Mandiant.
These deep fakes — such as a doctored clip of a politician supposedly saying something offensive, or an announcement by an imposter election official that rampant fraud has marred the voting results — would offer fertile ground for disinformation for several reasons.
First, there is ample source material for bad actors to train their AI models; real video and audio of these public figures are widely available online.
“I see the barrier to entry as being quite low in these cases,” said Jessica Ji, a research analyst on the CyberAI Project at Georgetown University’s Center for Security and Emerging Technology. “You don't need very much data at all to create a reasonably convincing audio deep fake.”
Second, most people who know their election supervisors or trust their political candidates are likely to trust what they see or hear them say — and trust is key to any successful deep fake. During Wyman’s 12 years as a county auditor, “people in my county knew me,” she said. “I had been on television and on radio quite a bit. So an audio call to their home from me would be easy to believe.”
But just because these deep fakes are likely to proliferate doesn’t mean they’re likely to change election results.
For one thing, there may be few undecided voters for deep fakes to persuade — though that of course may change as the campaign unfolds. Meanwhile, the more high-profile a deep fake is, the more journalists will see it and debunk it in a way that may satisfy those remaining open-minded voters. It might be possible to target a deep fake at a crucial swing county, but the smaller the population of intended targets, the fewer opportunities there are for success.
And as good as AI-generated audio has gotten, it’s still imperfect, said Josh Goldstein, a CyberAI Project research fellow, and “there is no guarantee that bad actors will use the best tools available.”
Bottom line: Deep-fake technology exists, and it will likely be used, but there are limits to how much damage it can do.
Low likelihood, moderate impact: Fake documents
In addition to generating audio and video, AI can also create fraudulent documents that closely resemble the real thing. Adversaries seeking to interfere in elections could take advantage of this capability by hacking a political campaign, leaking its files, and peppering the leaks with fake AI-generated documents full of inflammatory material.
By training an AI model on real campaign documents — whether stolen in a hack or found online — bad actors could create authentic-looking counterfeits containing racist language, private admissions of candidate weaknesses, or dire financial projections. It’s easy to imagine any of those scenarios causing serious trouble for a campaign. AI might even be able to study and mimic the writing styles of specific high-profile political consultants or campaign staffers.
“The ability to forge any of those documents or any of those emails becomes a concern,” Mick Baccio, a global security adviser for the data-analysis firm Splunk’s research team, told The Cipher Brief. Baccio pored over the Hillary Clinton campaign’s hacked documents in 2016 as a threat intelligence analyst at the White House.
But in addition to the same issues that may undermine deep fakes’ effectiveness, there’s another important reason why a leak of fake campaign documents isn’t a major election security concern: the trade-off between quality and quantity.
A large batch of AI-generated counterfeits would almost certainly contain glaring mistakes that would require a manual review to fix. “Scaling up the fake documents might create unnecessary risk of exposure,” Goldstein said, “if each of the fake documents carries some risk of being exposed.” The attacker might instead decide to only create a handful of fake documents, which would make the review process easier. But the fewer the fake documents in the leak, the less likely any one of them would be to end up in front of a persuadable voter.
Bottom line: This scenario would require a lot of work for a limited reward, which is why experts don’t see it as a major concern.
Planning for a new spin on old threats
Officials at CISA won’t share specifics as to how they are planning for the above scenarios, but experts say that election officials and tech companies should prepare for AI-powered interference in the same way they guard against other threats: by recognizing that they won’t be able to prevent every attack and planning for how to respond when something bad happens.
“What we're seeing across the country…is a lot of preparation,” including “trainings like tabletop exercises where they go through threat scenarios,” said Wyman, now a senior fellow at the Bipartisan Policy Center’s Elections Project.
Those exercises often involve rapid responses to emerging misinformation, a tactic that experts say is crucial to countering online lies.
“The way we can fight this is not trying to detect every attack, every threat, but instead finding a way to push out the most reliable, the most transparent, the most accurate information about our election and thus satisfying the demand for quality information,” Doowan Lee, the chief information officer at the Trust in Media Cooperative, said during an event this month hosted by The Cipher Brief.
CISA has led the federal government’s effort to help prepare state and local election officials for AI risks. Cait Conley, a CISA senior adviser who oversees the agency’s election security work, said in a statement that “while the proliferation of generative AI capabilities enhances some of these preexisting risks — such as by making malicious cyber tools more easily available and enhancing foreign influence operations and disinformation campaigns — the foundational principles of security remain the same.”
Many election security experts want AI companies to ramp up efforts to prevent the abuse of their technologies. Leading developers recently committed to protecting elections from AI deep fakes, but Foster said they could be doing much more, including creating threat intelligence teams that watch for suspicious and politically charged activity and “catch that at the beginning of that pipeline.”
That vigilance may be especially important, given that AI is likely to grow even more powerful before this election season is over.
“It's a long time till November,” Foster said, “in terms of where development of AI platforms and tools may go.”