With the 2024 U.S. election season already the most volatile in recent history, election officials across the country are scrambling to prepare for a new threat that has quickly become a top concern: a potential tidal wave of artificial intelligence–fueled lies designed to confuse and agitate angry, scared and paranoid American voters.
AI has advanced rapidly in recent years, as ChatGPT and services that generate fake images, audio, and video constantly improve their capabilities. These advances have jolted the election community, which was already reeling from an increase in mis- and disinformation, conspiracy theories, and death threats largely tied to a surge in right-wing extremism in 2020.
Now, election officials are trying to predict what will happen when new AI tools collide with the uniquely fraught moment of the 2024 election, with President Joe Biden dropping out of the race and endorsing Vice President Kamala Harris against former President Donald Trump, in a contest that both parties describe as existential for the country’s future.
Given the state-run nature of U.S. elections — in which the federal government offers support but sets few rules — these AI preparations vary widely. Arizona has taken one of the most proactive approaches, organizing multiple eye-opening simulations of AI disruptions to prepare local election supervisors for what could be a rocky few months.
“We want them to be aware of the technology,” said Michael Moore, the chief information security officer for Arizona’s top election official, Secretary of State Adrian Fontes.
In interviews and written statements, officials from three key 2024 battleground states — Arizona, North Carolina, and Pennsylvania — described how they’re guarding against AI-powered election interference, from educating voters to wargaming crises.
“We’re [up] against unprecedented challenges,” Moore told The Cipher Brief. “You can literally damage society if you’re not calling out things that are false that are being spread rapidly.”
The shooting at a July 13 Trump rally that injured the former president was just the latest example of a high-profile crisis that offers fertile ground for AI-fueled misinformation. As The Cipher Brief reported last week, a flood of false narratives — chief among them that the Biden administration had planned the attack and that the Trump campaign had staged it — reached millions of people within hours of the shooting.
“When information is scarce and emotions are high, many actors capitalize on fear and uncertainty to sow division and distrust,” said Matthew Weil, executive director of the Bipartisan Policy Center’s Democracy Program. “AI could supercharge that vulnerability.”
Playing “Whac-A-Mole” against the threats
Election officials have spent years trying to confront mis- and disinformation about how voting works, and they expect AI to turbocharge that stream of new falsehoods in this election season.
“The kinds of threats that we’ve seen previously — it really ramps them up,” Moore said.
These threats include social media bot armies and fake news websites, now far easier to generate, that amplify divisive messages, as well as deepfakes — fraudulent audio or video files purporting to depict things that didn’t happen — that could change people’s behavior.
In addition, as more and more online services experiment with AI, false information can surface in trustworthy places, including Google search results. “We don’t want someone to be disenfranchised or not show up to cast their ballot because of inaccurate information that was generated from a bot,” said Karen Brinson Bell, executive director of the North Carolina State Board of Elections.
With these possibilities brewing, election officials worry about AI nightmare scenarios in which artificially generated media sparks mass confusion, depresses turnout, or even causes riots at polling places — an outcome many consider plausible, given the Jan. 6, 2021, insurrection at the U.S. Capitol.
Fractured partnerships between election officials, federal agencies, and technology companies exacerbate all of these AI risks.
The Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) used to contact social media companies when election officials raised concerns about online misinformation, which often prompted the companies to take down material. But after a fierce right-wing backlash against such coordination — in which leading Republican politicians accused CISA of masterminding a government censorship regime against conservatives — CISA ended that work and reduced its collaboration with social media firms.
Even before that content-moderation debacle, state officials complained that their federal partners had failed to notify them about emerging misinformation trends, such as viral videos falsely purporting to show poll-worker misconduct. “How can we more timely be made aware of that information,” Brinson Bell said, “so we can…get the correct, accurate information out to the public so that they aren’t misled?”
Election officials are confronting the AI challenges with limited resources, overstretched personnel, and a growing workload, including implementing new election laws and tightening physical security amid increasing death threats.
“All of us are just trying to figure out how to prioritize this in the midst of everything else,” Brinson Bell said, comparing the situation to a game of Whac-A-Mole.
When the Covid-19 pandemic struck during the 2020 election, “we had to learn to be public health officials,” Brinson Bell said. Now, she added, AI could be considered “our pandemic of 2024.”
AI election war games
To reduce the likelihood of AI chaos, state leaders are focusing on educating election officials who communicate with voters. Some states have found that the best way to teach election workers about AI is to hold exercises that simulate the effects of an AI-fueled misinformation crisis.
Arizona has held three such exercises, including one with journalists in May. “We were definitely surprising folks,” Moore said. “They didn’t understand how readily accessible the capabilities of making this content was.”
At the May exercise, employees from the nonprofit education group CivAI “did a live demo and actually made content in front of the whole audience of people to show them how easy it is,” Moore said. Toshi Hoo, director of the Institute for the Future’s Emerging Media Lab, also created AI deepfakes of Fontes and several county election supervisors to show off the technology.
Fontes’ office plans to hold three exercises specifically for law enforcement officials in the northern, central, and southern regions of the state, Moore said.
North Carolina has hosted tabletop exercises to prepare election officials for various hazards, including severe weather, and the state is integrating misinformation into those trainings. Only a few counties participate, but the state distributes the insights broadly.
As for Brinson Bell’s own AI education, she cited a “very enlightening” workshop in January at which election officials and tech experts collaborated to test AI models’ accuracy. When Brinson Bell asked five different AI chatbots about North Carolina’s election practices, several responded with misinformation. “That is really disconcerting,” she said. “It’s probably what has resonated the most with me about the challenges that we could possibly face.”
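The audit pattern Brinson Bell describes is simple: pose the same election-practice questions to several chatbots and compare the answers against official sources. As a rough illustration only, here is a minimal, hypothetical Python sketch of that pattern; the question set, expected keywords, and stub chatbot are placeholder assumptions, not North Carolina’s actual methodology or any vendor’s API.

```python
from typing import Callable

# Placeholder type: in practice each entry would wrap a real chatbot API client.
ChatbotFn = Callable[[str], str]

# Illustrative ground truth only; a real audit would cite official state sources.
QUESTIONS = {
    "What ID do I need to vote in person in North Carolina?": ["photo id"],
    "When do polls close on Election Day in North Carolina?": ["7:30"],
}

def audit(chatbots: dict[str, ChatbotFn]) -> list[tuple[str, str, str]]:
    """Return (bot, question, answer) tuples for answers missing expected facts."""
    flagged = []
    for name, ask in chatbots.items():
        for question, expected_keywords in QUESTIONS.items():
            answer = ask(question)
            # Crude keyword check: flags answers that omit the expected facts.
            if not all(k in answer.lower() for k in expected_keywords):
                flagged.append((name, question, answer))
    return flagged

if __name__ == "__main__":
    # Stub standing in for a real chatbot call, to keep the sketch runnable.
    demo_bots = {"stub-bot": lambda q: "Polls close at 8:00 p.m."}
    for bot, question, answer in audit(demo_bots):
        print(f"[{bot}] {question!r} -> possibly inaccurate: {answer!r}")
```

Keyword matching is far too crude to catch real misinformation; the January workshop relied on election officials and tech experts reviewing the chatbots’ answers by hand.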
These exercises are part of broader training programs that give local officials a basic understanding of this emerging threat vector.
Arizona holds regular interagency meetings about election security, as well as monthly meetings with election supervisors from the state’s 15 counties. North Carolina convenes election staffers from its 100 counties twice a month for “huddles” that have occasionally featured AI presentations. And at the Pennsylvania Department of State, which oversees elections across 67 counties, press staffers are “undergoing a series of AI-related trainings” that cover “the pros and cons of generative AI, its capabilities, and how to spot its use,” the department said in a statement.
The hope is that once election workers understand the basics of AI, they can warn voters about its potential misuse.
“We really need to try and get people as inoculated as possible,” Moore said, “and get them to understand, ‘People are going to try and trick you. They’ve got brand new tools to do that.’ So just look out for it.”
Leaning on others
Despite all these efforts, individual states often lack the resources and expertise to track and combat AI threats. That’s where government and industry partnerships come in.
CISA, the lead federal agency working on election security, has participated in Arizona’s AI exercises, and the agency is “spreading what we have created locally” to other parts of the country, Moore said. In North Carolina, all counties receive threat information from CISA as members of the Elections Infrastructure Information Sharing & Analysis Center. And in Pennsylvania, an “Election Threats Task Force” is coordinating with CISA, the FBI, and relevant state authorities “to investigate potential AI-generated threats” and glean best practices from technical experts, a Department of State spokesperson said.
“While generative AI will not introduce fundamentally new risks this year, it will intensify existing risks, and CISA is working to ensure state and local election [officials] don’t have to fight this battle alone,” Cait Conley, a CISA senior adviser overseeing the agency’s election security mission, said in a statement.
Partnerships with the tech industry will also prove crucial to combating AI threats.
North Carolina and the Bipartisan Policy Center are holding a tabletop exercise with tech companies this month to understand how AI could disrupt elections and how to prevent those disruptions. (BPC previously held such exercises in Georgia, Pennsylvania, and Michigan, and it plans to hold another one in Ohio this month.)
“The connectivity has weakened between the election officials responsible for administering elections and those entities that host the public conversation around the issues,” Weil said.
Brinson Bell said she and her fellow election administrators “need a commitment from those companies to recognize [that] there can be both positive and negative influences on elections.”
Arizona is interested in forming deeper relationships with AI companies. OpenAI presented at one of Arizona’s tabletop exercises, and the state has worked with major vendors on projects that aren’t ready to be announced. Still, Moore said, “I would like to talk to them more.”
Conversations between states are ongoing too. Brinson Bell co-chairs a working group of election officials that she said is “working on an AI product” to distribute to state leaders. And Moore has heard from counterparts across the country who are curious about Arizona’s AI exercises, especially the one that involved journalists. “I don’t think anyone’s ever done that before,” he said.
Technological hurricane
Cybersecurity experts and election officials agree that it is difficult to predict how AI will shape the rest of the election season.
“The uncertainty is very hard to prepare for,” Brinson Bell said. She compared AI to North Carolina’s hurricane season forecast: “We're expecting more hurricanes, but you never know where they're going to land.”
Equally unclear is how effectively the election ecosystem is preparing for these potential disruptions. If AI influence campaigns pop up online, newly wary social media firms may not rush to take them down. And while CISA has released guidance documents on generative AI’s potential impact on elections and on how clear communication can enhance election security, some election officials may not have time to read and apply those insights.
The challenges that election officials face in understanding AI and tracking its consequences reflect a broader struggle to raise societal awareness of these technological dangers.
“We’re behind the eight ball here,” Moore said. “We’re going to have to solve this very quickly, or society’s going to get real weird real quick.”