When Deepfakes Become Doctrine

OPINION — Since U.S. and Israeli strikes began against Iranian military and nuclear infrastructure in late February, two wars have been running simultaneously. One is kinetic. The other involves something the world has not fully reckoned with: the systematic use of artificial intelligence to manufacture reality, at scale, in real time, during active armed conflict.

Within days of the opening strikes, AI-generated video of missile impacts on the USS Abraham Lincoln was spreading across TikTok. Fabricated footage of downed U.S. fighter jets circulated on Facebook and Instagram. Tehran Times published what appeared to be satellite imagery of a U.S. radar base in Qatar showing structural damage from the strikes. BBC Verify confirmed the image was AI-generated, built from genuine satellite data of a different location and manipulated using Google AI tools. None of it was real. All of it spread.


The social media intelligence firm Cyabra documented more than 145 million views of Iranian-linked disinformation content in under two weeks. The New York Times identified over 110 unique deepfakes promoting pro-Iran narratives in the same window. These are not the crude influence operations of a decade ago. They are the product of an adversary that has been building this capability methodically and has now deployed it at wartime scale.

Understanding why this matters requires a short detour through what Iranian propaganda actually used to look like.

During the Iran-Iraq War, Tehran’s media strategy relied on radio broadcasts and print. Its efforts to persuade Iraqi Shia populations to shift allegiances were largely unsuccessful. Limited reach, poor targeting, no feedback loop. During the 1991 Gulf War, Iraq’s disinformation was described by scholars as extreme exaggerations easily ridiculed in the Western press. Baghdad claimed it had shot down dozens of allied aircraft. The press verified it had not. That was the cycle.

The digital era brought sock puppets and recycled footage. These operations required significant human labor and were detectable with basic verification tools. An account posting video from the 2015 Syrian conflict while presenting it as something current could be caught by reverse image search in minutes. The barrier to debunking was low.
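
To illustrate why that barrier was so low: reverse image search rests on perceptual hashing, which fingerprints what a frame looks like rather than its exact bytes. The sketch below, a simplified difference hash (dHash) in plain Python, uses small grids standing in for downscaled grayscale frames; all names and data are illustrative, not any platform's actual implementation:

```python
def dhash(grid):
    """Difference hash: emit one bit per cell, set when a pixel is
    brighter than its right-hand neighbour. `grid` is a small
    rows x (cols + 1) matrix of grayscale values, standing in for
    a downscaled video frame."""
    bits = 0
    for row in grid:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# A recycled frame that has only been uniformly brightened keeps the
# same brightness gradients, so its hash is unchanged.
original = [[(3 * r + 5 * c) % 11 for c in range(9)] for r in range(8)]
brightened = [[v + 20 for v in row] for row in original]
print(hamming(dhash(original), dhash(brightened)))  # 0: flagged as the same frame
```

Because the hash survives brightness shifts, crops, and recompression, a repost of old footage matched known archives in minutes. Generative AI defeats this entirely: a synthetic frame has no archival twin to match against.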

December 2023 marked the first real break. Iran’s IRGC-linked group Cotton Sandstorm hijacked streaming services in the UAE, UK, and Canada and broadcast a deepfake newscast. An AI-generated anchor delivered Tehran’s narrative on the Gaza conflict to viewers who believed they were watching legitimate news. Microsoft, analyzing the operation afterward, called it the “first Iranian influence operation where AI played a key component” and a “fast and significant expansion” of Iranian capabilities.

June 2025 accelerated the model. The European Digital Media Observatory documented the 12-day Israel-Iran conflict as “The First AI War,” the first time in a major conflict that more misinformation was created through generative AI than through traditional methods. The three most-viewed fake videos collectively amassed over 100 million views.

March 2026 builds on that precedent, at significantly greater scale, with meaningful tactical innovations added.

The first is coordinated architecture. Cyabra’s forensic analysis found tens of thousands of inauthentic accounts distributing identical AI-generated assets simultaneously across every major platform, with synchronized posting windows and coordinated hashtag clusters pointing to centralized production. The content was not organic. It was engineered.
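
Synchronized posting windows are, in principle, a detectable signature. As a hedged illustration (hypothetical data and function names, not Cyabra's actual methodology), a minimal detector buckets posts by content fingerprint and time window and flags clusters where many distinct accounts push the identical asset at once:

```python
from collections import defaultdict

def flag_synchronized(posts, window_seconds=60, min_accounts=5):
    """Group posts by (content hash, time bucket) and flag clusters where
    many distinct accounts push identical content in the same window."""
    buckets = defaultdict(set)
    for account, content_hash, timestamp in posts:
        bucket = int(timestamp // window_seconds)
        buckets[(content_hash, bucket)].add(account)
    return {key: accounts for key, accounts in buckets.items()
            if len(accounts) >= min_accounts}

# Six distinct accounts posting the same asset within seconds get flagged;
# scattered one-off posts do not.
posts = [(f"acct{i}", "asset_a", 100 + i) for i in range(6)]
posts += [("acct9", "asset_b", 100), ("acct1", "asset_c", 5000)]
flagged = flag_synchronized(posts)
print(flagged)  # one flagged cluster: ('asset_a', 1) with six accounts
```

The hard part in practice is not this arithmetic but scale and evasion: real campaigns jitter timestamps and mutate assets slightly, which is why platform-level detection lags the operators.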

The second is what journalist Craig Silverman has called “forensic cosplay”: the fabrication of technical-looking verification tools designed to discredit authentic evidence. In one documented case, fabricated heatmap visualizations were deployed to label photographs taken by credentialed photojournalists at a strike site in eastern Tehran as AI-generated. AI forensics experts who reviewed the heatmaps found them semantically incoherent. The thread nonetheless reached hundreds of thousands of views before corrections could follow. In a second case, a fake “Empirical Research and Forecasting Institute” published fabricated Error Level Analysis of a New York Times photograph, conducting the analysis on a screenshot of an Instagram post rather than the original image. That methodological error renders the output meaningless. The false conclusion still attracted over 600,000 views on X.

This is a different category of operation from making false things look real. It is making real things look false. The verification infrastructure itself becomes the target.

The third element is the amplification model. Iran does not operate alone. The Foundation for Defense of Democracies documented what it calls an “authoritarian media playbook” in which Russian bot networks launder Iranian content while Chinese state-aligned media echoes anti-U.S. narratives. No centralized coordination is required. Each actor pursues its own anti-Western objectives, and the compounding effect across the global information environment far exceeds what any single actor could achieve independently. In June 2025, Cyabra documented an Iranian bot network in the UK that had been spreading pro-Scottish independence and anti-Brexit content. It went completely silent for sixteen days following the military strikes on Iran, then returned with explicitly pro-Iran messaging. State-directed, clearly. Deniable, carefully.

What is most consequential here is not the volume of Iranian deepfakes. It is the underlying strategic logic of what they are designed to accomplish.

Traditional propaganda is built to persuade audiences toward specific false beliefs. Iranian AI operations in this conflict appear calibrated to achieve something more durable: the destruction of the shared evidentiary foundation that makes accountability possible at all. When any image can plausibly be AI-generated, when forensic tools can be fabricated, and when platforms cannot distinguish authentic from synthetic at scale, the machinery of verification collapses. You do not need to win arguments about what happened. You only need audiences to conclude that nothing can be known.

Law scholars Danielle Citron and Robert Chesney named this the “Liar’s Dividend” in 2018: as deepfake awareness grows, actors gain the ability to dismiss genuine evidence as fabricated. Empirical research published in the American Political Science Review in 2025 confirmed the hypothesis: false claims of misinformation do generate statistically significant increases in public support for political actors facing accountability. That research centered largely on text-based scandals; given the dramatic improvements in synthetic images and video since, it is reasonable to expect a similar effect with visual media today. Iran has operationalized this principle. By circulating enough obviously synthetic content to seed generalized skepticism, it creates cover for dismissing authentic documentation of what actually occurred.

That logic runs in two directions at the same time. Abroad, Iran deploys deepfakes to project military capability and deny accountability for strikes it conducts. At home, the same operation insulates the regime from documentation of its own conduct toward its citizens. Internet connectivity in Iran fell to approximately one percent of normal levels by early March, per NetBlocks. That near blackout creates an information vacuum. Deepfakes and fabricated forensic analysis fill that vacuum while simultaneously rendering authentic protest documentation dismissible as synthetic. The regime does not need to suppress every image from the January crackdown. It only needs to ensure that any image is plausibly deniable.

At the same time, detection has not kept pace. Danny Citrinowicz, a senior researcher at Tel Aviv University’s Institute for National Security Studies, stated this January: “There is no ability today to systematically identify AI-driven influence campaigns.” Meta’s Oversight Board formally ruled its deepfake detection “not robust or comprehensive enough” for the velocity of misinformation during armed conflicts. The EU AI Act’s labeling requirements for AI-generated content do not become enforceable until August 2026. This conflict began months before that.

The U.S. is in the middle of restructuring how it organizes the counter-influence mission. The debate over the appropriate scope of that work, including concerns about whether some previous approaches crossed into domestic speech territory, has been sincere, and it crosses political lines. That debate matters: it turns on delicate issues that will test the boundaries of free speech. But the timing matters as well. A new institutional architecture for the mission is still being designed, and Iran’s campaign is not pausing while the debates continue.

Wherever U.S. policy lands on the question of combatting disinformation and deepfakes, three things will be true about this conflict when it is eventually analyzed in full.

The primary strategic objective of Iran’s information campaign is epistemic disruption, the deliberate degradation of the audience’s capacity to form reliable beliefs, not persuasion toward specific false conclusions. That is a materially different problem from countering traditional propaganda, and it requires different institutional responses.

The Russia-China-Iran amplification model is a template, not an anomaly. Future conflicts involving any permutation of those actors, or their proxies, will employ variants of this architecture. Convergent anti-Western interests are sufficient to drive convergent behavior. Coordination is optional.

Detection tools are now themselves a weapons category. The fabrication of forensic verification tools to discredit authentic evidence represents a qualitative escalation. Provenance infrastructure, not detection algorithms alone, will be required to address it.
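
What provenance infrastructure means, concretely, is cryptographically binding media to its capture, in the spirit of standards like C2PA Content Credentials. The sketch below is a deliberate simplification: it uses a shared-secret HMAC where real systems use public-key signatures and certificate chains, and every name in it is hypothetical:

```python
import hashlib
import hmac

# Hypothetical shared secret; real provenance systems (e.g. C2PA) use
# asymmetric keys so verifiers never hold signing material.
SIGNING_KEY = b"capture-device-secret"

def sign_capture(image_bytes, metadata):
    """Bind a signature to the image content plus its capture metadata."""
    record = hashlib.sha256(image_bytes).hexdigest() + "|" + metadata
    return hmac.new(SIGNING_KEY, record.encode(), hashlib.sha256).hexdigest()

def verify_capture(image_bytes, metadata, signature):
    """Any change to the pixels or the metadata invalidates the signature."""
    record = hashlib.sha256(image_bytes).hexdigest() + "|" + metadata
    expected = hmac.new(SIGNING_KEY, record.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

sig = sign_capture(b"raw pixel data", "2026-03-05T10:00Z")
print(verify_capture(b"raw pixel data", "2026-03-05T10:00Z", sig))   # True
print(verify_capture(b"altered pixels", "2026-03-05T10:00Z", sig))   # False
```

The point of such infrastructure is that authenticity becomes a property you can check rather than argue about: a fabricated “forensic analysis” cannot manufacture a valid signature for a doctored image.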

The gap between adversary capability and institutional response is real and measurable. Deepfake incidents in the first quarter of 2025 alone exceeded the total for all of 2024. Automated bot traffic reached 51 percent of all web activity, surpassing human traffic. The information environment is, in a measurable sense, majority-synthetic. Building the cognitive security architecture to operate in that environment is not a platform moderation problem. It is a national security imperative, and it deserves to be treated as one.

Views expressed here are the author’s alone and do not represent the positions or policies of the U.S. Government or the Central Intelligence Agency.

The Cipher Brief is committed to publishing a range of perspectives on national security issues submitted by deeply experienced national security professionals. Opinions expressed are those of the author and do not represent the views or opinions of The Cipher Brief.

Have a perspective to share based on your experience in the national security field? Send it to Editor@thecipherbrief.com for publication consideration.

Read more expert-driven national security insights, perspective and analysis in The Cipher Brief
