Russia has industrialized cognitive warfare, producing synthetic media at scale through a modular system that targets soldiers, civilians, and Western publics with distinct engineered effects. A Chinese frontier AI capable of executing the same doctrine is now freely available worldwide, unrestricted and priced within reach of any actor. The U.S. federal institutions built to track and counter these operations are in transition, with no successor architecture yet in place. A proven adversary doctrine, democratized capability, and an unresolved gap in domestic defenses have arrived together. And a major election cycle is coming this year.
The first thing to understand about Russia's cognitive warfare system, documented by researchers at Sensity AI in April 2026, is that it isn't a campaign. Campaigns have beginnings and ends, specific targets, and identifiable decision-makers who can choose to stop. What the research showed was a production system: more than a thousand AI-generated synthetic videos, organized into three distinct assembly lines, each engineered to produce predictable cognitive effects in a specific target population. Ukrainian soldiers at the front received content calibrated around despair, leadership failure, and the futility of continued resistance. Civilians received content designed to induce sustained emotional fatigue, erode institutional trust, and make Russian terms seem, if not acceptable, at least inevitable. Western audiences received a separate product line focused on questioning the value of continued alliance support and amplifying doubts about evidence of Russian conduct.
The strategic objective of this architecture, as the research demonstrates, is not persuasion. Persuasion requires convincing people of a specific proposition. The goal here is something more structurally corrosive: information chaos. When synthetic content reaches critical mass in an information environment, authentic evidence becomes contestable. Documented war crimes can be dismissed as fabrications. Verified reporting becomes just another narrative competing for attention. The epistemic cost of reasoning accurately under those conditions falls entirely on the target population, not the attacker. The adversary pays almost nothing to create that environment. The people living in it pay continuously.
Russian military doctrine describes this approach as cognitive warfare, but researchers have more recently given the operational method a new name: the Narrative Kill Chain. Iran, separately, deployed more than 110 synthetic videos targeting the same Western audience during the spring 2026 escalation cycle. A doctrine developed in one theater is spreading. The operating manual is published, and we should expect other actors to study it.
The three-audience segmentation is not scattershot propaganda. It is deliberate targeting, calibrated to different decision nodes: soldier morale, civilian will to resist, Western political will to sustain support. Content is seeded on TikTok and Telegram, where it builds initial engagement, and then amplified algorithmically across X, Facebook, and YouTube. The platforms' own mechanisms do part of the adversary's work at no cost to the adversary.
The deeper danger is what researchers have called the liar's dividend. Once a critical mass of synthetic media circulates in an information environment, even authentic evidence becomes contestable. Adversaries do not need to win arguments. They need to make the process of resolving truth from falsehood expensive enough that most people eventually stop trying. That objective, per Sensity's analysis, is largely being achieved.
The question worth asking is what it takes, both technically and financially, to execute this doctrine at scale. Until recently, the answer pointed toward state-level actors and resources. That has now changed.
On April 24, 2026, DeepSeek released V4-Pro and V4-Flash as open weights under an MIT license, meaning anyone can download the full model, run it independently, and use it for any purpose without restrictions. V4-Pro is powerful, nearly matching U.S. frontier models, at a fraction of the cost and as open source. It is available on a hard drive, permanently, to anyone who downloads it. Independent assessment by the Tennessee AI Advisory Council found that prior DeepSeek models were susceptible to jailbreaking at substantially higher rates than comparable U.S. models. There is no meaningful indication that V4 represents a departure from that pattern.
The combination is the point. The doctrine is documented and replicable. The tool is nearly free and unrestricted. Any actor with a grievance, a distribution channel, and an internet connection can now pair the Narrative Kill Chain model with frontier-class AI capability. And the empirical research on what that combination can accomplish is increasingly precise: controlled experiments published in Nature and Science found that conversational AI can shift political attitudes by about 10 points in some settings, and in one U.S. test the effect was roughly four times larger than traditional campaign ads. This is not a projected threat. It is a measured effect.
Much of my career was spent studying adversarial capabilities, plans, and intentions. What that experience teaches, more than any specific technique, is to look at convergences. Capability without doctrine is potential. Capability plus doctrine, freely available, with limited counterparts on the defensive side, is a structural condition. That is where we are at the moment.
The United States previously built institutional architecture to address similar threats, but those functions, which resided across multiple government agencies and departments, are now in transition. They have been restructured, downsized, closed, or dissolved, and a successor architecture is not yet in place.
This is not a simple story, and it should not be seen as one. There are legitimate constitutional questions about how the federal government conducts work in this space. The line between detecting foreign synthetic operations and influencing domestic information environments requires rigorous institutional discipline to protect. Those concerns deserve serious consideration and careful legislative design. What the current moment asks is that those necessary governance debates happen faster. The threat is not waiting for the architecture to be resolved.
What any successor structure needs to accomplish is not difficult to specify, even if it is complex to execute. It needs to set standards for the detection and attribution of foreign synthetic content at scale, identifying what is manufactured, amplified, and deliberately targeted at American society. That is an intelligence and technical function, not a content moderation or speech function. The distinction is essential, and it is the one that any new design must protect. These new institutions, when and if created, should never be in the business of adjudicating truth. Their mission should be to ensure that platforms identify content that is synthetically generated, amplified, and aimed at the public. That simply provides the audience with objective data upon which to evaluate what they are reading or viewing, and it can be performed without crossing into censorship. That mission needs a home.
Thankfully, the private sector is not waiting. Companies with deep forensic capability in synthetic media detection are developing attribution tools that operate at scale. The technical capacity to identify AI-generated content, trace distribution networks, and flag coordinated inauthentic behavior is advancing rapidly in the commercial sector. A successor architecture built as a genuine public-private partnership, pairing government authority and classified context with private sector technical capability, may be better suited to the current environment than a purely governmental structure. What government brings that industry cannot replicate is access to intelligence collection on adversarial plans, allied coordination, and the authority to act on attribution findings when they point to criminal conduct. What industry brings is speed, scale, and detection capability that is already operating. The two are complementary. What is missing is the design and the mandate to connect them.
Three developments have arrived simultaneously. The doctrine for industrial scale cognitive warfare has been documented, refined, and is spreading across adversary ecosystems. The tools to execute that doctrine have been democratized to the point where frontier-class AI capability is nearly free, unrestricted, and available worldwide. And the federal institutional architecture charged with tracking and countering foreign cognitive operations against the United States is in transition, without a successor in place.
The effects of this convergence are not limited to elections, though elections are the most visible surface. What is at stake is the shared epistemic ground on which any form of collective decision-making depends. When authentic evidence becomes routinely contestable, when any documented fact can be attributed to a fabrication machine that everyone knows exists, the cost of reasoning accurately rises for every person in the information environment. That cost does not fall on governments or institutions. It falls on individuals, in every judgment they make about what to believe and whom to trust.
The perimeter has always existed. What changes is the technology of assault and the capacity of defense.
The country has organized around threats of this scale before. New structures are needed, designed for the technological moment we are now in, with clear mandates focused on detection and attribution of foreign synthetic operations and civil liberties protections built in from the start. Not structures that tell Americans what to believe. Structures that identify what is being manufactured and aimed at them.
That is achievable. And today, it is necessary.
Views expressed here are the author’s alone and do not represent the positions or policies of the U.S. Government or the Central Intelligence Agency.