DARPA’s Focus on ‘Manipulated Media’ Lays Out Technology for Combating Disinformation and More

By Walter Pincus

Pulitzer Prize-winning journalist Walter Pincus is a contributing senior national security columnist for The Cipher Brief. He spent forty years at The Washington Post, writing on topics that ranged from nuclear weapons to politics. He is the author of Blown to Hell: America's Deadly Betrayal of the Marshall Islanders. Pincus won an Emmy in 1981 and was the recipient of the Arthur Ross Award from the American Academy of Diplomacy in 2010. He was also a member of a team that won a Pulitzer Prize in 2002 and the George Polk Award in 1978.

OPINION — “A couple things I haven’t talked about that might be worth exploring is, you know, the manipulated media.”

That was Dr. Kathleen Fisher, Director of the Information Innovation Office at the Defense Advanced Research Projects Agency [DARPA], speaking about the agency’s ongoing investments in Artificial Intelligence [AI] research related to national security during her December 21 appearance on The Gradient Podcast with host Daniel Bashir.

Fisher’s office oversees most of DARPA’s AI-related computer science research and development efforts, including, currently, the AI Forward initiative, which seeks new AI research that will provide secure systems for national security missions.

Since 2004, DARPA has held Grand Challenge competitions with millions of dollars in congressionally authorized cash prizes for high-payoff research. On the podcast, Fisher said the purpose of a Grand Challenge is to galvanize “a whole research community to move the technology in a huge leap forward — that’s kind of how DARPA measures success, like creating a massive step forward.”

That first DARPA Grand Challenge, with a $1 million first prize, pushed major national research organizations to advance development of autonomous ground vehicles, which the Defense Department (DoD) had been working on for years. To win, a vehicle had to complete a roughly 150-mile, largely off-road course within a limited time. None of the robot vehicles in 2004 finished the route.

The 2005 Challenge, with a $2 million prize, saw five vehicles complete the 130-mile course. The 2007 Challenge ran a 60-mile urban route to be completed in six hours, with the unmanned cars having to obey traffic lights and stop signs.

Industry observers agreed that the DARPA Challenges not only helped DoD lead the world in automated vehicles, but also helped launch the domestic self-driving car business.


The newest DARPA Challenge, announced last August, is a two-year competition called the AI Cyber Challenge (AIxCC), aimed at creating a new generation of cybersecurity tools.

Fisher described it on the podcast as “using whatever AI kind of technology — whatever technology competitors want — to be able to automatically find and fix vulnerabilities in software.” Fisher said motivation for this Challenge came last February, when Director of National Intelligence Avril Haines told Congress that in the event China were to invade Taiwan, Beijing would very likely carry out destructive cyberattacks against U.S. civilian infrastructure in advance.

Fisher said the U.S. has “a massive digital attack surface. A lot of that is open source software. So, how could we make that attack surface less vulnerable really quickly? Well, the hypothesis is we could use AI to find and fix a lot of the low-hanging fruit really quickly so that’s the goal” of the new Challenge.  And companies like OpenAI, Anthropic, Google and Microsoft are partnering with DARPA to make their resources available to competitors. Fisher added, “We’re giving them access to state-of-the-art tools on a problem that’s of critical importance to national security to see what they can do.”
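
Stripped to its essentials, that hypothesis suggests a loop: scan, ask a model for a patch, re-scan to confirm. The sketch below is only an illustration of the idea as Fisher frames it; the scanner and model client are hypothetical placeholders, not AIxCC tooling.

```python
# A minimal sketch of the "find and fix" hypothesis behind AIxCC; the scanner
# and model client below are hypothetical placeholders, not DARPA tooling.
from pathlib import Path

def scan_for_flaws(source: str) -> list[str]:
    """Hypothetical analyzer: returns human-readable vulnerability reports.
    A real competitor would combine fuzzing, static analysis, and LLM review."""
    raise NotImplementedError

def ask_model(prompt: str) -> str:
    """Hypothetical call to a large language model, e.g. one of the partner
    models DARPA says competitors will have access to."""
    raise NotImplementedError

def find_and_fix(path: Path) -> str:
    """Scan a source file, ask the model to patch each flaw, and keep a patch
    only if re-scanning confirms the reported flaw is gone."""
    source = path.read_text()
    for flaw in scan_for_flaws(source):
        patched = ask_model(
            "Fix this vulnerability without changing the program's behavior.\n"
            f"Report: {flaw}\n\nSource:\n{source}"
        )
        if flaw not in scan_for_flaws(patched):
            source = patched  # accept the patch; move on to the next flaw
    return source
```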

As an example of what DARPA has already done, Fisher talked about the Semantic Forensics [SemaFor] AI program, which has been ongoing for some eight years now.

“One of the things the SemaFor program has been doing, for example, is building up defensive models for people of interest,” Fisher said. “So when…the fake video of [Ukraine President Volodymyr] Zelensky [came out] saying that Ukraine should surrender, that’s the kind of thing that the SemaFor program has a defensive model for famous people.”

Fisher described the insight behind SemaFor: “anybody when they talk, they have idiosyncratic facial movements. Like the side of my lip might move up when I say the word hello, for example, and the researchers on the SemaFor program have developed models — [if] you have a certain amount of video of that person talking, they can then build models of that person [so] that when you then have a purported video…you can apply the model to the video and see whether it’s that person or not.”

“The SemaFor program has been developing that kind of technology for video, for audio, for still pictures, etc.,” Fisher said, “[so] that a lay person can look at the outputs and see for themselves very quickly if something’s been manipulated or not, to be able to build trust in video.”
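
As a rough illustration of that idea (not SemaFor’s actual method, which is far more sophisticated), a person-specific defensive model can be sketched as a one-class detector trained on movement features from known-authentic footage. The feature extractor here is a hypothetical placeholder:

```python
# Sketch of a person-specific "defensive model": learn one person's
# idiosyncratic facial movements from authentic footage, then score a
# purported video against that model. Illustrative only; the feature
# extractor is hypothetical and SemaFor's real models are far richer.
import numpy as np
from sklearn.ensemble import IsolationForest

def facial_movement_features(video_path: str) -> np.ndarray:
    """Hypothetical: per-frame vectors of facial landmark motion
    (e.g., lip-corner displacement while the person speaks)."""
    raise NotImplementedError

def build_defensive_model(authentic_videos: list[str]) -> IsolationForest:
    # Train only on features from known-authentic footage of one person.
    X = np.vstack([facial_movement_features(v) for v in authentic_videos])
    return IsolationForest(random_state=0).fit(X)

def looks_authentic(model: IsolationForest, purported_video: str) -> bool:
    # Mostly-inlier frames suggest the mannerisms match the person the
    # model was built for; mostly-outlier frames suggest manipulation.
    scores = model.predict(facial_movement_features(purported_video))
    return (scores == 1).mean() > 0.5
```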

Since 2018, DARPA has invested more than $2 billion in more than 30 programs aimed at exploring and advancing a full range of AI techniques, and roughly 70% of DARPA’s current programs draw on AI and machine learning technology.

One of them is Fisher’s AI Forward Initiative to advance AI for national security purposes.

It began with two workshops, one in person, the other via Zoom. “These were participatory workshops,” Fisher said, “where people do a ton of brainstorming and they do work…in the workshop that is then the product of the workshop.”

Fisher added, “We selected the people that we thought would contribute the most to the workshop, either in terms of their background, or in terms of the idea that they had, or in terms of what they might bring to the workshop. We wanted a broad diversity of perspectives, a broad background, because we thought that just having as many different kinds of voices represented in the room would get us the best…set of perspectives.”

Fisher said she told the workshops that AI was “opening up all sorts of new applications with all sorts of ramifications for national security and we should be exploring those applications. Everything from being able to…write routine reports much more quickly to possibly being able to do multi-level security and…write intelligence reports more quickly.”

Fisher also pointed out that with AI “there’s all sorts of new threat models that have to be dealt with too — everything from over-trusting, bias issues, fairness issues, the problems with poisoning and adversarial AI, to what I call agents run amok.”

In the latter category, Fisher said: “Very soon we will have AI-enabled agents that are fluent, that are persuasive, that are connected to the internet, which means, to a first approximation, connected to everything; that can write code and that can cause that code to be executed, which means they can take actions both in the digital world, but also in the physical world. How do we make sure that those agents don’t cause things to happen that are really bad?”


Videos and white papers came out of the first two workshops, and then there was what Fisher called “a bridging workshop where the people from the first workshop and the second workshop got to talk to each other…that also produced a similar number of videos and a similar number of white papers.”

Those papers were given to all the DARPA Program Managers, including those in Fisher’s office, to select two or three for funding as small AI Exploration (AIE) programs.

Two have been selected so far.

One, called FoundSci, is about creating a tool to help scientists. Fisher said the idea is: “could an AI agent read the literature and figure out for itself what it needed to learn and propose scientific hypotheses and be able to accelerate the pace of science.” The publicly released DARPA program notice describes it this way: “DARPA seeks to invest in focused explorations of what the AI science and technologies of the future could be.”
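
Fisher’s description reduces to a read-decide-propose loop. The sketch below only illustrates that framing; both helper calls are hypothetical placeholders, not anything FoundSci performers have built:

```python
# A sketch of the FoundSci idea as Fisher frames it: an agent that reads
# the literature, decides what it still needs to learn, and proposes
# testable hypotheses. Helper calls are hypothetical placeholders.

def ask_model(prompt: str) -> str:
    """Hypothetical call to a large language model."""
    raise NotImplementedError

def search_literature(query: str) -> list[str]:
    """Hypothetical paper search returning abstracts or summaries."""
    raise NotImplementedError

def propose_hypotheses(topic: str, rounds: int = 3) -> str:
    notes = ""
    for _ in range(rounds):
        # The agent decides for itself what it needs to learn next.
        question = ask_model(f"Given these notes on {topic}, what should "
                             f"you read about next?\n{notes}")
        papers = search_literature(question)
        notes += ask_model("Summarize what these papers establish:\n"
                           + "\n".join(papers))
    return ask_model(f"Based on these notes, propose testable scientific "
                     f"hypotheses about {topic}:\n{notes}")
```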

The second proposed AIE, Fisher said, “is about defending things and this is called FACT, and that stands for Friction for Accountability in Conversational Transactions, to help us develop an appropriate level of trust in what the chatbot is thinking.” In computing terms, friction means deliberately slowing an interaction to mitigate unintended consequences in high-risk scenarios.

DARPA’s AIE proposal, released November 15, says: “The FACT effort seeks to explore, develop, and evaluate human-AI conversation-shaping algorithms that: 1) capture mutual assumptions, views, and intentions based on dialogue history, 2) auto-assess the consequences of potential actions and the level of accountability for responses, and 3) reveal implicit costs and assumptions to the user, prompting critical analysis, and proposing course changes as appropriate.”

Put more simply, as the November proposal later says, FACT seeks to provide a “‘learning and verification’ opportunity where one member can point to defects in others’ positions (i.e., playing the devil’s advocate) and present alternatives in terms of relaxed or changed assumptions or emphasis.” Essentially, it’s one computer program trained to check another computer program.
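
In code terms, that devil’s-advocate check might look something like the sketch below. It is only an illustration of the concept as the proposal describes it; the model client is a hypothetical placeholder, not DARPA’s algorithms:

```python
# A sketch of FACT-style "friction": one model proposes, a second model plays
# devil's advocate, and a human confirms before anything executes.

def ask_model(prompt: str) -> str:
    """Hypothetical call to a large language model."""
    raise NotImplementedError

def with_friction(user_request: str) -> str | None:
    proposal = ask_model(f"Propose an action for this request: {user_request}")
    # The checking program: surface assumptions, implicit costs, and
    # consequences (items 1-3 in the DARPA proposal) rather than silently
    # proceeding with the first model's proposal.
    critique = ask_model(
        "Play devil's advocate. List the assumptions, implicit costs, and "
        f"possible bad consequences of this action:\n{proposal}"
    )
    print(f"Proposed action:\n{proposal}\n\nDevil's advocate:\n{critique}")
    # The friction itself: a deliberate pause for human judgment.
    return proposal if input("Proceed? [y/N] ").strip().lower() == "y" else None
```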

When dealing with computers, “people tend to over trust,” Fisher said, and what FACT is searching for is a program (the algorithms referred to above) that “will actually investigate a little bit more and think about this a little bit more carefully before we actually act on things that we’re being told.”

Summing up at one point, Fisher said, “DARPA doesn’t do policy, DARPA does technology and adding sort of tools…arrows to the quiver of the policy makers, giving them options that they don’t already have. If we don’t explore whether the technology is possible and have it as an option, then they can’t introduce it.”

The Cipher Brief is committed to publishing a range of perspectives on national security issues submitted by deeply experienced national security professionals. 

Opinions expressed are those of the author and do not represent the views or opinions of The Cipher Brief.
