A decade ago, the term “disinformation” was rarely part of the national security conversation; today it is a top concern for virtually every branch of government, and the threat of state-backed disinformation campaigns against the U.S. drew multiple mentions in the 2024 Threat Assessment Report, which was released last week.
The report forecast that Russia and China would both use disinformation to influence the November U.S. elections. “Russia’s influence actors have adapted their efforts to better hide their hand, and may use new technologies, such as generative AI, to improve their capabilities and reach into Western audiences,” the report said, along with an assessment that “China is demonstrating a higher degree of sophistication in its influence activity, including experimenting with generative AI.”
The November elections are only one concern, and state actors are joined by a long list of American groups and individuals using disinformation for their own purposes. Meanwhile, there are profound questions as to what information can be trusted in an age of false news outlets, social media bots, and so-called deep fakes, all of which have been amplified by increasingly sophisticated forms of artificial intelligence.
Recently The Cipher Brief hosted a special briefing on the core questions raised by the toxic mix of those who use disinformation and the increasingly sophisticated tools at their disposal: How best to build institutional trust? Where can people find reliable information? And how can quality information win out over disinformation?
THE CONTEXT
- Director of National Intelligence Avril Haines told the Senate Intelligence Committee on March 11 that the “threat of malign actors exploiting these tools and technology to undercut U.S. interests and democracy is particularly potent as voters go to the polls in more than 60 elections around the globe this year.”
- FBI Director Christopher Wray has also warned that the U.S. faces unprecedented election security threats, noting that AI and other technological advances have lowered the barrier to entry for threat actors to engage in malign influence.
- Russia is increasingly creating fake news websites, which experts say are easier to make with AI.
- China has launched a YouTube network pushing pro-China and anti-U.S. narratives - the first known influence campaign to use AI-generated voices in video essays.
- The FCC banned robocalls that use AI-generated voices, after a robocall used an AI voice clone of President Biden.
- Microsoft is working on a program to counter deep fakes and help protect election integrity.
- Meta will require political advertisers to disclose AI-generated content.
THE BRIEFING
The Cipher Brief tapped deeply experienced experts in the field of information technology to assess the threats - and possible solutions.
This excerpt of the full briefing has been edited for length and clarity.
The Cipher Brief: How do you define “quality information”? It’s going to mean something different to everybody.
McCarthy: It's coming up with a definition that everyone agrees to. Is it timely? Do we understand the sourcing? Is it transparent? Is it original? Just some basic characteristics that everyone, regardless of what your views are on an issue, regardless of your bias, will accept.
Using the food analogy, we all know what genetically modified foods are, what trans fats are and what sugar is. So regardless of your views or your thoughts on something, what do you think qualifies as information that meets certain quality standards, is relevant and timely and transparent, and we understand the sourcing? I mean, there's a whole lot of characteristics, but those are some big ones that I think we all would agree would define “quality information."
Lee: I would pick a few categories. Provenance - where does it come from? A lot of the threats fall through the cracks, because a lot of actors can pretend to be somebody they are not - Russians pretending to be Americans, trying to push content and sow mistrust in our institutions and so forth.
Another is consistency of the information - meaning, somebody may be talking about sports, but that person also may inject certain political content into the stream of content. So whether it's consistent or not. Number three, transparency. And that is, Hey, when this person says something, does it really come from this source? When somebody says it's coming from this scientific source, is it really coming from the scientific source? It's about whether the context of the content is transparent to the end user. If somebody wants to consume conservative content, that's fine, but I want that conservative content to be transparent in terms of, Hey, it came from our country. There was no external amplification for this specific content. And that the content itself has legitimate sources and it's not pretending to be something else.
Even though we're trying to detect and alert the public about misinformation and disinformation, ironically the end result is that the perceived veracity of mis- and disinformation ends up going up. We are trying this whack-a-mole strategy, and it’s just not working. Unless we have certain standards, certain measurements to drive traffic to quality information, especially in the age of AI, I don't think we can find reasonable solutions where information supports trust in institutions, trust in governments, trust in businesses, and so forth.
The Cipher Brief: It strikes me that there's a level of demand for quality information, but there's also a level of demand and maybe even a push for information that's not quality, and that sort of goes against those standards. There's a whole range of folks that actually benefit from information that isn't quality.
McCarthy: This is the biggest question we get, about the demand side, especially from media organizations, where we say we believe there's a demand for quality information and we'll typically get into this debate over whether there really is. Part of what we're doing with the Trust in Media Cooperative is to prove the thesis that there is a demand, that if people are given the option of looking at something that's based on standards that everyone agrees to, they will actually then go look at that information in a very measured way.
But there's a lot of folks right now that say there is no demand for quality information. I just don't buy it. I think there absolutely is a demand, and that's just based on my little universe of people who tend to pay subscriptions to multiple news sources so they can get information that meets their standards. So imagine a world where you don't have to subscribe to five newspapers or four journals or ten blogs - but you actually have a place you can go to if you have a question on something, be it elections or science or economic equality, that you actually can go to sources that meet those standards, that meet those expectations.
The Cipher Brief: Let’s talk about election integrity. What are some of the practical things that the Trust in Media Cooperative is doing?
McCarthy: We decided we would start with a policy area and spend some time identifying three or four baseline standards or characteristics of quality information. And the policy area we picked was election integrity. It's not only an election year here, it's an election year pretty much around the world. So it's a very big topic.
There's lots of interest. Everybody cares about election integrity. They may care about it for different reasons, but everybody cares. And so it just seemed like the right area to start. It's not about the politicization, and it's not that we necessarily care about who you vote for or what you think. It's not about moderating content or moderating opinion or bias. It's merely about, Can we identify some baseline characteristics of what quality information is in the election integrity space? Can we gain some broad understanding and agreement on this? And when I say broad, it's not just the AI companies that are developing algorithms and platforms that are moving data - it's from academia, from educational organizations, so that everybody understands what the standards are for quality information and agrees to it. I know what GMOs are in food, so can we gain that level of awareness and then test it?
There's lots of standards that are out there right now, by the way. Journalism has its standards, academia has its standards, but let's see if we can agree on a few, and accept that these are the ones we're going to start with on quality information related to election integrity.
And then in a parallel path, we're working on a tool to help us see if people will actually demand quality information for election integrity. We're building a dashboard that we're modeling off the dashboard that Johns Hopkins developed during the pandemic - when I was at the State Department, I was an avid consumer of that platform as a means to identify where infectious rates were spiking - and so let's see if we can do the same thing for this topic, election integrity, using this dashboard. We plan to have a proof of concept out and deployed this summer with the goal of - before the next election - being able to actually report some of the measures.
We anticipate that the biggest consumers of this tool will be state and local election officials. Right now, the states are the ones who oversee elections, and it's done - and overseen - differently in every state. There's really not consistency or information-sharing across states. The Department of Homeland Security does everything it can to help the states run elections fairly, but here would be a tool that everyone could use just to get a better sense of what's really happening at polling sites.
Lee: One of the most commonly used disinformation techniques is essentially content laundering. That is, people may use images, audio, or videos from different contexts and then use them to disrupt certain public events or public processes and so on. We are already seeing this factor quite a bit in the Israel-Gaza conflict. People may use images from Afghanistan or Iraq and then, using different subtitles or adding different hashtags, essentially push that content out to serve their political purposes.
This will be really hard to push back on, let's say, 24 hours before election day, 48 hours before election day. Imagine people using publicly available AI models to generate images and audio and just pumping them out to chat rooms, social media platforms and places like that, just 24 hours or 12 hours before election day. I've seen this happening in other countries, and current techniques, current technology or current content moderation policies do not work.
The idea here is, Can we essentially build a baseline where people just come in and check what's the most important, reliable, transparent, and accurate information about our political process? It's not very complicated, but right now we don't have that baseline information portal, so to speak.
And this is such a big threat factor in my mind. The way we can fight this is not trying to detect every attack, every threat, but instead finding a way to push out the most reliable, the most transparent, the most accurate information about our election and thus satisfying the demand for quality information.
As for the demand for quality information, whenever I get the question, I get a bit irritated, because look at how much money The New York Times is printing right now. The demand is there. Globally speaking, the subscription model market, last time I checked, was over $300 billion. I spend over $200 a month because I'm trying to find quality information as efficiently as I can. The problem is that these options are not available to everyone in our country or around the world. But I believe the demand is there.
But unless we treat information as a critical infrastructure, I don't think we can flatten those peaks and valleys in the information environment.
The Cipher Brief: That's an intriguing concept. You often hear about income inequality, but you're talking about information inequality.
Lee: Absolutely. Going back to Ellen's analogy, think about organic food. Think about vegetables and fresh fruit and produce. Even in the richest country in the world, our country, we have so many food deserts. The problem is that we have so many information deserts in our own country, a lot of vulnerabilities and gaps that malign actors can easily take advantage of. The only way we can fill those gaps is treating information as critical infrastructure. And that way, quality information is more broadly available to more people.
McCarthy: It really gets to that whole local piece. A Gallup survey highlighted that Americans trust no formal institution of any kind - except local ones. And at a time when Americans do trust local institutions, our local news networks are dying. We have so many news deserts in our country right now, and they're not just small towns - sometimes big cities.
And so imagine the ability to potentially rebuild that, maybe not the way it was before, but imagine being able to create a network of trusted information sources at the local level. And the reasons to do that are manifold. The data shows that in areas where there are news media deserts, people tend to be much more inclined to be recruited to extremist views. They tend to be less civically involved.
Perhaps the most troubling statistic is the overall economic well-being of the town. A town's ability to secure bonds or loans to support its people is dramatically lower in areas where there isn’t some sort of local media capability. And so if somehow you can create an understanding about quality information and make it accessible - and it's not going to be everybody who wants it, but if you could get to enough people - I think that's where you're going to start seeing some change.
The Cipher Brief: Can you think of any examples where misinformation was used to protect or defend, rather than to undermine or destroy? Is there ever a utility in mis- or disinformation?
McCarthy: A white lie?
Lee: That's a really good question. We can do certain things perhaps to protect reporters, activists, or even military personnel in certain environments. Yes, I think there's a place or time for that, but we have to be very clear that whenever we do anything like that, it’s strictly governed by oversight, certain regulations and laws, to ensure that function is not misused.
The Cipher Brief: An example came to mind - World War II, when Britain created essentially a fake invasion force to deceive the Nazis about when and where the D-Day invasion was going to take place. So was this mis- and disinformation? Of course. Was there a utility in terms of American, British, Canadian, French, Polish lives saved? Absolutely. Is that something that we want to see applied in the public sphere around elections? Of course not.
Lee: Back to the issue of provenance of information, let me give you a quick example of how this can be done. I think a lot of us have used reverse-image search engines to authenticate whether an image is used in the right context or not. Of course we can do that manually, but it turns out that AI models can track and establish the provenance much faster than ever before. So let's say somebody's using a picture of a riot and then puts some headlines or subtitles to that image, to suggest there is massive violence taking place at a specific ballot station, perhaps to suppress voter turnout.
Now we can show in nanoseconds that that is in fact inauthentic, misplaced and out of context. So we can actually show this degree of provenance with lots of information at scale and speed.
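The image-matching Lee describes often rests on perceptual hashing: an image is reduced to a short fingerprint, and a small Hamming distance between fingerprints flags a recycled image even after re-encoding or minor edits. The sketch below is illustrative only - it is not The Cipher Brief's or any vendor's actual system - and uses toy 8x8 grayscale grids in place of real decoded image files (production tools would decode and downscale with an imaging library).

```python
def average_hash(pixels):
    """64-bit perceptual hash: one bit per pixel, set where the pixel
    is brighter than the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Toy data: an "original" image, a slightly brightened re-encoding of it
# (as happens when content is re-uploaded), and an unrelated image.
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
recycled = [[min(255, v + 3) for v in row] for row in original]
unrelated = [[255 if (r + c) % 2 else 0 for c in range(8)] for r in range(8)]

d_same = hamming(average_hash(original), average_hash(recycled))
d_diff = hamming(average_hash(original), average_hash(unrelated))
# The recycled copy hashes (near-)identically; the unrelated image does not.
print(d_same, d_diff)
```

Because the hash survives small perturbations, a platform can compare an incoming image's fingerprint against an index of previously seen images and surface its earlier context in milliseconds - the automated version of the manual reverse-image search Lee mentions.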
The Cipher Brief: Is there a way that media outlets or information sources can be independently validated?
McCarthy: You already have that in different organizations that have created journalistic standards and have members who abide by those journalistic standards. So that's already there. The challenge is can you get the consumer to agree that those are valid and accepted standards that they want and that they will trust?
AI companies right now are developing standards for trusted AI. But the question then is, Will every other sector abide by those standards? Do they agree to those standards and will they change their consumption habits based on those standards? Call me a little jaded, but I have a hard time believing that the AI companies or any sector will govern itself.
When you look at the history of our critical infrastructures, almost all of them started as self-organized private sector capabilities, and then government has come in and partnered. And I think you can apply the same to this information problem. At some point government will have to come in. I just don't think government right now is in a position where they can lead on this.
The Cipher Brief: And with almost anything that it does, it runs into accusations of creating a “ministry of truth,” right?
McCarthy: Exactly. And we're not the ministry of truth.
The Cipher Brief: How do we protect information as critical infrastructure while also protecting the First Amendment? Can these run into conflict?
McCarthy: From my optic, this is not a First Amendment issue. It's about the data, it's not about the content, which is why I think content moderation is not working. I don't want anybody to tell me what I can and can't look at.
People pretty much know what they want and know what they don't want, but the big issue is trust in our system. So if you start not believing even quality information because you just believe it's all baloney, then you've got a real problem. And so from my optic, that’s the biggest issue at this point. Although as AI gets better and scales broader, we may see some different outcomes given how people view elections right now.
Lee: I think information quality will set free speech free, pun intended. That information quality will actually enable and empower free speech in my mind, because now we can get rid of all the opacity involved with free speech. So if you understand where your free speech is coming from, if we know that it's not a foreign source, we know that sometimes it's a fact, sometimes it's an opinion, sometimes it's just pure fiction - once we have that transparency in our information environment, I think we'll enjoy more free speech without the baggage that has essentially undermined the utility of free speech in our country. So my position is, without information quality, there is no true free speech. It's as simple as that.
McCarthy: And again, using the food analogy, you can still eat your McDonald's hamburger even though you know that maybe you have another food choice. So it's the same thing with information.
The Cipher Brief: Last question: What are your views on Wikipedia as a potential model for source consensus?
McCarthy: I do like Wikipedia. I like the concept of crowdsourcing of information and it is a valuable tool. The one aspect of Wikipedia, though, is that if you talk about the standards, some of it involves timeliness. And Wikipedia sometimes is not the most timely source. It's a great source. And because it's crowdsourced, eventually you might get to information that meets the standards that we're talking about. But it's one potential source of quality information.
Lee: I think it's a great model. Having said that, in the age of AI, data poisoning is becoming quite easy. That means if I know where information is coming from for a specific platform, the specific source of information, I can generate tens of thousands of blogs, fake news sites, pseudo-scientific forums, Q&A sites - literally, I can do that today. And then this information ecosystem may look like it's crowdsourced - but I can manipulate the system really easily in the age of AI.
I think this is the most critical conversation of our generation. How do we restore trust in institutions in the age of AI and in the age of strategic competition? And this is not happening in a strategic vacuum: if we fail to restore trust in institutions at home and abroad, China and its proxies - who are also very active in shaping global standards, global norms, and global data measurements on what constitutes information - have their own standards, their own regimes, ready to fill the void. So yes, we're focusing on our own country. Yes, we're focusing on election integrity. Having said that, information is a critical infrastructure, and to me it is the most critical fight of our generation at home, but also globally speaking.
Read more expert-driven national security insights, perspective and analysis in The Cipher Brief