BOTTOM LINE UP FRONT — When word first came last week that China’s AI startup DeepSeek had launched an artificial intelligence (AI) assistant that could compete with top-tier U.S.-made models – and that it had done so for a small fraction of the cost – the news sent shudders through the American tech sector and caused a selloff in the stocks of some of the biggest U.S. tech companies. How had China caught up so quickly, particularly given recent U.S. export controls aimed at slowing China’s progress in the AI space?
For all the business and market interest, the national security implications of the DeepSeek story garnered less initial attention. Now those concerns have come to the fore, with the White House, the U.S. military and experts outside government questioning whether DeepSeek could pose risks to the U.S.
The Cipher Brief spoke with three experts on the intersection of technology and national security about the potential national security risks in the DeepSeek breakthrough, and what the U.S. can and should do to address those risks.
THE CONTEXT
- Chinese AI startup DeepSeek launched a free AI assistant, DeepSeek-R1. The company says the model can compete with top American competitors at a small fraction of the development and usage costs. Days later, Chinese e-commerce giant Alibaba released an updated version of its Qwen 2.5 AI model.
- U.S. AI leaders saw their stocks tumble on the news of cheaper AI models from China. Tech giant Nvidia’s stock fell nearly 17%.
- White House press secretary Karoline Leavitt said the National Security Council was reviewing the national security implications of DeepSeek, calling it a “wake-up call to the American AI industry.”
- The U.S. Navy told its members that it is “imperative” that they not download or use AI technology from DeepSeek “in any capacity,” warning of the “potential security and ethical concerns associated with the model’s origin and usage.”
- Microsoft and OpenAI are looking at whether DeepSeek used OpenAI models to train and develop its own models. White House AI and crypto czar David Sacks said DeepSeek may have used a form of intellectual property theft known in the AI sector as “distillation,” which can be used to make “copycat models” (a simplified sketch of the technique follows this list).
- At the same time, Microsoft has made DeepSeek’s R1 AI model available on its Azure cloud computing platform and its GitHub developer platform.
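For readers unfamiliar with the term: distillation, in its standard form, trains a small “student” model to imitate a larger “teacher” model’s outputs. Below is a minimal sketch of the classic distillation loss (Hinton et al., 2015) in PyTorch. It is a generic illustration of the technique, not DeepSeek’s or OpenAI’s actual code, and all the names in it are our own.

```python
# A generic sketch of knowledge distillation; not any company's actual code.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Train a student to match a teacher's softened output distribution."""
    # A temperature above 1 softens both distributions, so the teacher's
    # relative preferences among less-likely answers carry training signal.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between student and teacher; the T^2 factor keeps
    # gradient magnitudes comparable across temperatures.
    return F.kl_div(log_soft_student, soft_teacher,
                    reduction="batchmean") * temperature ** 2
```

In the API-based variant at issue in the reporting, the “teacher” signal would simply be another provider’s model responses collected over the network, which is why providers’ terms of service typically prohibit using outputs to train competing models.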
The Cipher Brief spoke with Rick Ledgett, former Deputy Director of the National Security Agency (NSA); Jennifer Ewbank, former Deputy Director of the CIA for Digital Innovation; and Chip Usher, Senior Director for Intelligence at the Special Competitive Studies Project, about the national security implications of the latest Chinese AI breakthroughs.
Their comments have been lightly edited for length and clarity.
THE EXPERTS
The national security implications of DeepSeek’s success
Ewbank: This is a Chinese company, and Chinese companies operate under very specific national security laws in the People's Republic of China. And so what comes to mind first are issues of data privacy and potential espionage. I'm not asserting that DeepSeek is sharing all the data with the government. But if the [Chinese] government asks, they are obligated, they are compelled to do so. There is no choice. And so people need to keep that in mind as they're looking at whether they're going to use this capability. I certainly would not recommend it for most people in the United States. And we've already seen that the U.S. Navy came out very quickly and said, it is banned. It is not allowed for any members of the service to use it.
Next I think about influence operations, and the model’s potential to scale bad actors’ ability to push influence campaigns and messaging to foreign audiences. And if we step back and think about this new model, trained on data that is acceptable to the Chinese Communist Party, that training data is going to be of limited scope. So does that introduce some kind of ideological bias over time? If this is a model that’s used widely in countries around the world, what does that mean for access to information, access to other ways of thinking? Could that over time actually lead to an ideological bias that replicates itself?
Then there's a lot of talk about security vulnerabilities in R1, the new [DeepSeek] model. I haven't seen the detailed analysis of that yet, but there is speculation that it can be pretty easily jailbroken, the idea being that bad actors could use this open-source model for really scary things. Ransomware, that's one. But there are a lot of other things that an AI model could be used for that are pretty awful – learning how to make improvised explosive devices, or biological weapons. Most AI models today are built with safety in mind, and those things are very hard to do. But if you have an open-source model that can be easily jailbroken, then what happens? There's a dual-use risk with any kind of AI, and with any emerging technology generally. It could be used for good, or it could be used for evil.
And then the last concern is that in the broader strategic competition between the U.S. and China, AI is right at the heart of that strategic competition. We tend to think about things like weapons systems and the size of our military and how many ships we have – and all those things are really important. But digital tech and specifically AI is right at the heart of this competition between two competing views of the world. And so what are the implications of a surprise development, this “Sputnik moment” for China? What are the implications of that for the strategic competition between the U.S. and the PRC?
Ledgett: First off, China is in charge of this. If anyone believes that the Chinese government doesn't have their hands in this, they haven't been paying attention for the last decade. They absolutely do. They absolutely have access to the information that people are putting into DeepSeek, and to the results that come back.
And so if you think about it, it's kind of an elegant way to do intellectual property theft. Basically, people are giving you their information to run on your machines, and getting the answer back. You get the raw information, and you get the answers. So it's a very efficient way to do it. The other thing is the personal data that the system harvests. And if you look at the privacy rules, there really aren't any; what rules exist are vague and imprecise. And so your data is not really your data anymore.
Another thing is the Chinese intelligence support law that requires Chinese businesses to provide whatever support the state asks for. It's very different from the American version, where we can ask for assistance and they may or may not provide assistance, but we can't compel them without going through a lot of legal machinations. The Chinese system is very different. A minor bureaucratic functionary can direct companies to do things, and can actually impose pretty severe consequences on the company if they don't – to include putting people in jail.
So all those things are reasons why the Navy's guidance that came out was really good, where they said, We don't want you to use this, not even on your home computers. And I think that would be hard to do in the civilian world, to say, I don't want you to do this on your home computers.
Usher: We have entered a period of intensified innovation competition, primarily between the United States and the People's Republic of China, but not those two actors alone. This is emblematic of the sort of thing that we're going to face more and more frequently in the years ahead.
And it's not the first time. We don't have to go that far back to other recent examples of Chinese breakthroughs that shocked the marketplace and shocked the national security community. Think back to 2023, when Huawei released their Mate 60 Pro smartphone, timed perfectly to when our commerce secretary was paying a visit to Beijing. And the performance of that smartphone was remarkably good, though it has since proven not to be as good as our best smartphones. But it certainly was a strong showing and it shocked people.
DeepSeek is similar. I think the upshot is, it shows how quickly China can close the gap if a gap exists. And that ought to be concerning to the United States, not just for commercial and competitive reasons, but also for national security reasons.
We as an intelligence community need to devote more time, attention and resources to what we have termed techno-economic intelligence. This is nothing especially new to the IC – it's been done for years – but it needs more attention and more resources, so that policymakers down the line are not caught by surprise by events like this. I'm not privy to the current intelligence reporting, but judging by what we've seen in the open-source realm with regard to the market's reaction, and policymakers' reactions, I would hazard a guess that our intelligence community did not identify DeepSeek as being on the threshold of a major AI breakthrough. Where we would like to see the IC land in the not-too-distant future is exactly there: anticipating breakthroughs like this one. So it needs to shift how it collects against techno-economic issues like this, who it collects against, and how it goes about analyzing and assessing that data.
How should U.S. business and policymakers respond?
Ewbank: There's a wake-up call here. In the opportunity space, maybe this will generate some interesting developments here in the U.S., focused on cost efficiency of training new frontier models – certainly some really interesting work on energy efficiency, which is a huge challenge. So maybe that opens up new space there. I think it highlights the value of open source in digital tech, and how that can be super helpful in that collaborative effort between government and companies. Maybe that opens up some opportunity there.
But in terms of a national security response, it should encourage the administration and Congress and others to think about the impact and the role of export controls. Maybe the controls are as they should be, but the PRC did demonstrate that having that constrained environment actually prompted some creative thinking. So let's think about how we use export controls, and if we use them, we can't just say, OK, we're done. People innovate. And this highlights something that's been an issue for the last couple of years: many people in the U.S. national security community still view the PRC as simply a smash-and-grab, steal-data, steal-intellectual-property kind of operation. But in fact, real innovation is happening today in the PRC. This puts that issue front and center, and we need to think about that.
The last thing I would mention is that this highlights to me the critical importance of AI safety and governance. And these are the dry topics that people don't like to talk about, but those are the things that are going to prevent us from having robot overlords in the future. And so we really have to think about that. All the other things aside, if this development shows that new models can be developed rapidly and at low cost, what does that mean for the broader race towards artificial general intelligence? Does that mean that the timeline collapses a little bit? If it does, that just means that getting safety and governance right is all the more urgent, and we really need to focus on that.
Ledgett: I think the nation needs a privacy law. I think we need a law that provides some structure, as opposed to the patchwork quilt of state laws we have, which vary quite a bit. Probably the most impactful one is the California law. And I think once we have that, then we can dive into what that means for individuals, and you can opt in or opt out as you choose. And people should have that choice. I don't care a lot about my personal information, because there's not much of it that isn't already out there. So I don't worry about that as much, but lots of people do worry. And so people ought to have the ability to make that choice.
And I think there's space in this discussion to change the way that we do end-user licensing agreements. When you download a piece of software, you have to click to say, I accept, and 99.9% of people just scroll down to the bottom and click the button and say, I accept. I've done it, you've done it, everyone's done it.
And that's because these are written by lawyers for lawyers. And so they're 55 paragraphs long, and the part you care about is broken up between two or three paragraphs, and it's written in language that most people can't understand. They need to distill that and have a one-paragraph summary that says, Here's what this means. That would be something I think would be really useful for people to have. But we don't have that.
People fear what they don't understand, and the overwhelming majority of people don't really understand AI. And so that's going to come with time. And I think entities like The Cipher Brief have a role in that, and in helping educate people on AI.
Usher: DeepSeek’s model is openly available for others to unpack and understand, unlike the models from OpenAI, Anthropic and some of our other companies, which are closed source. This is a real boon to hostile cyber and disinformation actors around the globe. China, Iran, North Korea – these actors are now going to have available to them, at very little cost and with very little effort, access to a very capable AI model, and they can iterate on that to develop specialized models to conduct cyber and disinformation ops at scale. And it's already happening – I saw a recent report that literally dozens of cyber actors inside China have already been using Google's Gemini AI to write malicious code and to search for network vulnerabilities in foreign countries. Well, now with DeepSeek’s model, this capability is going to be diffused even further, and faster. So we need to guard against AI-enabled cyber and disinformation ops, even more than before.
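Usher’s point about diffusion is easy to make concrete: obtaining an open-weight model requires no vendor relationship, license negotiation or API key. A minimal sketch using the Hugging Face transformers library follows; the model ID is one of DeepSeek’s published distilled checkpoints, but treat the exact name as illustrative.

```python
# A minimal sketch of pulling and running an open-weight model locally.
# The model ID is illustrative; any open checkpoint works the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"

# Downloading the weights is a single call; there is no gatekeeper.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Once the weights are local, inference (and fine-tuning) is entirely
# in the user's hands -- the diffusion risk Usher describes.
inputs = tokenizer("Summarize the concept of model distillation.",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same openness that lets researchers audit the model also means any actor can fine-tune it offline, beyond the reach of the usage monitoring that closed API providers rely on.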
And the last point I would make is that the accusation that DeepSeek may have learned or benefited from U.S.-developed models underscores that we need to be able to defend national security AIs when we create them. We cannot have Chinese users logging on and cheating their way ahead by training their models on U.S. national security models. It absolutely raises the importance of defending our systems.