EXPERT Q&A — Chinese AI startup DeepSeek shook the U.S. tech and business communities this week, with its launch of a free AI chatbot that it said could compete with major American competitors at a fraction of the cost. The DeepSeek assistant overtook ChatGPT in downloads on Apple’s app store on Monday, prompting a market frenzy that saw a tumble in the shares of major U.S. AI leaders, including Nvidia, Microsoft and Alphabet. On Wednesday, Chinese e-commerce giant Alibaba released a new version of its AI model, claiming it surpasses DeepSeek’s release.
The DeepSeek and Alibaba models suggest that China has worked around U.S. measures to restrict Chinese access to American-manufactured chips, and the disruption has prompted scrutiny from both U.S. tech leaders and the U.S. government. Microsoft and OpenAI are investigating whether DeepSeek harvested data in an unauthorized manner from OpenAI’s technology, and White House Press Secretary Karoline Leavitt said the National Security Council is “looking into” potential national security implications of DeepSeek.
All these developments raise questions about the role of AI in the tech race between the U.S. and China. The Cipher Brief spoke recently – before the DeepSeek story broke – with Retired Lieutenant General Michael Groen, to discuss the impact and development of AI, particularly the China questions, and AI’s applications in the military and national security space. Lt. Gen. Groen, who served as Director of the Joint Artificial Intelligence Center at the Department of Defense, told us he worries less about the technical challenges in AI development than the need for the U.S. military to think about integration, practical applications, and the cultural change he believes is needed for adopting AI.
“I think a lot of Americans are still fixated on the large language models, and ignoring the broad application that really is going to start to influence us across almost everything that we do,” Lt. Gen. Groen said. “We can’t imagine right now what the scale enterprise is for our military capabilities, for our intelligence capabilities.”
Lt. Gen. Groen spoke with Cipher Brief CEO Suzanne Kelly for an episode of The State Secrets Podcast, which you can listen to on Spotify or Apple Podcasts or watch on our YouTube channel. Their conversation has been edited for length and clarity.
Kelly: Give us your overall impression of the impact of AI and how quickly we all – private sector and government – need to understand the implications.
Lt. Gen. Groen: It kind of depends on where you poke the elephant; you get a different sensation at each spot. We are at a transformational point within this broad transformation, and what I mean by that is we are moving past a fascination with large language models. The coverage is almost more social media than tech media when we talk about the personalities, the wealth being generated, the corporations that might be making money; this has become a soap opera in much of the media space.
Let’s start talking about the broad implementation focus. How can we start to do things at scale? This is especially relevant in the Department of Defense and other large organizations. I think a lot of Americans are still fixated on the large language models, and ignoring the broad application that really is going to start to influence us across almost everything that we do.
Kelly: How worried are you about the ability of the U.S. to remain competitive, in terms of understanding how important it is to implement AI?
Lt. Gen. Groen: I have to start with the “Vitamin I” deficiency, “I” being imagination. We can’t imagine right now what the scale enterprise is for our military capabilities, for our intelligence capabilities. Building those scaled enterprises requires a vision. When you can see it, when your workforce can see it, when your customers can see it — as in, I’m going to get this product delivered to me in this way, and that’s exactly what I want, or I can tweak it if it’s not — when you get to that place, now we’re really starting to launch implementation at scale. I see lots of conversation about AI today and think [we are] digging into niche issues that in the big scheme of things are probably not all that important, when we should be actually implementing and practicing our large-scale data environments.
So how do we get started there? Where’s our gym that we can go to, to actually start doing this at scale? We will have lots of bad behaviors and lots of tribalism on the front end that we need to learn to overcome. Honestly, I’m not concerned about the technical stuff. We’ve got plenty of smart people who can do that. But how do we move the culture of the Department of Defense and the IC [Intelligence Community] into something that’s integrative, rather than tribal? Many have tried, many have tilted at this windmill. We have to do it again, though, because this is the difference between success and failure, specifically with reference to China.
Kelly: Five years from now, where does the U.S. military need to be, and what is the first step to get there?
Lt. Gen. Groen: It sounds very foreboding – we have this massive project to do. But AIs can start really small. There’s still the conversation about, Hey, did you hear AI did this or AI did that? Humans have used AI to identify objects in imagery and do assessments in GEOINT and every other “INT”. All of these things are happening now, but still, society writ large – at least American society – still thinks of this through the lens of, This scary thing called AI, it has its own will, it’s doing its own thing. That was a useful conversation when we were all getting used to this, but we’re way past that. We have to be past that.
Now let’s talk about the practicalities of a data environment and things like, is the Air Force going to let the Marine Corps use this data for this algorithm? All the complexities of that data mesh environment and how we actually make that go, the only way to make progress here is getting reps and sets, doing it – this exercise, this application out in CENTCOM, this application here and there. When we actually employ this, even for demonstration purposes, we move the ball forward. If you’re familiar with GIDE, this is the Global Information Dominance Experiment — and we started that years ago — that was a demonstration capability, where the department would go out to a combatant command, actually identify problems to be solved, and then actually code it and build an infrastructure for that. [It had] example after example, so that pretty soon you actually start to think systemically, because you have so many examples. That energy of real application is so important. We cannot fixate on policy and technology and culture. We have to fixate on problem solving and application.
Kelly: And information sharing. Without going too deep into it, can you give us a sense of how that works, and why that’s such a critical component to having AI be an effective part of the U.S. military strategy?
Lt. Gen. Groen: It begins with the tribalism inherent in the Department of Defense. When they built the Pentagon, they knew there was going to be a Space Force in the 1950s, so they built the Pentagon with five sides, which means there are five corners. So each service has a corner that they can back into and protect their data and their money from the other services. I joke, but that very tribal culture is derived from our origins of buying things, tanks, airplanes, ships, whatever. Now, we’re not buying things – we’re buying integration, and we’re using the data that we’ve collected as services, as war-fighting organizations. It’s not that we don’t know tech; there are plenty of people in the Department of Defense that know tech. The problem is, I think a lot of folks think, Well, we’ll just call 1-800-SILICONVALLEY and they’ll come and they’ll just code all this up and it’ll be fine, just like we do when we buy a system. But that’s not how it works, because this is so deeply inherent in how do you do command and control? How do you do fires? How do you do aviation? This is deep military expertise combined with technological expertise. Neither one alone can do this.
I don’t want to say we’ve turned our back, but we have disincentivized focus on real military application here. That should be a little bit disturbing to all of us. We need practical application now, so that we can imagine what practical application could look like when we’re doing it at scale.
Kelly: Let’s go back to China. Talk to me a little bit about your primary concerns regarding China and the way that they are both developing and implementing this technology that’s so critical.
Lt. Gen. Groen: Chinese universities are cranking out AI expertise and writing papers and doing all the science. They do not lack for the ability to understand the technology. But what always strikes me is that AI is an information technology. It’s enabled by the flow of information across industries that maybe are not even related, but the data is relevant. Information flow in a closed information society is inherently impaired. You cannot operate with the flexibility and the imagineering, the innovation. You can’t do that at speed if you always have to look over your shoulder to see if the party guy agrees with what you’re doing. So I think even with all the great science and the great technology that’s going on in China, there are real cultural inhibitions to applying this well.
You can see this in, for example, the Joint Swords exercises, the rehearsals for the seizure of Taiwan. Because the Chinese military is not as imaginative and is very rote in its actions, it has to rehearse over and over again. If you can’t think your way through it while you’re executing it, then you have to rehearse the rote methodology over and over again. And this is what we see. They’re pushing landing craft into the Taiwan Strait; they’re doing these large-scale exercises; they’re flying airplanes over the same routes they would fly if they were going to try to seize Taiwan. So they have the technological tools at their disposal, but in terms of the maturity of their military capabilities – command and control, integrated fires, large-scale movements, tactical resupply, those kinds of really important things – they don’t have that yet.
I would offer a note of caution, though. We spend a lot of time thinking, Well, they don’t have a seven-nanometer chip, or they’re not pursuing two-nanometer technology. Huawei is dominating global telecommunications, and it doesn’t need seven-nanometer chips to do that. It can do just fine with 27-nanometer chips or 52-nanometer chips. We have to be very careful not to be so overconfident as to say, Well, their technology is not quite as fast as ours. As long as the technology is fast enough to get the job done, it doesn’t matter. I think we Americans, especially in the Intel space and the defense space, can become overconfident: Hey, they’re not as good as us. They might be good enough – and that’s what we should really pay attention to.
Kelly: When you were still in the Pentagon, I know that you understood and leveraged those private-sector relationships. What do you think now that you’ve been outside as well? Do you see the landscape differently?
Lt. Gen. Groen: Yes, I do. Many of the same cultural challenges still apply. The last thing I want to do is go down the rabbit hole of defense acquisition; that ground has been covered over and over again, yet we are still not willing to make the cultural changes necessary to fix it. But that’s just one artifact. We could fix it overnight if we had the political will to do it.
Our young soldiers and sailors and airmen and guardians and Marines all have to really see this in their heads. What is their war-fighting function? I was blown away, when I was in uniform, by the expertise of functional communities. If you wanted to talk [about] how does aviation really work, you could find somebody who could talk to you for days about the nuances of the profession of aviation. Same thing [about] the profession of logistics or the profession of artillery fires. All of these things – we’re deeply steeped in that. That kind of nuance is what makes AI successful in a dirty, dangerous, chaotic environment. Even with all the data you can handle, you’re still going to have chaos, and command and control is going to be extraordinarily difficult in a fight with this kind of tempo that’s enabled by AI.
So these cultural things are so important, yet the institutions discount those – well, that’s just people’s stuff and we’ll figure it out. Yeah, we will. But better that we figure it out now, rather than some young sailors have to figure it out while there’s a Chinese missile inbound to their ship, right? It is that important that we get this technological transformation underway.
Read more expert-driven national security insights, perspective and analysis in The Cipher Brief because National Security is Everyone’s Business.