Preparing Today for Tomorrow’s AI Wars

BOOK REVIEW: ALGORITHMS OF ARMAGEDDON: The Impact of Artificial Intelligence On Future Wars

By George Galdorisi and Sam J. Tangredi/U.S. Naval Institute Press

Reviewed by:  Glenn S. Gerstell

The Reviewer — Cipher Brief Expert Glenn S. Gerstell is a Principal with the Cyber Initiatives Group and Senior Adviser at the Center for Strategic & International Studies.  He served as General Counsel of the National Security Agency and Central Security Service from 2015 to 2020 and writes and speaks about the intersection of technology, national security and privacy.

REVIEW — “If current trends continue, the future AI leader – and specifically the leader in military applications – will not be the United States.”  That assertion is the compelling motivation for Algorithms of Armageddon: The Impact of Artificial Intelligence on Future Wars (Naval Institute Press, 2024), written by two well-regarded national security futurists, George Galdorisi and Sam J. Tangredi. There are ethical and other complex issues counseling caution in deploying military AI, and yet there may be no secure alternative, because in the authors’ words:

We don’t want to have to face an algorithm of Armageddon. But we don’t want other nations to be able to use military AI to control us.

That is the dilemma we as Americans — along with our democratic allies – need to discuss and need to resolve.

Their excellent book is intended “to provide a detailed and impartial picture of the current state and potential evolution of military applications of AI.”  Given the technical complexity of the topic, it succeeds in that goal in a surprisingly readable way. The first three chapters offer a useful explanation of big data, artificial intelligence, machine learning and autonomous systems, with a historical overview of how AI developed and, in particular, how it is being deployed by Russia and China. While all of that will be familiar to experts, it’s a welcome addition to the explanatory literature in this area. The middle, and most alarming, part of the book is a discussion of how AI is being weaponized; it considers whether autonomous AI-empowered weapons can truly remain under human control. The final chapters speculate, in a disturbingly realistic way, about how an AI-assisted or AI-directed global World War III might ignite and conclude.

In the course of this discussion, the authors reveal the ambivalence that often pervades discussions in this area: On one hand, the book is deliberately “a frightening polemic,” making it clear that AI can be misused, especially by authoritarian governments in irresponsible military applications without human governance. And on the other, it’s an exhortation for even more. The authors fear that “U.S. decision-makers [do not] fully realize that…we are already thrust into an AI race with the authoritarian powers” and that we have no choice but to win this race, no matter how pernicious the end might be.

Former U.S. naval officers, the authors are clearly knowledgeable about current weapons platforms. They explain some of the military’s current uses of AI-enhanced or governed weapons, noting that, in their view, the ideal human-machine combat team might look like an AI-enabled MQ-4C Triton air vehicle that could be used for remote scouting for a carrier strike group. Such a vehicle, supported and guided by human decision-making, would provide more accurate and quicker reconnaissance than any currently deployed manned aircraft. But as the current Ukraine drone war reveals, the “genie is already out of the bottle” (as the authors put it), since machine-on-machine warfare, with little or no real-time human intervention, is already upon us.

The book sketches out what a war between dueling machines might look like, opening with an AI-initiated massive missile attack by the People’s Republic of China on US naval forces in the Pacific. Under alternative scenarios, the attack could succeed or be thwarted by defensive AI. Yet it would of course be ideal if such an AI-versus-AI conflict could be avoided in the first place – the authors’ preferred course of action is the adoption of a global treaty limiting the first-strike use of AI weapons and requiring human control over lethal weapons. A laudable, but unlikely, or at least distant, solution.

The authors urge the public and policymakers to recognize five points, in their words:

  1. AI makes tyranny more effective.
  2. There is not going to be a global consensus concerning AI.
  3. Arms control treaties cannot effectively constrain AI.
  4. Silicon Valley seems somewhat ambivalent on defense issues which means they are somewhat ambivalent about our national security.
  5. The United States will lose the military AI race without a direct, significant effort.

Even though the book was written in 2023, its description of Silicon Valley’s ambivalence seems a bit dated, or at least exaggerated. Private equity is now eagerly backing start-ups that want to sell innovative technology to the Pentagon; the reluctance exhibited by some larger tech companies to aid the Defense Department has abated, especially after the invasion of Ukraine; more broadly, “industrial policy,” at least in the defense sector, is back in vogue (as exemplified by the CHIPS and Science Act); and the Pentagon itself is (slowly) attempting to reform its acquisition procedures to obtain new technologies at a scale and speed that will be meaningful.

Moreover, sophisticated discussions are occurring at the highest levels of government over the use of AI. Indeed, President Biden’s recent 35-page Executive Order 14110 establishing guidelines for the government’s use of AI is an important step and will spur more public dialog during the rulemaking process mandated by the Executive Order. Finally, notwithstanding the authors’ fears that our military vulnerabilities are going unnoticed, policymakers in the Pentagon and Congress, as well as outside experts, are sounding the alarm and starting to take action.

But these are quibbles over degrees of emphasis; overall, the book is surely correct that the American public and its leaders need to develop a deeper understanding of the risks posed by military AI and to take steps to preserve our dominance in that area – lest our national security be at risk from future algorithms of Armageddon.

Algorithms of Armageddon earns a prestigious 4 out of 4 trench coats.
