The Future of Artificial Intelligence

BOOK REVIEW: The New Fire: War, Peace, and Democracy in the Age of AI

By Ben Buchanan and Andrew Imbrie / MIT Press

Reviewed by James Voorhees

The Reviewer — James Voorhees is a cyber analyst with General Dynamics Information Technology. He has extensive experience as an analyst and engineer working on cybersecurity, mostly for federal agencies. He holds a PhD from the Johns Hopkins School of Advanced International Studies and is the author of Dialogue Sustained: The Multilevel Peace Process and the Dartmouth Conference.

BOOK REVIEW — Artificial intelligence (AI) is all the rage these days. For some, it is a technology that promises transformative progress. For others, the promise is dystopian, one that could bring an end to the human race. Ben Buchanan and Andrew Imbrie address both sides in their book, The New Fire: War, Peace, and Democracy in the Age of AI. Like fire, they say, AI is both ‘productive and perilous.’

Buchanan and Imbrie are both senior fellows at Georgetown University’s Center for Security and Emerging Technology, and both have served in government. Buchanan recently served on the staff of the Office of Science and Technology Policy in the White House; Imbrie is currently a Senior Advisor in the State Department. Both are well qualified to examine what AI means for national security and for geopolitics more generally.

In the heart of the book, they look at cybersecurity, the spread of disinformation, and lethal autonomous weapon systems (LAWS). They also examine how government and the private sector have worked together to use AI both in times of war and times of peace.

Two themes run throughout the book. The first is whether autocracies are better able to harness AI than democracies. The second is whether China has an advantage over the West when it comes to AI.

The authors describe the competing views of three groups in answering these questions: evangelists (those who see no downside to AI if it is well managed), Cassandras (who, like their Homeric namesake, predict disaster), and warriors (who may see both sides of AI but believe it must be harnessed to the national security wagon).

The book begins with descriptions of the technology, its history, and the people who made it. It looks at the triad that comprises AI (data, algorithms, and computing power) through projects that showed machines could beat humans at games: chess, the Chinese game of Go, and the video game StarCraft each fell to AI in turn.


The authors also look at how AI fails. Because AI is only as good as the data it gets, bias in data can produce bias in outputs. Amazon discovered this when it trained a hiring tool on its own data: the program overwhelmingly recommended men over women, despite efforts to avoid gender bias.

Opacity is another problem: it is often unclear why AI produces the results it does. This reduces the confidence one can have in those results, especially when those results can have dire, even life-threatening, consequences. A third problem is ‘specification gaming’: when people specify what needs to be done, they often make implicit assumptions that the machines can know nothing about. Mickey Mouse’s failure when instructing the broom in The Sorcerer’s Apprentice segment of Fantasia is a ready example of the mayhem that can result.

In the rest of the book, the authors carefully weigh the promise of AI against its perils as seen by evangelists, Cassandras, and warriors. Many evangelists, they find, have an anti-military, anti-national security bias. Google found this out after it began work with the Department of Defense on Project Maven; employee protests pushed the company to let the contract lapse. Other firms have also hesitated to work with the Pentagon for similar reasons. The Chinese Communist Party, in contrast, can compel any company to work with the state.

Lawmakers have raised concerns about the risk that AI-driven machines could kill people through error. There is also the moral question of whether machines should be able to decide on actions that would kill people with no human in the loop. This argument is largely between Cassandras and warriors. The former argue that the risks cannot be managed; the latter argue that they can. The warriors also argue that China, as an autocracy little constrained by moral concerns, will gain an advantage if it develops these weapons and the United States does not.

In their discussion of cybersecurity, the authors linger on the offensive side. They show what attackers can do using AI, which is formidable. It can be used to find vulnerabilities to attack, automate attacks, and deceive defenders. It can attack machine learning itself. The authors don’t say it, but Cassandras could argue that AI-based attacks can have dire, unforeseen consequences. The NotPetya malware, which crippled the Maersk shipping company and caused billions of dollars in collateral damage worldwide, is an example of what could happen. Much of what passes for AI in cyber defense is hype, but more is being done than the book describes. Moreover, AI is becoming necessary for cyber defense, given the growing amounts of data that defenders have to manage and the accelerating speeds at which computers operate.

AI has been essential to the spread of social media platforms like Facebook and Twitter. It has also enabled the accompanying spread of misinformation and disinformation, and the diminution of privacy, with which we have become familiar. It has made life online dystopic for many. The technology improvements that are making deep fakes look ever more real only promise to make the problems worse. Solutions are not obvious.

The authors give even-handed answers to their thematic questions: whether autocracies or democracies benefit most from AI, and whether the Chinese have an advantage over the West. Their assessment of the limitations facing Chinese President Xi Jinping does not go far enough; in my opinion, he faces hurdles that could easily prevent China from becoming the innovator he commands it to be. The changes they recommend for democracies would add to the advantages that democracies already have when it comes to AI. Some of their recommendations, however, such as changes to education and immigration in the United States, have been tried before. Others are simply not practical politically.

Much of the current commentary about AI, democracy, and China is alarmist. Buchanan and Imbrie provide a necessary corrective, setting out the potential and the perils of AI without veering into Pollyannaish optimism.
