
Guns Don’t Kill People…And Neither Do Robots (Yet).

By Katrina Manson / W.W. Norton & Company

Reviewed by: Neal A. Pollard


The Reviewer – Neal A. Pollard is a partner at Control Risks Group and was the lead cybersecurity executive for a global Swiss bank. Prior to joining the private sector in 2011, he spent 18 years in the US counterterrorism community as a defense contractor and an intelligence officer. In 1996, he co-founded a counterterrorism corporation, which was sold in 2006 to Blackwater’s holding company. He is working on his first novel, “Ordinary Spies,” a story of Silk Road gastronomy and nuclear terrorism.

REVIEW: Give me a Barrett sniper rifle, and I still couldn’t hit the broad side of a barn. Give a US Marine sniper a Barrett, and you have an altogether different level of lethality. But make no mistake – the talent is in the Marine, not the rifle.

Katrina Manson’s Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare is a vivid, well-researched account of how artificial intelligence (AI) is making Marines and other American warfighters more lethal. It isn’t a book about sentient machines deciding whom to kill. Rather, it tells a story about Defense Department bureaucracy, software, contractors, battlefield frustration, and the institutional drive to make war faster, scalable, and safer for the American warfighter. It is a definitive history of how AI is embedding into the operational machinery of “finding, fixing, and finishing” targets.

Manson builds the book around Drew Cukor, a Marine intelligence officer who became the animating force behind “Project Maven.” Maven is a Defense Department program that began in 2017 as an effort to use AI to analyze drone and satellite imagery, and evolved into something much larger: an integrated system for assembling and analyzing data, identifying targets, pairing them with weapons, and speeding the “kill chain” from detection to destruction. Cukor is an ideal protagonist, because he is both insider and rebel: a believer in American military power and critic of its bureaucracy.

Thus, Manson’s compelling narrative begins not with killer robots but with a more mundane and persuasive grievance: America’s wars were being fought with terrible information tools. In Afghanistan and Iraq, Cukor lived and fought on a battlefield (carrying around a computer server that doubled as a space heater) where intelligence was fragmented across Excel, Word, PowerPoint, maps, and improvised workarounds. One of the book’s best lines, quoting an Army artillery officer, captures the absurdity: “We’ve killed more people on Office than you’d ever imagine.” Before AI became a moral panic, war was already being run through clumsy administrative systems whose deficiencies had lethal consequences for the US warfighter. Cukor’s priority is clear: collapse the distance between information and action. He sees his and America’s moral obligation equally clearly: use data better to protect US warfighters and make them better at their jobs.

That gives the book its first theme: war as a data problem. Manson presents Cukor and his colleagues as driven not by operational failure alone — friendly-fire errors, missed signals, repeated relearning of local conditions — but by the inability of American forces to turn collected data into battlefield understanding. A second theme is bureaucratic insurgency. Manson describes Cukor deftly navigating the Pentagon not merely as a military department but as a budgetary and organizational system that rewards inertia, hardware, and process over speed, software, and experimentation. Project Maven, in her telling, is not just a technical initiative but a campaign to reconfigure how the U.S. military thinks about data, intelligence, targeting, and acquisition.

A third theme is the fusion of Silicon Valley and the national security state. Manson traces how Cukor’s drive to modernize military intelligence and targeting helped pull commercial AI and data firms deeper into defense work. Palantir, Amazon Web Services, Microsoft, Nvidia, Anthropic, and OpenAI all appear as part of an ecosystem shaped by Pentagon demand. Her point is not simply that technology firms now work with the military, but that war has become a major venue in which commercial AI finds use, legitimacy, and funding.

Manson’s reporting is deep, drawing on over two hundred insiders and opponents, as well as internal documents, emails, notes, and other nonpublic materials. The book feels grounded in institutions rather than in drama. Its narrative structure organizes the story around the logic of the targeting cycle—find, fix, finish, feedback—which keeps a complicated subject legible without flattening it into slogan or prophecy.

The book risks overstating the novelty of AI-augmented warfare. At least since man turned a stick into a club, militaries have used new technologies to make humans better at killing. In that sense, Maven is not a historical rupture, just another chapter about improving the speed and precision of violence. The maverick-versus-bureaucrat trope also occasionally shows through too clearly: visionary operators in the field, calcified troglodytes in headquarters, and eventual triumph through grit and ingenuity.

Dismissing Maven as merely “more efficiency” would miss what is genuinely unsettling in Manson’s account. Its narrative strength lies in its uncertainty – like that of many of its interviewees – whether AI in warfighting will ultimately be a good or bad thing. If AI systems can dramatically increase the speed at which data is sorted, patterns are recognized, and targets are generated, then the practical meaning of human judgment changes. A human may remain in the loop formally while becoming, in practice, a validator of machine-assisted workflows operating at industrial speed.

This is where the moral pressure lies, and the book captures well two overlapping debates behind it. The first is whether commercial AI should be used to support lethal military operations at all. The Google employee revolt against Maven in 2018 is an example she uses effectively to show the discomfort inside technology companies about becoming instruments of war. Google eventually embraced its role, but the recent drama surrounding the US Government’s shifting view of Anthropic – from Department of Defense supply-chain risk to indispensable defender of national cybersecurity – shows how inconsistent even senior policymakers remain.

The second moral debate is whether AI should merely augment human decision-making or eventually replace it in national security. Manson gives readers something better than Hollywood for understanding the distinction. Autonomous weapons are easy to dramatize through The Terminator or WarGames, both mentioned as what Maven is not. What Maven is, is harder to define, precisely because the practical line between AI as a tool and AI as a killer blurs in the gray space where AI assists targeting, narrows choices, and accelerates decisions without necessarily becoming the final trigger-puller.

That undefined line is central to the book. Manson shows genuine debate among officers over whether Maven is even a weapons system, let alone one that might eventually remove the human from the loop. Manson quotes a Navy officer saying, “I think if you’re going to be making decisions through engagement, soldiers should be trained as if [Maven] were a weapons system.” Hard to disagree, but that doesn’t make it a weapons system. That’s the unresolved question of the book: are Maven and its progeny weapon systems?

One of the strongest themes running through the book is the insistence – sometimes earnest, sometimes uneasy – that a human always remains involved in the decision to kill someone on the battlefield. Cukor and his colleagues frame Maven’s purpose not as “Can we kill more people?” but as reducing unacceptable losses among American warfighters and civilians. That sharpens the moral debate, but it does not clarify the appropriate balance in using AI for national security. Manson quotes one general concerned that AI might lead to more harm: “It just depends on how humans decide to use it.”

To me, that distills the lesson of the book: the danger of AI weapons systems is human. But which humans – technologists, operators, or policymakers? The moral argument for Maven is clear: it saves American lives. Ironically, that is also the moral hazard. The unacceptability of Americans dying in combat is what gives our political leaders pause before sending warriors into harm’s way. As the book points out, AI might give political leaders unfounded confidence that this risk has diminished, encouraging less circumspection before going to war, or it might distance warriors from the decisions and consequences of killing other humans, turning them into technicians and desensitizing them to the horrors of war.

This recalls the quote attributed to Gen. Robert E. Lee: “It is well that war is so terrible, otherwise we should grow too fond of it.” That is the worst part of AI: it removes the human from the emotional self-accountability of a bad decision. AI might know when to pull the trigger. Will it know when not to?


Project Maven earns a prestigious 4 out of 4 trench coats
