A New Warfighting Paradigm

By Walter Pincus

Pulitzer Prize-winning journalist Walter Pincus is a contributing senior national security columnist for The Cipher Brief. He spent forty years at The Washington Post, writing on topics that ranged from nuclear weapons to politics. He is the author of Blown to Hell: America's Deadly Betrayal of the Marshall Islanders. Pincus won an Emmy in 1981 and was the recipient of the Arthur Ross Award from the American Academy of Diplomacy in 2010. He was also a team member for a Pulitzer Prize in 2002 and the George Polk Award in 1978.

OPINION — “Americans have not yet seriously grappled with how profoundly the AI (Artificial Intelligence) revolution will impact society, the economy, and national security,” is one of the opening sentences of the draft final report from the National Security Commission on Artificial Intelligence that is now being publicly circulated for comment.

Set up by Congress in 2018, the Commission was tasked “to consider the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States.”

Chaired by Eric Schmidt, the former CEO and chairman of Google, with former Deputy Defense Secretary Robert Work as vice chairman, one of the Commission’s studies has been of the “risks associated with United States and foreign country advances in military employment of artificial intelligence and machine learning, including international law of armed conflict, international humanitarian law, and escalation dynamics.”

Finding that “AI-enabled systems will likely increase the pace and automation of warfare across the board,” one of the Commission’s most dramatic recommendations is for American leadership to “clearly and publicly affirm existing U.S. policy that only human beings can authorize employment of nuclear weapons and seek similar commitments from Russia and China.”

The Commission added, “The United States should make a clear, public statement that decisions to authorize nuclear weapons employment must only be made by humans, not by an AI-enabled or autonomous system, and should include such an affirmation in the DoD’s next Nuclear Posture Review.”

Along with Russia and China, the U.S. should also press other nuclear states, meaning the U.K., France, India, Pakistan, Israel and North Korea, to issue similar statements. However, the Commission recognized that while “political commitments that only humans will authorize employment of nuclear weapons would not be verifiable, they could still be stabilizing.”

The Commission did recommend, “The United States should actively pursue the development of technologies and strategies that could enable effective and secure verification of future arms control agreements involving uses of AI technologies. Although arms control of AI-enabled weapon systems is currently technically unverifiable, effective verification will likely be necessary to achieve future legally binding restrictions on AI capabilities.”

The Commission also recognized that “controlling the proliferation of AI-enabled and autonomous weapon systems poses significant challenges given the open-source, dual-use, and inherently transmissible nature of AI algorithms.”

The use of AI in weapon systems has raised issues about whether such systems are lawful, safe, and ethical. The United Nations Convention on Certain Conventional Weapons (CCW) has, since 2014, held meetings on “emerging technologies in the area of lethal autonomous weapon systems.” Discussion has focused on whether they fit within the law of armed conflict or if additional measures are needed to assure that humans maintain control over the use of force.

Critics have argued that limits should be placed on AI-controlled weapons systems, but the Commission said it “does not support a global prohibition of AI-enabled and autonomous weapon systems.” It believes “existing DoD procedures are capable of ensuring that the United States will field safe and reliable AI-enabled and autonomous weapon systems and use them in a manner that is consistent with IHL (International Humanitarian Law, also referred to as the law of armed conflict).”

But attention to AI as related to nuclear and other weapons should not be the main take-away from the Commission report.

I have read a fair amount about AI, and written a few columns about it, but this report, being released March 1, makes clear not only the significance of AI's impact in the coming years, but also its role in changing human existence.

The Commission’s draft report notes Thomas Edison’s 1901 prediction that the light bulb and electricity represented a “field of fields” that held “the secrets which will reorganize life of the world.” It adds that although “AI is a very different kind of general-purpose technology…we are standing at a similar juncture and see a similarly wide-ranging impact.”

The draft report states that the “rapidly improving ability of computer systems to solve problems and to perform tasks that would otherwise require human intelligence is transforming many aspects of human life and every field of science. It will be incorporated into virtually all future technology. The entire innovation base supporting our economy and security will leverage AI. How this ‘field of fields’ is used—for good and for ill—will reorganize the world.”

AI, the report says, has helped predict the spread of COVID-19, has made it possible to speed up drug and therapeutic discoveries to combat the pandemic, and is “compressing innovation timescales in other disciplines, turning once fantastical ideas in areas like biotechnology into realities.” On a more practical level, AI is monitoring traffic flow and safety and automating routine manufacturing and office functions.

In the national security area, AI is providing what the report calls “a new warfighting paradigm” in which America’s competitors, like Russia and China, are making substantial investments. The report predicts that in the future battlefield, “Advantage will be determined by the amount and quality of a military’s data, the algorithms it develops, the AI-enabled networks it connects, the AI-enabled weapons it fields, and the AI-enabled operating concepts it embraces to create new ways of war.”

It is not as if there is no movement to integrate AI into today’s military. The Joint Artificial Intelligence Center (JAIC) was established in 2018 as the focal point of DoD AI strategy. In 2019, it developed two initiatives covering predictive maintenance and humanitarian assistance and relief programs. At the Defense Advanced Research Projects Agency (DARPA), a broad range of AI research programs is underway under a $2 billion “AI Next” program begun in 2018. They dealt originally with business processes, security vetting and accrediting software programs, but there has been substantial progress since then. One example: last September, DARPA tested an Air Force rapid software integration tool developed to assist a mission commander in rapidly identifying and selecting options for tasking across military domains and services in order to hit incoming targets.

Nonetheless, the Commission said, “Today’s DoD is trying to execute an AI pivot, but without urgency.” It complained that the Pentagon “remains locked in an industrial age mentality in which great-power conflict is seen as a contest of massed forces and monolithic platforms and systems” at a time when “the speed of digital transformation punctuates the risk of not pivoting fast enough.”

The report said, “AI will transform the way war is conducted in every domain—from undersea to outer space, as well as in cyberspace and along the electromagnetic spectrum.” Such elements as “strategic decision-making, operational concepts and planning, tactical maneuvers in the field, and back-office support,” will be affected. “AI-enabled micro-targeting, disinformation, and cyber operations…will reshape many attributes of war—such as its speed, tempo, and scale; the relationships service members have with machines; the persistence with which the battlefield can be monitored; and the discrimination and precision with which targets can be attacked,” according to the report.

Faced with those facts, the report recommends, “The Department must act now to integrate AI into critical functions, existing systems, exercises and war games to become an AI-ready force by 2025.” That would mean, “Warfighters enabled with baseline digital literacy and access to the digital infrastructure and software required for ubiquitous AI integration in training, exercises, and operations.”

The Commission even proposed some structural changes. It said DoD should create a “Steering Committee on Emerging Technology, tri-chaired by the Deputy Secretary of Defense, the Vice Chairman of the Joint Chiefs of Staff, and the Principal Deputy Director of National Intelligence,” and make sure “the JAIC Director remains a three-star general or flag officer with significant operational experience who reports directly to the Secretary of Defense or Deputy Secretary of Defense.”

One other Commission suggestion: that an AI Operational Advocate, expert in AI systems, be assigned to the staff of every Combatant Command “to advise the commander and staff on the capabilities and limitations of AI systems, and identify when AI-enabled systems are being used inappropriately.”

Read more expert-driven national security perspective, insight and analysis in The Cipher Brief
