The explosion of data in the digital world has exposed people to unprecedented volumes of information. But just as this data will increase exponentially with time, so too will the number of questions that can be posed against these expansive datasets. To find the insight buried in this immense volume of data, manpower alone will not suffice.
Robert Griffin, the CEO of Ayasdi, told an audience at The Cipher Brief’s Annual Threat Conference in Sea Island, Georgia, that “the challenge is that you have to be able to sort through the noise.”
Artificial intelligence can use predetermined algorithms to sift through troves of data, but in order to build such machine learning systems, Griffin argues, AI must be able to learn through observation to predict the future, transparently justify why it came to the conclusions it did, and react accordingly.
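Griffin did not describe a particular technique, but one minimal illustration of transparent justification is an interpretable model whose learned rules can be printed and audited. The sketch below uses a shallow decision tree on synthetic data; the feature names and the task are hypothetical stand-ins, not anything presented at the conference.

```python
# A minimal sketch of "transparent justification": an interpretable
# classifier whose decision path can be printed in plain terms.
# The feature names below are hypothetical illustrations.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for intelligence-style tabular data.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["signal_volume", "geo_anomaly", "contact_overlap", "timing_drift"]

# A shallow tree stays human-readable: every prediction maps to an
# explicit chain of threshold tests.
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X, y)

# Print the learned rules so an analyst can audit *why* the model
# reaches a given conclusion.
print(export_text(model, feature_names=feature_names))
```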
But how far along is the development of artificial intelligence?
“Today,” Tim Estes, the Founder and President of Digital Reasoning, told the room, “we actually have better than human level understanding of classifying images,” with effective audio and text analysis on the horizon. “We are starting to see that computers are able to take very limited training and be able to figure out the way things can work in multiple languages with no human tagging of a given language.”
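Estes did not name a specific system, but the capability he describes resembles zero-shot cross-lingual classification with a pretrained multilingual model: the model labels text in languages it was never explicitly annotated for. As a rough sketch using the open-source Hugging Face transformers library (the model choice and labels are illustrative assumptions):

```python
from transformers import pipeline

# A hedged illustration of understanding "with no human tagging of a
# given language": a multilingual model pretrained across many languages
# classifies text it was never explicitly labeled for. The model and
# labels here are assumptions for illustration, not Estes' system.
classifier = pipeline(
    "zero-shot-classification",
    model="joeddav/xlm-roberta-large-xnli",
)

labels = ["threat report", "logistics", "weather"]
for text in [
    "Troop movements were observed near the northern border.",                 # English
    "Des mouvements de troupes ont été observés près de la frontière nord.",  # French
]:
    result = classifier(text, candidate_labels=labels)
    print(result["labels"][0], "<-", text)
```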
A remaining complication is that data is hardly ever aggregated in one place, particularly within intelligence agencies, which often compartmentalize data to minimize insider threats. “We need to get beyond aggregated data, to a model which can learn the knowledge in data, and transfer that knowledge safely without having to move the data around,” Estes said. This could help break down stovepipes while also mitigating the risk of major security breaches.
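Estes did not specify a mechanism, but one family of approaches that fits this description is federated learning: each compartment trains a model on its own data in place, and only model parameters, never raw records, are exchanged and averaged. A minimal sketch, with synthetic stand-in data and a deliberately simple model:

```python
import numpy as np

# A minimal federated-averaging sketch: each "compartment" trains a
# local logistic-regression model on data that never leaves the site;
# only the learned weights are shared and averaged centrally.
# All data here is synthetic -- a stand-in, not a real pipeline.
rng = np.random.default_rng(0)

def local_train(X, y, w, lr=0.1, epochs=50):
    """Plain gradient descent on logistic loss, run entirely on-site."""
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid
        grad = X.T @ (preds - y) / len(y)      # logistic-loss gradient
        w = w - lr * grad
    return w

# Three compartmented sites with private data drawn from the same task.
true_w = np.array([2.0, -1.0, 0.5])
sites = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = (X @ true_w + rng.normal(scale=0.1, size=200) > 0).astype(float)
    sites.append((X, y))

# Federated rounds: broadcast global weights, train locally, average.
global_w = np.zeros(3)
for _ in range(10):
    local_ws = [local_train(X, y, global_w.copy()) for X, y in sites]
    global_w = np.mean(local_ws, axis=0)   # only parameters move, not data

print("learned weights:", global_w.round(2))
```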
The potential artificial intelligence holds for understanding national security threats is palpable, but Doug Wise, the former Deputy Director of the Defense Intelligence Agency and a Cipher Brief Expert, took the concept one step further.
Wise told the Threat Conference audience that “the warfare of the future will be multi-dimensional, multi-domain, all engaging, with intelligence collection happening simultaneously as kinetic action. Some of the weapons will be manned, some will be automated and remotely operated – like the MQ-9 – but the majority of them will be self-operated and self-aware.”
In his view, there is a strategic necessity to incorporate AI technology for military – perhaps even lethal – applications. The reasoning can be found at the foundation of Deputy Secretary of Defense Robert Work’s Third Offset strategy. Much like tactical nuclear weapons and precision-guided munitions set the United States apart from near-peer adversaries in decades past, advances in artificial intelligence are intended to regain technological superiority over near-peer powers like Russia and China.
These autonomous weapons and reconnaissance systems will operate in all domains of war – air, land, sea, space, cyberspace, and the electromagnetic spectrum. “All of those platforms, hundreds if not thousands of them, have to be functioning with some degree of unity of effort and simultaneously,” said Wise.
Moreover, given that these weapons could be operating in contested space – for example, en route to strike targets deep inside North Korea – they could be susceptible to disruptive cyber and electronic warfare capabilities. If they are not given a level of autonomy to continue their predetermined mission despite a severed remote-control link, they will be useless in a conventional war of the future.
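No public design exists for such systems, but the fallback behavior Wise describes can be sketched as a simple mode switch: lose the control link, continue the pre-authorized plan rather than going inert. Everything in the sketch below – the states, the platform, the waypoints – is a hypothetical illustration:

```python
from dataclasses import dataclass
from enum import Enum, auto

# A rough sketch of the fallback logic described above: if the remote
# control link is severed mid-mission, the platform continues its
# pre-authorized plan autonomously rather than going inert.
# All names and states here are hypothetical illustrations.

class Mode(Enum):
    REMOTE_CONTROL = auto()   # operator in the loop
    AUTONOMOUS = auto()       # link lost: execute preplanned mission
    RETURN_TO_BASE = auto()   # mission complete or aborted

@dataclass
class Platform:
    waypoints: list[str]
    mode: Mode = Mode.REMOTE_CONTROL

    def tick(self, link_up: bool) -> str:
        # Degrade gracefully instead of failing when jammed.
        if not link_up and self.mode is Mode.REMOTE_CONTROL:
            self.mode = Mode.AUTONOMOUS
        if not self.waypoints:
            self.mode = Mode.RETURN_TO_BASE
            return "mission complete, returning to base"
        wp = self.waypoints.pop(0)
        return f"[{self.mode.name}] proceeding to {wp}"

uav = Platform(waypoints=["ingress", "target", "egress"])
for link in (True, False, False, False):   # link jammed after first leg
    print(uav.tick(link_up=link))
```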
But should a machine be allowed to operate without human involvement? It may have to, according to Wise. Should adversaries field similar lethal autonomous systems, decisions will need to be made at machine speed – faster than any human brain can process. How does this affect traditional concepts of battle management? “From the Department of Defense standpoint, what we need is a virtual Clausewitz, Marshall, and Patton to really provide the battlefield command and control,” said Wise.
Perhaps most importantly, in Wise’s forecast, these autonomous weapons systems would have to align with U.S. foreign policy – and American values. “How do you code morality into an autonomous or self-aware machine?” Wise asked.
Codifying such concepts of values and morality into weapons systems will require not only a deep understanding of how those values translate into technology, but also difficult answers to questions about ourselves.
Levi Maxey is a cyber and technology analyst at The Cipher Brief. Follow him on Twitter @lemax13.