Bottom Line Up Front
- There is a race between great powers to develop the most cutting-edge and sophisticated approach to harnessing the promise of artificial intelligence.
- The U.S. has already demonstrated the utility of using AI in counter-terrorism operations, as evidenced by the success of Project Maven.
- America’s competitors are also using AI, but mostly for internal security and as a means to monitor and surveil their domestic populations, as China has already demonstrated.
- Non-state actors and terrorist groups could use AI in a number of ways, from social network mapping, to AI-enabled drone swarming, to the automation of social engineering attacks.
There is a race between great powers to develop the most cutting-edge and sophisticated approach to harnessing the promise of artificial intelligence (AI). Countries including the United States, Russia, and China have invested billions of dollars in the quest to become the global leader in AI, funding research and development of autonomous systems. China has openly stated that it seeks to become the global leader in artificial intelligence by 2030. While there is significant potential in the future of AI, there are also serious legal, policy and ethical questions that have yet to be fully considered. The race to lead the world in AI development goes hand in hand with related technologies and processes, including quantum computing, machine learning, human-computer interaction, blockchain and big data analytics. Almost all major countries and their militaries have developed, or are in the process of developing, national strategies for artificial intelligence.
The United States is seeking to leverage artificial intelligence to complement existing warfighting capabilities and combat support activities, including logistics, cyberspace operations, information operations and command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR). While some associate AI with robots and autonomous weapons in futuristic, science fiction-like war scenarios, various applications are already in use. For example, Project Maven, also known as the Algorithmic Warfare Cross-Functional Team, uses artificial intelligence to support the processing, exploitation, and dissemination of critical information, giving commanders greater situational awareness and improving decision-making. In Iraq and Syria, Project Maven has helped U.S. Special Operations Command (USSOCOM) identify objects, including people, vehicles and infrastructure, in videos relayed from ScanEagle drones.
The United States will not be the only player in this arena. Russian President Vladimir Putin has gone on record as saying, ‘Artificial intelligence is the future, not only for Russia, but for all humankind.’ He further noted, ‘It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.’ Two years ago, Russia created the Foundation for Advanced Studies, intended to be Moscow’s equivalent of the U.S. Defense Advanced Research Projects Agency (DARPA). Authoritarian states like China are already using AI, mostly for internal security, to surveil their domestic populations more closely. China has over 200 million surveillance cameras throughout its territory and plans to use AI to track the travel, personal communications and internet usage of its citizens, relying on the technology to help build a high-tech police state. On the battlefield, artificial intelligence is viewed as a force multiplier for conventional militaries attempting to close the gap with stronger powers, enabling faster and more reliable decision-making during conflict.
Experimenting with machine learning and AI will not be the sole purview of nation-states. Terrorists and other non-state actors will also seek to exploit emerging technologies toward nefarious ends, as they already have with end-to-end encryption, social media, virtual currencies and unmanned aerial systems, or drones. As Daveed Gartenstein-Ross has noted, terrorists could use AI in any number of ways, including social network mapping, AI-enabled swarming as a force multiplier for drones and the automation of social engineering attacks to enhance the extortion plots that help fund their organizations. Unlike with small drones, however, the barriers to entry in the AI space are significantly higher, meaning that for now, terrorist groups and other non-state actors remain at a disadvantage. Counter-terrorism specialists are also relying on AI, with some programs already in use to help identify and remove terrorism-related content from the web, an issue that continues to evolve as public-private partnerships form and national governments pressure technology companies to become more active in the fight against radicalization and extremism online.