Congress Needs to Keep Up with AI

OPINION – The Defense Department’s unclassified investment in Artificial Intelligence (AI) has grown from $600 million five years ago to a proposed $2.5 billion for fiscal 2021, with some 600 AI projects already underway, according to an updated Congressional Research Service (CRS) report titled Artificial Intelligence and National Security, released last Wednesday.

“Artificial intelligence (AI) is a rapidly growing field of technology with potentially significant implications for national security,” the CRS report says, adding, “As such, the Department of Defense (DOD) and other nations are developing AI applications for a range of military functions.”


The intelligence community, mainly through the Intelligence Advanced Research Projects Activity (IARPA), has AI projects underway, along with the CIA. One, called Finder, is designed to give IC analysts tools to determine “where in the world images or video were taken,” said Jill Crisman, IARPA’s original program manager for Finder.

Another program is called Automated Low-Level Analysis and Description of Diverse Intelligence Video (Aladdin). It focuses on the massive number of privately shot video clips uploaded to the Internet. YouTube alone generates 100 hours of video every minute, according to the IARPA website.

The Aladdin video project “seeks to combine the state-of-the-art in video extraction, audio extraction, knowledge representation, and search technologies in a revolutionary way to create a fast, accurate, robust, and extensible technology that supports the multimedia analytic needs of the future,” according to IARPA. If successful, it could help find videos of those who prepared martyr statements before committing terrorist bombings, or perhaps even of those still planning them.

AI is being employed to help the Air Force do what it calls “predictive maintenance” on such aircraft as the C-5, the KC-135 and the B-1, with plans to expand that to 12 other systems. On August 12, the Air Force published a notice seeking potential contractors who could “create and maintain a system to digitally monitor and manage the health of the Minuteman III [ICBM] weapon system.”

The Minuteman III system is currently managed through monitoring subsystems and components that involve some 2,400 items. Multiple systems and databases generate data for different teams that monitor and manage the health of the system, but “there is no current system that generates a merged health metric, actively or passively.”

What the Air Force is seeking is an “Artificial Intelligence learning tool that can help synthesize data,” along with the ability to present a “visual depiction of health of the [Minuteman III] fleet.”
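
As an illustration of what such a “learning tool” might do, here is a minimal sketch, in Python, of merging separate subsystem readings into one fleet-wide health metric with a crude text “visual depiction.” The subsystem names, weights, and scores are hypothetical; the real Minuteman III data sources and scoring method are not public.

```python
# Minimal sketch: merging per-subsystem readings into one health metric.
# Subsystem names, weights, and scores are hypothetical; the actual
# Minuteman III monitoring data and scoring method are not public.

SUBSYSTEM_WEIGHTS = {
    "guidance": 0.4,        # hypothetical relative importance
    "propulsion": 0.4,
    "ground_support": 0.2,
}

def merged_health(readings: dict) -> float:
    """Combine per-subsystem scores (0.0-1.0) into one weighted metric."""
    total = 0.0
    for name, weight in SUBSYSTEM_WEIGHTS.items():
        scores = readings.get(name, [])
        subsystem_score = sum(scores) / len(scores) if scores else 0.0
        total += weight * subsystem_score
    return total

def health_bar(score: float, width: int = 20) -> str:
    """Crude 'visual depiction' of fleet health as a text bar."""
    filled = round(score * width)
    return "[" + "#" * filled + "-" * (width - filled) + f"] {score:.0%}"

if __name__ == "__main__":
    # Readings as they might arrive from separate databases.
    readings = {
        "guidance": [0.95, 0.97, 0.90],
        "propulsion": [0.88, 0.91],
        "ground_support": [0.99],
    }
    print(health_bar(merged_health(readings)))
```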

The Defense Advanced Research Projects Agency (DARPA) has described many unclassified programs, such as Warfighter Analytics using Smartphones for Health (WASH). WASH uses data collected from a soldier’s cellphone which, with novel algorithms, can support continuous, passive, real-time assessment of the U.S. warfighter’s health status. Perhaps in time, the technique could be used to judge an enemy military unit’s readiness to fight.

According to the CRS report, the WASH program’s current objective is the extraction of physiological signals from cellphone data, signals that it says “may be weak and noisy” but can be accumulated through mobile device sensors. My iPhone has some of these under “Health,” which track walking, climbing, height, and weight, and which may be able to identify latent or developing health disorders.
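
A minimal sketch of the kind of signal cleanup the report alludes to: smoothing a weak, noisy motion-sensor signal from a phone with a simple moving average. The sample values and window size here are made up for illustration; WASH’s actual signal-extraction algorithms have not been published.

```python
# Minimal sketch: smoothing a weak, noisy sensor signal with a moving
# average. The samples and window size are made up for illustration;
# WASH's actual signal-extraction algorithms are not public.

def moving_average(samples: list, window: int = 5) -> list:
    """Return a moving average of `samples`, shrinking the window at the edges."""
    smoothed = []
    for i in range(len(samples)):
        start = max(0, i - window + 1)
        chunk = samples[start:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

if __name__ == "__main__":
    # Pretend accelerometer magnitudes sampled from a phone (arbitrary units).
    noisy = [1.0, 1.4, 0.9, 1.6, 1.1, 2.8, 1.0, 1.3, 0.8, 1.5]
    print([round(x, 2) for x in moving_average(noisy)])
```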

The CRS report warns, “AI is enabling increasingly realistic photo, audio, and video forgeries, or ‘deep fakes,’ that adversaries could deploy as part of their information operations. Indeed, deep fake technology could be used against the United States and U.S. allies to generate false news reports, influence public discourse, erode public trust, and attempt to blackmail diplomats.”

DARPA’s Media Forensics (MediFor) program is seeking to develop AI that provides automated assessment of the integrity of an image or video, and to integrate these capabilities into an end-to-end media forensics platform, according to its website. “If successful, the MediFor platform will automatically detect manipulations, provide detailed information about how these manipulations were performed, and reason about the overall integrity of visual media to facilitate decisions regarding the use of any questionable image or video,” the agency says.
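
That description suggests a pipeline that runs several manipulation detectors on a piece of media and reasons over their outputs. Here is a minimal, hypothetical sketch of that aggregation step; the detector names and scoring are invented, and DARPA has not published the platform’s internals in this form.

```python
# Minimal, hypothetical sketch of aggregating manipulation-detector
# outputs into a single integrity score, in the spirit of the MediFor
# description. Detector names and scores are invented for illustration.

from dataclasses import dataclass

@dataclass
class DetectorResult:
    name: str                  # e.g. "splice_detector" (hypothetical)
    manipulation_prob: float   # 0.0 = clean, 1.0 = certainly manipulated
    note: str                  # human-readable explanation of what was found

def integrity_score(results: list) -> float:
    """Overall integrity: 1.0 minus the strongest manipulation signal."""
    if not results:
        return 1.0
    return 1.0 - max(r.manipulation_prob for r in results)

if __name__ == "__main__":
    results = [
        DetectorResult("splice_detector", 0.82, "inconsistent lighting near face"),
        DetectorResult("metadata_check", 0.10, "EXIF data looks consistent"),
    ]
    print(f"integrity: {integrity_score(results):.2f}")
    for r in sorted(results, key=lambda r: -r.manipulation_prob):
        print(f"- {r.name}: {r.manipulation_prob:.2f} ({r.note})")
```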

The Air Combat Evolution (ACE) program “seeks to increase trust in combat autonomy by using human-machine collaborative dogfighting as its challenge problem,” according to the DARPA website. “ACE will apply existing artificial intelligence technologies to the dogfight problem in experiments of increasing realism,” it says.

Last week, DARPA concluded its AlphaDogfight Trials, a three-day competition that demonstrated advanced algorithms capable of performing simulated, within-visual-range air combat maneuvering.

Col. Dan Javorsek, program manager in DARPA’s Strategic Technology Office, said, “The goal was to earn the respect of a fighter pilot – and ultimately the broader fighter pilot community – by demonstrating that an AI agent can quickly and effectively learn basic fighter maneuvers and successfully employ them in a simulated dogfight.”

Heron Systems, a small, Maryland-based, woman-owned company, won the trials, defeating seven other companies’ F-16 AI agents to reach the main event and then beating an experienced Air Force F-16 pilot 5-0 with aggressive, precise maneuvers the human pilot could not match.

As ACE continues, DARPA plans “more complex, heterogeneous, multi-aircraft, operational-level simulated scenarios informed by live data, laying the groundwork for future live, campaign-level Mosaic Warfare experimentation,” according to its website. Eventually, the goal is to shift the human role from single pilot to mission commander where one pilot is teamed with unmanned accompanying systems, as envisioned in the Air Force’s Loyal Wingman AI program.

Years ago, the Air Force Research Lab completed tests of its Loyal Wingman program, which paired an unmanned F-16 with a piloted, more advanced F-35 or F-22. The human pilot in a mixed manned-unmanned formation can issue general commands to the pilotless aircraft, such as to attack or to join the formation. But the unmanned aircraft can also carry out a ground-planned attack mission, such as jamming electronic threats or even delivering weapons, although the accompanying human pilot could override them.

U.S. policy does not prohibit the development or employment of so-called Lethal Autonomous Weapon Systems (LAWS) — weapons that can detect, select, and engage targets with little to no human intervention. The U.S. military does not currently have LAWS in its inventory, but neither are there legal prohibitions on developing them.

A DOD directive (Number 3000.09), issued in 2012 and updated since, sets as policy that “Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

A December 19, 2019, CRS “In Focus” report said, “A growing number of states and nongovernmental organizations are appealing to the international community for regulation of or a ban on LAWS due to ethical concerns.”

This updated CRS report says, “Many experts fear that the pace of AI technology development is moving faster than the speed of policy implementation… Congress may consider debating policy options on the development and fielding of Lethal Autonomous Weapons Systems (LAWS), which may use AI to select and engage targets.”

With everything else that’s going on, I doubt either Congress or the White House is prepared to fast track this particular subject, serious as it is.

