As Government Leans in on AI, Security is Big Concern

By Ken Hughes

Senior Cyber and Technology Editor, The Cipher Brief

CIPHER BRIEF REPORTING – As U.S. government executives work to implement the Biden Administration’s new Executive Order on Artificial Intelligence (AI), they are grappling not only with the global impact of the world-changing technology, but also with a range of threats to U.S. businesses and national security — threats predominantly originating from China.

“The breakneck speed at which AI is evolving and spewing out innovations,” noted CISA Executive Director Eric Goldstein during the Cyber Initiatives Group Winter Summit this week, brings with it “some real risks that we are going to unlearn some of the lessons of the past few decades” on how to securely develop and deploy software. 

The U.S. is leading a global effort of government agency partners — including the Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), the National Security Agency (NSA) and agencies around the world — to create a baseline to “develop, design, maintain, and deploy AI systems,” according to Goldstein, “in the safest way possible.” That means that rather than loosening security standards for AI, as some have suggested, the technology demands “a greater level of scrutiny, security, and control” to ensure the risks of unauthorized access or use are effectively managed.

FBI Deputy Assistant Director for Cyber Cynthia Kaiser agreed, saying at the same summit that the Bureau is approaching AI in two distinct ways. The first is the familiar defensive task of fending off cyber threats, an area of risk made only more potent and sophisticated by the capabilities AI brings to the table. The second is shielding American AI innovators from the ever-present danger of cyberespionage and intellectual property theft.

The greatest threat, according to the experts, is posed by China, both as a competitor in AI research and development and in stealing U.S. technology secrets.  According to Kaiser, “what we’re worried about is that next step that’s coming and how do we defend against it.”


AI’s unique ability to turbocharge malicious uses and to springboard from present threats to even greater perils clearly concerns the FBI. The prospect of “destructive attacks becoming better,” Kaiser noted — in the wake of already alarming incidents like the Volt Typhoon network implants, or the more recent probing of U.S. water utilities by Iranian threat actors — is an issue of “scaling up” that is unique to AI.

Fortunately, experts note that there is a “defenders’ advantage” in the deployment of AI cybersecurity solutions, such as writing and testing protective code at much greater speed, or employing AI to automate certain system monitoring functions to free up time and reduce expenses. But, as Goldstein said, that advantage is “very fragile,” with the expectation that adversaries will continue unabated in developing and evolving their own capabilities, often unrestrained by the ethical safeguards found in the West. Goldstein pointed out that today’s advantage could be squandered “if we don’t design and deploy AI systems themselves securely.”

Kaiser addressed the executive order’s requirement for all Federal agencies to develop internal guidelines and safeguards on the use of AI, a topic uniquely sensitive at the FBI.  In response to this mandate, Kaiser noted that the FBI has already created an AI ethics council to ensure that AI is employed in ways “that preserves the legal process, that preserves privacy among Americans…things that are really important to us at the FBI to protect.”

As for other elements of the executive order that are of keen interest at CISA, Goldstein pointed to three aspects: first, CISA’s coordinating role with Sector Risk Management (SRM) agencies to conduct risk assessments and provide guidance to each critical sector; second, the mandate to use AI systems to accelerate the detection of vulnerabilities on Federal networks; and third, CISA’s role in providing red-teaming guidance for AI systems, so that rigorous testing will lead to an understanding of weaknesses and how an adversary might exploit them.
