BOTTOM LINE UP FRONT – Chinese President Xi Jinping and U.S. President Joe Biden agreed in a meeting held in San Francisco this week, that Artificial Intelligence (AI) poses serious risks when used in military or nuclear operations.
While the two leaders fell short of a mutual declaration on the use of AI, their agreement over risks posed to national security is noteworthy and comes just weeks after President Biden issued an Executive Order on ‘Safe, Secure and Trustworthy’ AI with one of the centerpieces of the order focused on requirements that are shaping the ways that federal agencies design, acquire, use and manage AI systems.
With the U.S. functioning as a democracy and China as a Communist regime, China has the upper hand on how closely tied its government and commercial sectors are when it comes to AI development and implementation.
Cyber Initiatives Group Principal member and former General Counsel for the National Security Agency, Glenn Gerstell, says he believes that commercial firms in the U.S. recognize that “we can't deal with artificial intelligence on our own. We need the government's help. The challenge will be to find the right level of regulation to prevent harm while not stifling innovation." And Gerstell says that feeling is reciprocated in government. “I think the combination of technology and geopolitical developments has brought about a convergence of interests, not totally, but brought about some convergence of interests, between the government and the private sector in a way that just didn't exist a decade ago.”
The Cipher Brief reached out to a number of experts, many from The Cyber Initiatives Group, to get a broad sense of how cyber professionals are digesting the new Executive Order and what it means for the use of AI moving forward. But first, some context.
THE CONTEXT
- Overall, the EO is seen as building on previous voluntary agreements with major tech companies that committed, among other things, to opening their algorithms to security testing and sharing information across the industry prior to launch.
- Social protections against AI excesses also figured prominently in the EO, including tasking the National Science Foundation with propelling AI technologies that feature “privacy by design” rather than reactionary privacy measures that follow a disaster.
- Another key move by the administration was its use of the Defense Production Act to mandate red-team safety reporting by commercial AI developers in their training of “any foundation model that poses a serious risk to national security, national economic security, or national public health and safety.”
- Missing from the EO was detailed guidance to the defense and intelligence communities, but that doesn’t mean they’re left out. The EO notes that a National Security Memorandum will follow the October directive in order to “ensure that the United States military and intelligence community use AI safely, ethically, and effectively in their missions, and will direct actions to counter adversaries’ military use of AI.”
- Privacy concerns also factor into the funding of a Research Coordination Network to strengthen cryptography tools, and the strengthening of guidance to Federal agencies on the use of private information obtained from data brokers.
- Loosening visa requirements for highly skilled immigrant workers is a key EO provision aimed at promoting U.S. competitiveness and technological advances in AI development.
- Boosting public confidence in government communications through Federal development of content authentication and watermarking methods to defeat deepfakes and other deceptive materials will be led by the Commerce Department.
- Against the generally positive reception of the EO, critics also found fault, including the one widely noted weakness that executive orders, by their very nature, can go only just so far. There was broad agreement that congressional legislative action is vital so that the EO’s safety measures can extend beyond federal agencies to the private sector and withstand legal challenges that are sure to follow.
- Other criticisms pointed to the lack of attention in the EO concerning computing hardware, particularly high-performance graphic processing units (GPUs); the absence of enforcement mechanisms in the EO’s guidelines on labeling AI-generated content; and the ambiguities left in the EO’s provisions for red-teaming, especially what the government can do if red-team results conclude an AI model is dangerous.
EXPERT PERSPECTIVE
The Cipher Brief asked each of these leading cyber experts about their first impressions of the Executive Order on Artificial Intelligence and what impact they see from the EO moving forward. Their Comments have been excerpted and lightly edited for length and clarity.
Glenn Gerstell, Former General Counsel, NSA & Principal, Cyber Initiatives Group
Gerstell is a Senior Adviser at the Center for Strategic & International Studies. He served as General Counsel of the National Security Agency and Central Security Service from 2015 to 2020. He has written and spoken widely about the intersections of technology and national security and privacy.
"This is a really sweeping White House action that reflects months of inter-agency discussions. It is far broader than most typical Executive Orders, which are more narrowly targeted and aimed at a specific problem. This EO is aimed at an entire technology, AI — which in turn implicates virtually every aspect of federal action, from contracting to intelligence and from the Treasury Department to the Agriculture Department.
I mention that because in painting with such a broad brush, there's understandably a desire to be all things to all people. And because you have to deal with everything and everybody, if you leaving something out, someone's going to complain. So, the EO inevitably includes something for advocates of privacy, cybersecurity people, immigration reform, labor force transition, and so on.
The danger with something that broad is that the specifics will get lost, and the U.S. government can't prioritize everything. The White House can't be ‘pedal to the metal’ on every aspect of how artificial intelligence is going to be used.
If it decides it really wants to focus on the intelligence and defense industry, that’s fine but what about the financial sector or agriculture? This is going to put a strain on just the sheer ability of the executive branch to get its hands around this. And the way the executive branch does that, is to sort of disperse all this and say to each department, "You go figure it out, here are the big guidelines."
Of the various targets or recipients or audience for this Executive Order, the intelligence community is arguably the farthest advanced in the use of AI and already recognizes many of the concerns about the use of AI. So, there are IC guidelines on this already. When I was at NSA, we had internal rules, and the rest of the IC does too, but in varying degrees. The Intelligence Community has been using AI for well over a decade in a very sophisticated way, but admittedly not generative AI, like ChatGPT.
I think this just reinforces, in a good way, some of the work that the intelligence community has already done. And for other departments, it will be a big change because they're not anywhere near this far along."
Suzanne Spaulding, Former Under Secretary for Cyber & Infrastructure, Department of Homeland Security & Expert, Cyber Initiatives Group
Spaulding is a Senior Adviser for Homeland Security and Director of the Defending Democratic Institutions project at the Center for Strategic and International Studies (CSIS). She also serves as a member of the Cyberspace Solarium Commission. Previously, Spaulding led the National Protection and Programs Directorate, now called the Cybersecurity and Infrastructure Security Agency (CISA).
"It was interesting that the EO largely takes the military and intelligence piece out and says that will be dealt with separately. Still, there's obviously a lot in the Executive Order itself around the national security pieces outside of the technical military and intelligence.
It was very smart to set up these government-wide governance structures to try to get out earlier on AI than we did on cyber, for example, where we put in place this White House structure long after we should have. There's a clearer recognition that AI can't be put in one place, that it permeates everything and should permeate everything that the government is doing. Therefore, you need these strong coordination mechanisms, and you need governance structures that will move fast and to be able to allow all of the federal departments and agencies to benefit from the progress, insights and best practices developed by any one of them.
The discussion around the role of CISA was further proof of what I have long argued, which is that it's a good thing for CISA to have the kind of “all hazards” approach to security and resilience of critical infrastructure and not just cyber. A lot of folks think that CISA is just the cyber agency, but in fact, it retained the physical piece of securing and making sure that our critical structure is resilient. I think you see the wisdom of that in the instructions to CISA and the relevant sector risk management agencies and other federal departments and agencies…to work with CISA, to look at implications for physical security, cybersecurity, and other kinds of potential disruptions and problems in critical infrastructure.
Also, with regard to AI, we really have to think about radical transparency. In the early days of AI development, when folks were suggesting that the only way AI works is through this mysterious black box and it is impossible to go back and figure out how it arrived at these answers, that may be problematic. I think the pushback on that is really important. So, the look at reporting requirements across the board on AI development and the requirements for red-teaming and reporting on how these systems are being set up and how they're being used and the kinds of results they're producing, all of that is going to be really important. Transparency is going to be critical."
Mark Montgomery, Former Executive Director, Cyberspace Solarium Commission & Principal, Cyber Initiatives Group
Rear Adm. (Ret.) Mark Montgomery is a senior director at the Center on Cyber and Technology Innovation (CCTI) at the Foundation for Defense of Democracies. He directs CSC 2.0, which works to implement the recommendations of the Cyberspace Solarium Commission, where he previously served as executive director.
"The EO identified a number of critical issues and I think that it did a good job in describing the challenges with AI that we have both inside the government and in building a public-private partnership. It gave some good tasks to the government. I am concerned about two things, one, that a national security memorandum is directed to be put together on this by the NSC. In that sense, It's an unfinished product. Second, there are a lot of unfunded taskings, not uncommon in an EO, but it introduces risk to timely execution.
It's a good high-level framework for the U.S. government approach to AI. It has an adequate focus on security and on the potential threats. It definitely pushes the idea that we need to continue innovating and acknowledges that government should be a rapid adapter of standards for AI. It talks about the impact of AI on economic sectors like healthcare, transportation, telecommunications.
Where I think it becomes challenging is when the EO starts to describe how the government's going to deal with the private sector.
The EO talks about reporting and compliance requirements for the private sector, but the government must first build up the infrastructure to conduct this. While it doesn't say “thou shalt regulate,” it says we expect federal agencies to come back to us with recommendations on what kind of regulation may be required in each sector. It innovatively uses the Defense Production Act to push that. I think another good concept is that it moves forward the “know your customer” reporting requirements for cloud service providers.
By government standards, they got the EO out pretty fast, and I think they addressed a lot of the most important issues. What is in that national security memorandum will have to deal with how you prevent the worst-case uses of AI, things like someone trying to use generative AI to develop bioweapons.
Finally, the idea that “go regulate where you need to” — a lot of these agencies have weak or non-existent regulatory authority. For example, who's regulating the cloud right now? In that regard, I'm a little worried. Overall, I give the EO an ‘A’ for speed and effort, more of a ‘B’ for content, and ‘C’ for execution. Without resources, without clear regulatory authorities, I think they're going to have some trouble. The national security memorandum can fill some of that – but not the resourcing or authority gaps."
Michael Frank, Senior Fellow, Wadhwani Center for AI and Advanced Technologies, Center for Strategic and International Studies
Frank is a Senior Fellow at the Wadhwani Center for AI and Advanced Technologies, Center for Strategic and International Studies where he focuses on geopolitics and advanced technologies. He previously led the Economist Intelligence Unit's Asia technology policy research, where he pioneered applications of machine learning to macroeconomic and geopolitical research and analysis.
"The EO covers just about every intersection of AI and government and does so by putting broadly acclaimed principles at the core of the action: fairness and equity, competition, cybersecurity, talent, and safety. There is something for all stakeholders to point to in the EO and say, "our voice was represented." At the same time, the EO isn't overbearing. The requirements that apply to all agencies are eminently reasonable, such as mandating the appointment of a Chief AI Officer.
The EO is consistent with what we have been hearing from many executive agencies, that they do not believe they need new regulatory powers from Congress to address AI in their respective jurisdictions. They believe they have the relevant authority now to deal with AI in a constructive manner. Still, there is one big gap in this EO that only Congress can fill and that is visas for foreign AI researchers who want to come to the U.S. The administration is doing as much as it can through promoting the U.S. as a destination for AI talent, streamlining the visa process, and developing AI talent in the government, but Congress will need to raise the quotas for high-tech visas. Demand greatly exceeds supply, and the U.S. is failing to retain and attract leading AI talent not because of their lack of interest but because of our own negligence."
Hodan Omaar, Hodan Omaar is a Senior Policy Analyst at the Center for Data Innovation
Omaar’s current focus is on AI policy. Previously, she worked as a senior consultant on technology and risk management in London and as a crypto-economist in Berlin.
"Amid a sea of chaotic chatter about how to implement appropriate guardrails for AI, the new Executive Order sets a clear course for the United States. It provides industry with long-awaited guidance for AI oversight, including advising tech companies to adhere to the NIST AI risk management framework, watermark AI-generated content, consider the data used in model training, and incorporate red-teaming into testing.
However, while the general direction for AI oversight is clear, the specifics of implementation remain uncertain, which means both companies and regulators will need to navigate uncharted waters. For example, the EO calls for new standards for red teaming, biological synthesis screening, and detecting AI-generated content. These are all active areas of research where there are no simple solutions. Policymakers often forget that the reason industry hasn’t already adopted certain solutions is because those solutions don’t yet exist. This is one reason why it will be essential for the United States to continue to fund critical AI research in these areas.
Still, the focus of the EO on AI adoption is encouraging. The EO rightly includes steps to harness AI’s potential in education and healthcare, but achieving AI adoption at scale requires much more significant investment and detailed policy initiatives than the EO currently envisions. Similarly, the EO rightly focuses on promoting AI adoption in government, but the government should prepare for a marathon—not a sprint."
by Senior Cyber & Tech Editor Ken Hughes
Get involved in the conversation by signing up for the Cyber Initiatives Group newsletter and participating in expert-led discussions on cyber and AI. Find out more at the Cyber Initiatives Group.
Read more expert-driven national security insights, perspective and analysis in The Cipher Brief