SUBSCRIBER+ EXCLUSIVE INTERVIEW - The Cipher Brief CEO and Publisher Suzanne Kelly spoke Wednesday with Ben Buchanan, advisor to the White House Office of Science and Technology Policy and the Biden administration’s Special Advisor for Artificial Intelligence (AI). Their conversation was part of the Cyber Initiatives Group summit, and it covered the opportunities and challenges of AI, threats to the nation’s infrastructure and the integrity of elections, and what the White House is most focused on when it comes to the world of AI.
This conversation has been lightly edited for clarity.
The Cipher Brief: Let me start a bit broadly and ask, What do you see as the leading challenges when it comes to national security that AI is best able to address right now?
Buchanan: I think there's a lot we can do with the technology, and one of the things we're working on right now is the National Security Memorandum on AI, and that's making sure that we harness the technology as an intelligence community, as a Department of Defense and broader national security community, in a way that advances American interests but also in a way that lives up to American values. Basically every part of our national security community can put this technology to work. We've got to do it in a way that's responsible, but we see a lot of opportunities really across the whole enterprise. And that document's meant to make sure we're seizing those opportunities.
The Cipher Brief: That's a memorandum that you're currently working on?
Buchanan: Right. It's due at the end of July, so you can hold us to that deadline.
The Cipher Brief: OK, great. The president issued an executive order on AI last October that was really focused on seizing the opportunity or the promise and managing the risk, but even the order acknowledged that a lot of work really needs to be done to be able to do this effectively. How's that going so far?
Buchanan: Good. So we tasked out a number of deliverables in that executive order. We have met a hundred percent of them on schedule. We did a 90-day update not too long ago where we had a tracker that showed everything that was tasked that was due at 90 days, and we'd done all of those things. Probably the most significant one, where we're doing a lot of institution-building, is setting up the AI Safety Institute at the Department of Commerce. And we've brought over a great inaugural director, a woman named Elizabeth Kelly who worked on the executive order when she was at the White House, and she's building a very strong team there to develop standards and capability evaluations and guidance to make sure AI systems are safe so that we can seize the opportunities, exactly the way that you just mentioned.
The Cipher Brief: A big part of the way forward when it comes to the development and security of AI involves working with the private sector. What have you learned in conversations that you've had with leaders from the private sector since this executive order was released?
Buchanan: The work with the private sector has gone back before the executive order. We had the companies come to the White House and make voluntary commitments to the president about what they were going to do, even before the executive order, to make sure the systems were safe, secure and trustworthy.
But I think both before and after the executive order, one of the things I really value the most from the private sector is a very detailed technical understanding of how fast this technology is moving and what it is capable of. And I think this gets to a broader point that's really important, which is this is really the first technological revolution, certainly in probably at least a hundred years — maybe you can go back further — where the U.S. government is not the one inventing the technology. If you look at the early days of the internet or the space race or microprocessors or radar or certainly nuclear weapons, you had a very strong government hand in shaping the direction of the technology. Now, you did have that in AI in the 1960s. But given the current paradigm we're in today where it's very private sector-driven, I think we really value the conversations with the executives and also with technical experts at the companies to understand what the technology is capable of and how we can manage it.
I should say we're also talking to a wide range of folks beyond just the private sector. It's also civil society and academia and the like.
The Cipher Brief: It's an incredible challenge. As you laid out, it's the first time that the U.S. government really has to get very heavily involved in something that is rapidly being developed outside of the U.S. Is it changing the way you're thinking about approaching what the government can and should do here?
Buchanan: I think it is changing the way we recruit people to the government, because that is fundamental to making sure we can meet the moment. And with the executive order, we launched AI.gov, which really was one-stop shopping for all the AI jobs in the government. You’ve been around government long enough to know, it is a surprisingly herculean effort to collate all the different AI jobs from all the different agencies and put them in one place, which we did. And we also expanded our fellowship programs and our rapid-hiring programs to bring in a lot of talent. A good example of an agency that's doing this is the Department of Homeland Security, which has created an AI corps where they've hired 50 people or so to come in and be AI experts with technical backgrounds and the like. And we have been flooded with interest on AI.gov and on the various fellowship programs from folks with technical skills who want to come work for the government.
So I think that's really been the biggest change for us, is making sure in addition to the great people we have, we're getting tech-savvy folks who maybe have never worked in government before, never thought about working in government before, but are responding to this moment.
The Cipher Brief: I was just on the West Coast at a big investor summit, and one of the speakers was saying that one of the things that is really limiting the U.S. right now in terms of its competitiveness and technology is immigration policy. Are you talking and thinking about the future of immigration policy when it comes to how it's going to impact the U.S.’s ability to stay competitive on AI?
Buchanan: For sure. So we have a lot of homegrown talent. It's important we have an education system that develops that homegrown talent. America has a huge percentage of the world's great universities, which develop a lot of our own talent but also attract a lot of talent, and I think the executive order tries to make sure we're bringing that AI talent to the United States as well. And there's a section of the executive order that tries to streamline, through State and through DHS, our processes for bringing in talent from abroad.
As you know, there's pretty significant limits to what you can do through executive action alone in this area. Mercifully, Congress and immigration are not in my lane, so I don't do as much with that. But I think it's fair to say that we recognize this as a huge advantage the United States has. It has been historically, it will continue to be in the age of AI, and we're trying to lean into getting as much talent to the United States as possible.
The Cipher Brief: We're really still in the early days of AI advancement. There's huge promise and risk as you already know, as the U.S. is really in an AI race with China. What is the White House doing with regulators and others to help us move faster when it comes to AI defense and innovation?
Buchanan: I think there's an overarching point here, which is the talent point, right? So we want to have AI talent in the government to regulate the technology. And that's not just at the White House. It's also throughout the (government). So that's the first thing.
But then in terms of the broader question of how do we actually regulate this technology, we've taken a two-pronged approach. The first is, for applications of the technology in particular regulated sectors — such as healthcare, financial services, housing, those sectors that have been regulated with good reason for a very long time — we have continued to vest the authority to regulate AI in those sectors with those regulators. And the logic here is pretty straightforward, which is if something is illegal without AI, it is illegal with AI. That's true for discrimination and bias in the healthcare system and things like that, and certainly it's true for things like AI safety in the healthcare context and elsewhere. So that's the first part.
Then the second part is, How do we manage the safety and security of what are often called frontier systems, these really powerful general-purpose systems? And this is where we've set up the AI Safety Institute at the Department of Commerce. Now, the AI Safety Institute is not itself a regulator. It's at NIST (National Institute of Standards and Technology), which is not a regulator. But it sets standards and guidance, or is in the process of setting standards and guidance, that companies have voluntarily said they will sign up to in terms of their voluntary commitments. And then we also use the Defense Production Act, also through the Department of Commerce, to make sure the companies are performing the safety tests that they said they would perform, and they're sharing those results with us. So it's not a neat one-stop shopping for regulation, but in our view, nor should it be, given how broad AI is.
The Cipher Brief: It seems that chips are all produced by just a handful of companies. And it takes years to really be able to create the infrastructure to be able to produce those chips inside the United States. How can the U.S. build more chip foundries and plants in the US? What are the biggest challenges there?
Buchanan: The biggest challenge, of course, is it's very hard to do, as your question suggests. And this I think is one of the most complex, if not the most complex thing we do as a species. It’s incredibly intricate, truly amazing work.
The president led the way in getting the CHIPS and Science Act passed in the summer of 2022, and the Department of Commerce and other agencies have been implementing that. And I think we actually saw some big grants go out to companies like Intel and others just this week to make good on this promise of bringing the chip manufacturing and chip supply chain back to the United States. It will take years to do, but we've made some big investments that are very strategic in making sure we have production here in the United States because it is so vital.
The Cipher Brief: How would you rate this on the priority scale for you on a day-to-day basis?
Buchanan: We have a whole program office at the Department of Commerce, the CHIPS Program Office, that manages this, administering, I think, $40 or $50 billion, so this is their job. But I think it's fair to say we work closely with the Department of Commerce on AI, including on this. And we know that American competitiveness in AI depends on chip manufacturing and chip supply chains, because AI is so inextricably intertwined with that technology.
The Cipher Brief: Axios this week spotlighted a growing front for nation-state and industrial cyber espionage: breaking into AI developer systems to steal company secrets. Experts now are predicting that this new front is going to grow in the next five years to equal the threats that are faced by semiconductor companies. What can be done to strengthen AI system defenses, especially in startup firms that may lack solid security systems?
Buchanan: There's no doubt, we've seen recent cases, as you suggested, where leading AI firms are a target of espionage from abroad. It makes perfect sense: this is an incredibly valuable, important technology. There are key algorithmic secrets each of these firms has. So I think every firm in this business should recognize that top-tier cybersecurity is a key part of being in that business. And that's true in AI in the same way that's true in the defense industrial base, and the same way it's true in the financial sector and the like. So I think certainly we can encourage and work with firms and have done some of that at a reasonably high level. But at the end of the day in the United States, the private sector carries a lot of the burden of defending itself, given how we've set up our cybersecurity system. I'm not sure these AI firms or any firm want U.S. government people defending their systems from the inside, with all the complexity that would entail, but there's no doubt the threat is real here.
The Cipher Brief: Let's talk about AI threats specific to China. How are you looking at competitiveness on a national security scale with China when it comes to AI?
Buchanan: There's a couple of different pieces of it. I think let's start with probably the most significant action we've taken related to AI in China, which is the chip controls that we put in place in October of 2022 and then updated in October of 2023. And those controls were motivated by a desire to stop China from using this technology to modernize its military, which it is certainly trying to do, and stop China from using this technology to oppress its people.
And China is not a democracy and there certainly is, I think, a lot of potential for technology like AI to be used to entrench autocracy, to entrench repression, to violate human rights. And we are very worried about that. We’re worried about that abroad, and we're also making sure that as we use this technology, we don't use it in ways that are inappropriate and the like. So I think that's probably the area of focus for us with China right now: military modernization, and the domestic repression.
The Cipher Brief: Rob Joyce is just leaving an incredible career of service at the National Security Agency. He was saying that he's pretty comfortable in terms of the nation's ability to ward off AI-driven disinformation in the runup to this November's presidential election. How are you feeling about that in terms of your confidence that the government's going to be ready to ward off efforts on that front?
Buchanan: A good rule of thumb in the U.S. government is never to disagree with Rob Joyce because he's almost always right. Rob certainly has had a tremendous career. And it's sad to see him go.
We have to continue to be vigilant on the disinformation front. And I think there's what you might call “capital D” disinformation and state actors and the like that AI might enable, but there are also things like fraud; a lot of it is the same technology, but it could affect Americans every single day. And we’ve tried to take steps like banning voice cloning and so forth to make sure we are guarding against that. And part of the voluntary commitments that companies made to the president…is a commitment to watermarking AI outputs and the like. So we are continuing to try to encourage that and facilitate that at a technical level and at an industry level, because I think it's so important to guard against the threat of disinformation in a big-picture, foreign government sense, but also in the sense of everyday fraud.
The Cipher Brief: We just hosted a virtual event at which experts were really worried about scenarios in which AI-created false narratives surface in that very quick time crunch, 24 to 48 hours before people go to the polls. And there might not be the response time to say that something is false. Are you thinking about rapid-response measures using AI to counter some of these issues?
Buchanan: I don't know how we would use AI, but I certainly agree with the concern here. If you look at past activities, I think it was before a French election a number of years ago when there was an effort, right before the polls, to put out disinformation. So I think we have to be ready for the whole range there.
The good news is this is something that we are well aware of and Caitlin Durkovich and her team at the White House lead this effort. So I would defer to them on the particulars of what it is they are doing for this problem, but suffice it to say they're very aware of it and are preparing.
The Cipher Brief: How do you see the public's role in this? Do they have any chance, without using other technology, of telling what's fake and what's not? What do you see as the public's responsibility in fighting disinformation?
Buchanan: I think certainly there is an element of public resilience and critical thinking and all of that that's going to be very important. And I suspect that we are rapidly entering a world in which the notion of a deep fake is becoming very widely understood. People will see them themselves not just in a malicious context, but in a movie context or in a YouTube context and the like. So, folks I think are developing an understanding of what AI-generated video can do and audio can do. And I think that's important for building this resilience and understanding of the world we're living in.
The Cipher Brief: Representatives of over 40 countries, signatories to the political declaration on responsible military use of artificial intelligence and autonomy, are participating in a two-day meeting this week at the University of Maryland. I'm wondering what outcomes does the U.S. have in mind for this inaugural meeting on ethical uses of AI by military organizations?
Buchanan: The Department of Defense, to their great credit, I think, has thought a lot about autonomy in warfare, autonomy in weapons systems, long before we had ChatGPT. For that matter, long before we even had the algorithms and the transformer that enable ChatGPT. So the Department of Defense has something called (Directive) 3000.09, which is the binding guidance in this area of how autonomy fits into warfare to ensure that we're using this technology in a way that's appropriate. And the political declaration on military use of AI…that is meant to begin this conversation with the broader international community to try to set some norms of behavior here in a way that makes sure the technology is being used appropriately in combat and in preparation for combat. So I think this is a kickoff opportunity to set those norms.
The Cipher Brief: I thought I might wrap up our conversation with a scenario that came up on The Cipher Brief’s radar. During recent testimony before Congress, an expert from CSIS posed a really interesting question: What would the impact on national security decision-making have been if AI had been available during the Cuban Missile Crisis? It was Benjamin Jensen who drew out a couple of possible effects of what that might've been like, saying that with AI applications ranging from imagery recognition to generative analysis of adversary intentions, there would have been a tendency to speed up the crisis, when it might make more sense to slow down decision-making and be a little bit more deliberate. But I'm just curious if you have thoughts on things like that and what goes through the mind of the White House special advisor on AI.
Buchanan: I have not thought about the Cuban Missile Crisis example. It's an interesting one, but I think it's fair to say that part of the intelligence work that was fundamental to the Cuban Missile Crisis probably could be transformed by AI. The ODNI (Office of the Director of National Intelligence) has an effort to use AI for intelligence analysis as part of a broader human-led cycle of intelligence analysis. And I think that's what Ben's getting at in that example.
Imagery analysis is one of the areas that is particularly ripe for this, which of course was critical to the Cuban Missile Crisis, because computer vision is a key part of AI. The National Geospatial-Intelligence Agency is an agency that's using a lot of AI and is very public about how they're trying to use more AI.
But I think if you go back to the history of the Cuban Missile Crisis, and it's been a while since I've done this, a lot of the challenges were about communication between the two sides. There weren't sufficient crisis communications, and there weren't sufficient mechanisms to pass messages. I think at one point, they were passing messages through the TV correspondents who were covering the crisis. So I think there probably is a lesson we can draw from the Cuban Missile Crisis, which applies even more urgently today, and that is the importance of crisis communications and being able to talk respectfully and responsibly to foreign governments to alleviate crises.
And we've tried to work very hard to build this technology out. And in fact, President Biden and President Xi talked about managing AI risks and the need to do it together when they met in San Francisco in November. So there's a recognition on our side at the most senior levels of the government, of the importance of imagining this technology and how it could change very fraught and very important international dynamics.
The Cipher Brief: Let me just ask in closing, What's top of mind for you? Do you have three-month, six-month, nine-month targets, where you say this has to be done? Are there things that you see as more critical than others when it comes to the national security components of artificial intelligence?
Buchanan: I think that two of the big things coming up, both tasked by the executive order, are what's called the Office of Management and Budget M-Memo, or management memo, which is due at the 150-day mark, that's due next week. And that governs how the federal government, aside from the national security community, will use AI. And this is an opportunity for us to operationalize a lot of the ideas we put together for the Blueprint for an AI Bill of Rights, which we published, even before ChatGPT, in October of 2022. So that's one.
And then the other one is, as mentioned at the top of our conversation, the National Security Memorandum on AI, which I think is really important for how we're going to use it on the national security side and also how we're going to counter adversaries and competitors from using it. So I think those are probably two deliverables — one due in a week, one due in three or four months or so — that are very much on our radar screen, and we're working very hard on them with the broader interagency.