SUBSCRIBER+ EXCLUSIVE INTERVIEW — Almost since the term “Artificial Intelligence” (AI) was coined, assessments of its impact have included forecasts of staggering ability to improve productivity and quality of life, together with frightening scenarios of its disruptive and dangerous potential.
National security experts see that dichotomy and are racing to harness AI for intelligence gathering, mindful that many adversaries have other ideas.
The state of these efforts, and of AI itself, was the subject of a high-level conversation during The Cyber Initiatives Group (CIG) Summer Summit. Iranga Kahangama, Assistant Secretary for Cyber, Infrastructure, Risk, and Resilience Policy at the Department of Homeland Security (DHS); The Hon. Susan Gordon, former Principal Deputy Director of National Intelligence; and Teresa Shea, former Director of Signals Intelligence at the National Security Agency (NSA), joined Microsoft's Kelly Bissell in a candid conversation about the potential perils and opportunities for good posed by AI-driven technologies.
While government officials and outside experts disagree as to just how worried we should be about the power of AI, this much is clear: the technology will have outsized impact in many corners of commerce, critical infrastructure and national security. “There’s no putting this genie back in the bottle,” Gordon said. “There’s no longing for a simpler time when we don’t have to wrestle with these issues.”
This excerpt of the briefing has been lightly edited for brevity and clarity.
Bissell: What’s the good and the bad of AI? What can we learn at this point?
Kahangama: I think the good is that it's such a game-changing technology. It's a revolution in and of itself. And there's good and bad with it.
All the benefits in the economy and the security world, when you talk about automating security and automating tasks and being able to more nimbly identify security risks — I think those positives outweigh the negatives right now. Doing better surveillance of goods and cargo or identifying license plates for law enforcement. And sifting through big swaths of data and really being able to leverage technologies for the array of missions that DHS has to cover. We want to be an exemplar of how that's used.
The bad is that there could be some over-eagerness to adopt some of these technologies. We don't want to be first to market, but first to secure.
I also think there's a need or demand from some critical infrastructure operators who feel like they must use it, but if they're not leveraging or securing these technologies in a way that is appropriate, you're going to have things like data leakage or data loss or not understanding what the guardrails might be. So there's some risk of early, immature and premature adoption, but I think that's going to be few and far between.
And then likewise, the bad is that where there is an existing risk, AI is going to put it potentially on steroids, expand it, make it even harder. Whether that's phishing emails or deepfake video or audio, those are all amplified as well.
Shea: It's a dual-use technology. Like most technologies, it has both the good and the bad.
We've certainly seen that play out in the cybersecurity space. It's about the speed that it's bringing to both the defender and the attacker. And when the defense is able to use it, that's the good – they're able to do better defense faster, and maybe stop those threats or identify who the culprits are.
The bad is that the attackers are using this tool too. Data can be manipulated, it can be mislabeled, it can be poisoned. You have the adversarial AI attacks, which we've seen a lot of, especially against autonomous systems such as autonomous cars. And then the privacy concerns. Data often contains personal information, and so there's always going to be privacy concerns.
But to the good: this could have huge impact on big issues, our national security and in the intelligence space, being able to do open-source analysis, being able to do analysis in languages much faster, multilingual approaches, if you will.
Across every sector you could probably come up with an example – healthcare diagnostics, finance, fraud detection, et cetera – where AI can make a difference for the better.
Gordon: I certainly am on the side of there's no putting this genie back in the bottle. There's no longing for a simpler time that we don't have to wrestle with these issues. There is no possibility either for the government or for the private sector to opt out. It's just going to move beyond you, and so you're going to have to do it.
I think the prospect of heretofore intractable problems becoming tractable with this technology is really exciting. Healthcare is one. Third-party risk management – that is just so phenomenally manual that there's no way our humans with questionnaires are ever going to get a hold of that. And I can imagine that being done for our own intelligence community.
The vexing issues of our time, where the volume and the speed have overcome our ability to do it manually, there I think AI is really exciting. It's just a great research assistant now, right?
The bad is we don't have the talent supply chain within organizations to be able to do all the critical decision-making. Notwithstanding all the really good things that are happening on the policy and the security and the standards front with AI, it is still a new enough technology with big risks, and we have a supply chain problem.
And another bad is that we still have the technologists thinking about this, more than we have the users really doing the hard pull.
And then there's what I would call the ugly. The ugly is this conundrum of, Is this really something that we can allow to be done without the government being involved?
There's still this hesitancy to match our skills together, for the government to be strong enough and open enough and the private sector to actually care enough and feel responsible for the security, to develop this in the way it needs to be developed. I think that is still something that we have not resolved.
Bissell: Iranga, what's your view on public-private partnership around AI? So that we can move with speed, but with those guardrails in place?
Kahangama: This is the first technological revolution that we have seen that is completely owned, operated, and facilitated by the private sector. People fail to realize that all the other major revolutions we've seen in technology, at least in the security space, originated from the government. The internet, nuclear weapons, all of these things originated from government.
And that's not a value judgment on whether it's better or worse – it's the reality. And from a government perspective, it puts us a little bit more in a reactive seat in terms of how we leverage those technologies.
But that fact in and of itself (shows) we need to be even better on the public-private partnership side. The good news is that we have made a concerted effort to really change what that relationship looks like with the private sector in a really good way. We've tried to learn and build from the lessons of how we engage with not just the private sector and not just the tech sector, but writ large, stakeholders in the state and local community and critical infrastructure, civil liberties – we've tried to bring it all together. And I think it's a really good example of how we can have candid and frank conversations.
Shea: There's a lot of fear among people out there today worried about losing their jobs, and this robots-taking-over panic going on. And we have to recognize that and train and upskill our people first and foremost, and keep them trained and upskilled.
And then when we talk about these guardrails, safety first, safety always, build that in. We learned this in cybersecurity, perhaps the hard way, that we need to build security and safety in from the design stage.
We have an election coming up, and the use of AI in deep fakes has gotten better and better, and we need to do something about that. The federal government has its toolbox and uses the tools in it – regulation, laws, et cetera. And we haven't thought that through enough yet, and there's not enough unity here in this space, but we're getting there.
Bissell: Are we in a little bit of an arms race between the good and the bad? Using this technology for good and bad?
Gordon: In terms of the actual integrity of the election process, the distance traveled since 2016 – largely through DHS, with industry partners and with state and locals – is absolutely remarkable.
That said, that doesn't mean relax and rest and anything like that. The flip side of that is I'm so worried about influence, in part because we know that it's a tool of our adversaries and competitors. We've seen it. And it's so difficult to counter, given the environment we have right now where our institutions aren't trusted.
And it's against that backdrop, where we know that the intention to undermine free and open societies is prevalent, and this is globally – you look at the elections in Europe, there's just a mistrust that makes it hard to have systemic resilience.
And despite all the good work of the private sector, this is not what they're focused on right now. As a matter of fact, many of them are keeping their heads down because they don't want to get into the fray. It would not take much to create a deep fake of someone saying something that would have a significant effect. If I could do anything right now, I'd probably launch a “The More You Know” campaign on TV for the public.
Because what we do know is that – and we learned this in Ukraine – if you're expecting it, you can counter it. But we haven't talked about that as much. And this technology could produce the kind of deep fakes that would cause destructive effect before we could counter it. We'd detect it, but the time gap would just be too much.
We’ve got to do something.
Kahangama: Because the administration and execution of elections is a state and local issue, we're also going to have to do a lot of work to make sure that state and local officials are aware of foreign influence activities. I think they are the ones that are considered the effective communicators.
They're the authoritative voice when citizens are looking for information about elections. It's not really the feds that are going to have an authoritative voice.
And those are simpler things. Are state and local election officials armed with how to have a communications plan? Do they have platforms and websites available to put out accurate information, receive questions? Do they have an understanding of tools and resources, how to have contingencies? How to have connections with their local FBI field office or their local rep?
Bissell: We're talking about deep fakes and we're only five-ish months away from the election. If the bad actors are using AI as a weapon, are we ready on the defensive side?
Shea: We all know there's a lot of research on tools to identify deep fakes in the government, and there's a lot of investment in the private sector in being able to identify manipulated media. Here's my call to action: Everybody that's got something, please open-source it. There have been several cases where open-source tools have been adopted.
Let's get them out there, even if they're not perfect, let's get the 80% solution out there. And then let's start using them to critically think about whether what we're seeing is real or not.
Gordon: On information quality, we are not making enough progress. Information quality, much like cybersecurity, is a universal good. It helps free and open societies. Everyone should care about it. I'm going to say we need more investment dollars on technologies and systems that are predicated on the idea of information quality.
I mean, I love that OpenAI and Apple are partnering. I'm not thrilled when they say, "And we will protect your data." Really? Neat, right? But show me.
There are so many information issues today that are information-quality issues that have nothing to do with influence. The Boeing Max was an information-quality issue. Bad sensors, bad data inputs, bad data to compare it to, and bad opportunity for pilots to be able to sustain the situation. So on AI, let's make a simultaneous national and global push for information quality. There is nothing about that that shouldn't be universally in everyone's interest.
Bissell: Other calls to action? What's the one bit of practical advice that everybody pursuing AI should think about?
Kahangama: One big thing, and I've tried to live this mantra: if you are in a position of thinking about AI and implementing or influencing policy or decisions about AI, just use it. I think there is a dearth of people actually using the technology. There are harmless ways to use it in your personal or professional life that really just acclimate you to it.
You should be able to dabble with it and you should be able to understand generative AI, what the limits are, really understand what it can be used for. Because engagement through usage allows you to open your mind about what the possibilities are, both positive and negative, and then the downstream effects of it. Just getting yourself on it, using it, will empower you in a way that opens up more questions. It makes you a more capable policymaker around it.
Gordon: Every entity should be thinking two thoughts right now.
What processes that I have are suitable for improvement with the (AI) capabilities that exist? Because there are companies and institutions that are making massive, significant, 10, 20, 30% improvements right now. So everyone should be saying, "What processes in business, back-room things, can I apply?"
And then the second is, What part of my business is going to be changed forever when this technology becomes present? And think about that. And if that means you're not in the right business anymore, then get out of that business.
Read more expert-driven national security insights, perspective and analysis in The Cipher Brief.