How Machine Learning Impacts National Security

Agenda Setter

In the latest edition of our State Secrets podcast, Cipher Brief COO Brad Christian talks with Brian Raymond, who works for Primer, one of our partners in The Cipher Brief's Open Source Collection, featured in our M-F daily newsletter.

At Primer, Brian helps lead their national security vertical, in other words their intelligence and military customers, but also their broader federal practice. That means he's involved in everything from sales to advising on product development to overseeing current customer engagements.

What follows is a lightly edited version of the State Secrets podcast.

The Cipher Brief:  Let’s dive in on these emerging technologies and how they relate to national security. We’re talking today about machine learning and artificial intelligence. These are terms that are starting to be more prominent in what we see on the news and what we see reported and talked about. But frankly, most people still don’t have a good grasp of what they mean. Can you give us a high-level perspective on what it means when we hear the terms artificial intelligence and machine learning and why do I need to care about them?

Raymond: That's a great question, and maybe I'll just back up a little bit in terms of what really sparked my interest when I joined Primer and why this field is so exciting. Previously I was at the CIA, primarily as a political analyst, as well as having served in a number of additional roles. From the CIA, I went down to the White House and served as a country director from 2014 to 2015, and so I was able to see the intelligence collection process, in terms of analysis and decision making, from a number of different angles. Fast forward to 2018, when I had the opportunity to join Primer. I am not a tech expert by any means. I don't have a background in machine learning or artificial intelligence, but what really sparked my interest was seeing what machine learning and artificial intelligence are beginning to be capable of doing to accelerate mission. At the highest level, machine learning is different from general artificial intelligence in that machine learning leverages what's called a neural net. I explain it simply as trying to replicate, in some ways, the structure of the brain, where you have neurons and synapses, to build very complex models, sometimes with hundreds of millions of nodes, in order to help automate some type of process that's done by a human today.

And so let's unpack a few practical examples of this. Some examples are probably familiar to most of the listeners of this podcast, and those are object detection and object recognition. We can feed an image into a particular algorithm and determine, okay, is that a dog or is that a car? That's a problem that has been a focus of research for probably the last 15 years or so, and the algorithms are getting really, really good, to the point where they're being fed into self-driving cars and weapon systems. There are a number of different areas where that technology is beginning to take hold. So instead of having huge teams of humans clicking 'dog' or 'car' and sorting different imagery, this can now be done in an automated fashion, at scale and at speed, by algorithms. That's one example, and it's something that really took hold, became operationalized, and started being injected into workflows about six or eight years ago. There's still a lot of work being done, but now it's being commercialized and it's becoming increasingly mature. There are other areas of AI that are a lot thornier and where progress has been slower, and one of those is the realm of natural language, human spoken language, think Siri on the phone or Alexa. And so, at the highest level, algorithms are intended to help accelerate and augment rote tasks that humans are undertaking in order to free them up to work on higher-level tasks.

The Cipher Brief:  These issues you’re talking about are critical, not just, to make life easier, to make things more efficient, but also so that America and the military can maintain its innovative and technological edge, which is being challenged for the first time in a serious way. In terms of machine learning and natural language processing, what are some of the ways that you are seeing this operationalized in the national security space and what are some of the things that we should be looking for in the next three to five years?

Raymond:  That's a good question. I'd probably break my response down into three key messages, and I'll unpack each of these. The first is that a lot of learning has been occurring over the last several years. It has come from actually pairing operators and analysts with algorithms to really impact mission, and it requires a partnership, and in some cases an entirely different organizational model than exists within these national security organizations today. So I want to talk about this partnership approach that's required in order to use and fully leverage these algorithms. The second, I'd say, is that, especially in the world of natural language processing but also more broadly, we've seen just an absolute explosion in the performance of these algorithms over the past 18 months. This has largely gone under the radar in the national news and most of the publications that I and others read. But we're really in a golden age right now, and a lot of new and exciting use cases are being unlocked because of these performance gains. And the third thing I'll talk a little bit about is that the use cases are becoming more crystallized, and maybe that's where I'll begin, especially for natural language algorithms, and I'm talking natural language processing, natural language understanding, and natural language generation.

We're teaching algorithms not only to identify people, places, and organizations, but also to understand that content and then generate new content based on it. The use cases for that, quite frankly, have been a little mysterious. Primer was founded in 2015, before this golden age was spawned. Until late 2018, when researchers at Google released an algorithm called BERT, these algorithms were brittle. They were good at narrow tasks, but they required a lot of training, and there were difficulties when trying to port across different document types. With those constraints, it was really challenging to find wide channels to play in and to really add value for the end users. But I think today there are three use cases that transcend all of our national security customers, where we're finding that the algorithms are really fantastic at augmenting what humans are already doing.

I'll call the first one 'finding needles in a haystack.' Suppose you are concerned with supply chains and you have maybe 5,000 suppliers that you care about for some type of complex system that you're building. These suppliers are distributed globally, and you're concerned about disruptions to the supply chains or malicious acts, for example. But how do you monitor the news for bad things happening to 5,000 companies? That's a lot of Google News alerts. We're able to train algorithms that continuously scan hundreds of thousands or millions of documents looking for instances in which some small supplier may have been subject to a cybersecurity attack, or their headquarters burned down, or their CEO is caught in some scandal, then immediately cluster all of the articles or reports around that event and surface them for review. So that's one use case. It's this finding-a-needle-in-a-haystack problem that today is being done by large groups of people, and they're not even able to wrap their arms around all of the information that's being consumed.
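To make the 'needles in a haystack' pattern concrete, here is a minimal, stdlib-only Python sketch of the idea: scan a stream of documents for mentions of watched suppliers alongside risk terms, and group the hits by supplier. The supplier names, risk terms, and documents below are invented for illustration; a production system would use trained classifiers and event clustering rather than simple keyword matching.

```python
# Sketch of the "needles in a haystack" use case: surface documents that
# mention a watched supplier together with a risk term, clustered by supplier.
# All names and documents here are hypothetical.
from collections import defaultdict

SUPPLIERS = {"Acme Avionics", "Borealis Castings"}   # watchlist (invented)
RISK_TERMS = {"breach", "fire", "scandal", "recall"}  # crude event lexicon

def surface_alerts(documents):
    """Group documents that mention a watched supplier AND a risk term."""
    alerts = defaultdict(list)
    for doc in documents:
        lowered = doc.lower()
        for supplier in SUPPLIERS:
            if supplier.lower() in lowered and any(t in lowered for t in RISK_TERMS):
                alerts[supplier].append(doc)
    return dict(alerts)

docs = [
    "Acme Avionics confirms data breach at its Taipei plant.",
    "Borealis Castings posts record quarterly earnings.",
    "Warehouse fire disrupts shipments from Borealis Castings.",
]
print(surface_alerts(docs))
```

Only the first and third documents pair a watched supplier with a risk term, so the earnings story is filtered out and the two alerts are clustered under their respective suppliers.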

The second use case is 'compression and summarization.' There was a study done recently that looked at analysts and said, if you were a country analyst and you were covering just a mid-tier country in terms of how much is written on it, for example Paraguay, then in the mid-nineties you may have had to read about 20,000 words per day in order to stay up on what's going on with that particular country. Fast forward to 2016, four years ago, and you had to read around 200,000 words per day to stay abreast of developments. And the forecast was that between 2016 and 2025 it was going to increase tenfold, from 200,000 to 2,000,000 words per day. This is the amount of information that's available, and required, in order to stay ahead of developments, whether you're covering a country, a particular organization, a company, or an issue. It's growing at an exponential pace, so you can't hire your way out of that problem. You need to find ways to compress and summarize all that information, and you do that by pairing analysts or operators with algorithms. That compression and summarization is the second key area in which users and organizations are finding tremendous benefits.
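As a toy illustration of compression and summarization, the sketch below does simple extractive summarization: it scores each sentence by the average document-wide frequency of its words and keeps the top-k sentences in their original order. Real systems of the kind described here use trained abstractive models; this stdlib-only version only conveys the flavor of the task.

```python
# Minimal extractive summarizer: frequent words mark important sentences.
# This is an illustrative sketch, not a production summarization model.
import re
from collections import Counter

def summarize(text, k=1):
    """Return the k highest-scoring sentences, in original order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)
    chosen = set(sorted(sentences, key=score, reverse=True)[:k])
    return " ".join(s for s in sentences if s in chosen)
```

On a three-sentence note where two sentences share the repeated words "supply chains," the shorter of those two scores highest per token and is kept as the one-sentence summary.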

And then the last use case is what we call breaking the left-screen, right-screen workflow. This is a broad workflow that's existed probably since the dawn of modern intelligence analysis in World War Two, where I'm reading reports that are coming in on my left screen, and then I'm taking insights from those reports, or details that are relevant or that I care about, and I'm curating them in some type of knowledge graph on the right screen, which today might be an Excel sheet, a wiki, emails, a Word document, or a final report. We're getting really, really good as a machine learning community at automating that jump.

So, find all the people in these 10,000 documents, and then find all the details about these people, and then determine how all these people are linked to one another. Do that continuously, create new profiles for people who are mentioned, including those who are just popping up, and then show what new information has been discovered. This has the potential to unlock hundreds of thousands of hours of manual curation still being done in 2020, and we're finally at a point with the performance of the algorithms where we can begin automating a lot of that work and freeing people up to do what they're best at, which is being curious, pursuing hunches, and thinking about second- or third-order analysis. And so that's what's really exciting about where we're at today.
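A toy sketch of that left-screen, right-screen automation: pull name-like strings out of incoming reports and merge them into running profiles keyed by person. Real pipelines use trained named-entity recognition and entity-resolution models rather than a regex; the names and reports below are invented examples.

```python
# Toy entity extraction + profile building: each detected name accumulates
# the reports that mention it. Hypothetical names; a regex stands in for NER.
import re
from collections import defaultdict

# Matches two consecutive capitalized words, a crude stand-in for a person name.
NAME = re.compile(r"\b([A-Z][a-z]+ [A-Z][a-z]+)\b")

def build_profiles(reports):
    """Map each detected name to the list of reports mentioning it."""
    profiles = defaultdict(list)
    for report in reports:
        for name in NAME.findall(report):
            profiles[name].append(report)
    return dict(profiles)

reports = [
    "Maria Silva met port officials.",
    "New filing names Maria Silva and Omar Haddad.",
]
print(build_profiles(reports))
```

Note how the second report both extends an existing profile (Maria Silva) and creates a new one (Omar Haddad), which is exactly the "people who are just popping up" behavior described above.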

The Cipher Brief:  What are you seeing in terms of acceptance of these new approaches amongst these organizations? Because we’re still in a time where there’s a disparity amongst skill sets and knowledge and understanding as it relates to not just advanced technology but basic technology in many organizations.

What you're talking about now is a blurring of that line, with some of the most advanced technology that's out there working alongside people who may not understand it or may not be open to or accepting of it. What do you see in terms of how this is being accepted and practically used in organizations where it may come into contact with someone who's not from a tech background and has to learn how to work with this new technology and, most importantly, trust it?

Raymond:  That's a great question. It brought to mind something that Eric Schmidt said a couple of years ago, which was, "The DoD doesn't have an innovation problem. It has an innovation adoption problem." I think there's a lot of truth to that, but since Mr. Schmidt made that remark there have really been some incredibly exciting developments across the IC and DoD. For us, we've benefited tremendously through our partnership with In-Q-Tel, which originally was the venture capital arm of the CIA and represents the IC and DoD, but also through really innovative programs like AFWERX, which the Air Force stood up in order to rapidly identify and integrate technology into the Air Force's mission. There is also work going on with the DIU and the Joint Artificial Intelligence Center, and then some additional work with the Under Secretary of Defense for Intelligence. What we're witnessing is an explosion of activity throughout the space, creating novel and exciting contracting pathways, with huge amounts of money being invested in artificial intelligence and a new sense of urgency that you didn't see before. Secretary Esper has been continually saying that artificial intelligence is one of the top priorities, if not the top priority, for the Department of Defense. We've seen that reflected in the budget for spending this year, and we also see it in the posture of organizations that are recognizing the need to innovate. So the good news is that we see prioritization, pathways, and funding.

The challenge with all of this is that although algorithms and machine learning solutions reside in the realm of SaaS, they're fundamentally different from buying the Microsoft Office Suite, getting it loaded onto your computer, and then using it. Let me share an anecdote. At Primer, we have an office in DC, but our primary offices are in San Francisco, in the financial district. Every single day we look out the window and see dozens if not hundreds of self-driving cars pass by our building. All of these self-driving cars have the different electronics on the roof and everything, but you also see people behind the wheel, and that means training is underway. Tesla and almost all the major auto manufacturers have "self-driving" cars, but only under limited circumstances, in specific conditions. The reality is that an enormous amount of training is still required in the realm of self-driving vehicles in order to make it a consumer product.

Now with that context, when you're talking about deploying object recognition and natural language algorithms against the hardest of the hard problems that the IC and DoD are grappling with, and at the speed that they're having to grapple with them, organizations fundamentally need to be reconfigured in some cases in order to make maximum use of machine learning solutions. What that means, practically speaking, is that the expertise needed to train the models has to be tapped continuously. Models are only as good as the training and the training data that they receive, and usually that expertise resides on classified networks and in the heads of the officers, analysts, and operators who are engaged in this every day. So it requires their continuous engagement. Serious questions also arise from training models to perform a specific task: who in the organization now owns that model? Who is in charge of updating it, reviewing the training data being produced across the organization, and integrating it? How should organizations think about that?

And then finally, where do we really want to go in terms of which tasks are going to get us the most bang for the buck early on? There's a figure that almost 80 to 85 percent of commercial AI initiatives have not delivered what people initially thought they would. We think that number will come down as average performance increases and as learning occurs on both the customer side and the company side, but these are still fairly early days, and these are difficult technologies to leverage effectively. So it really does require a tight partnership, from the top down, in these organizations in order to make it a success.

The Cipher Brief:  What's your estimate on when we'll see this adopted, when we're comfortable with it and it's part of our everyday life in the national security community?

Raymond:  That's a great question. I think soon. There's one thing we haven't talked about very much, and that is the infrastructure being put in place that is going to unlock a lot of opportunities: cloud infrastructure. Obviously the JEDI contract with the DoD has been in the headlines recently, but having a common cloud infrastructure, with the computing power that entails and the ability to move data across enclaves really easily, is absolutely essential for operationalizing machine learning solutions at scale. Getting that foundation in place, I think, will really unleash a lot of innovation in areas that may not have it yet but can see it. Coming back to the topic of performance gains, take the task of finding people, locations, or organizations in documents. If you hire a group of people, do some training, have them go through a thousand documents apiece, and ask them to find all the people, places, and organizations across all those documents, around 95% precision is typical for humans. Humans will miss some, or may not realize that the same person's name is spelled two different ways, and so on and so forth. We're now at a point where we're approaching 96% to 97% precision for a number of these tasks, and that's just in the past six months. So with these gains, where we're at or above human-level performance on specific tasks, I think this will gain traction quite quickly.
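For readers curious how precision figures like those quoted above are computed, here is a minimal sketch: precision is the fraction of extracted entities that also appear in a human-annotated "gold" set. The entity lists are invented; note the misspelled "Jon Doe" variant, exactly the kind of name-spelling problem mentioned above, counted as a miss.

```python
# Minimal precision metric for entity extraction, evaluated against
# human-annotated gold labels. Entity names are hypothetical examples.
def precision(predicted, gold):
    """Fraction of distinct predicted entities that appear in the gold set."""
    if not predicted:
        return 0.0
    return len(set(predicted) & set(gold)) / len(set(predicted))

extracted = ["Paris", "Acme Corp", "John Doe", "Jon Doe"]  # "Jon Doe" is a spelling variant
gold = ["Paris", "Acme Corp", "John Doe"]
print(precision(extracted, gold))
```

Three of the four extracted entities match the gold annotations, so precision is 0.75; a companion recall metric (fraction of gold entities recovered) is usually reported alongside it.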

And then finally, the workflows for how we integrate this at scale for these organizations have really started to clarify as well. That's a concept we call CBAR: you've got to connect to the data sources first. I think this common cloud infrastructure is going to unlock some opportunities there; we see a lot of really cool and exciting innovation going on on the connect side. You have to connect, then you have to build the models, and there are now really lightweight, easy user interfaces for training the models that exist and are being used.

You then have to unleash those models on the data, analyze that data, and inject it into workflows; that part is crystallizing as well. And then you ought to be able to feed all of those insights back out into whatever products or systems the end user cares about, through APIs or various reporting mechanisms. A couple of years ago it wasn't clear that that connect, build, analyze, report flow was how this needed to be architected. I think that's now going to be the standard. And with the infrastructure coming into place and the learning that is occurring within these organizations, I think we're going to witness a virtuous cycle of innovation.
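The connect, build, analyze, report flow can be sketched schematically as four stages, each feeding the next. Everything below is an invented placeholder, not Primer's actual pipeline: "build" stands in for model training with a trivial keyword filter, and "report" stands in for the API or reporting layer.

```python
# Schematic CBAR (connect, build, analyze, report) pipeline.
# Each stage is a placeholder; real stages would be data connectors,
# model training, inference, and downstream API integrations.

def connect():
    """Connect: pull documents from a (hypothetical) data source."""
    return ["doc one", "doc two"]

def build(docs):
    """Build: 'train' a model; here just a trivial keyword predicate."""
    return lambda d: "one" in d

def analyze(docs, model):
    """Analyze: run the model over the data to extract hits."""
    return [d for d in docs if model(d)]

def report(hits):
    """Report: package insights for downstream products or APIs."""
    return {"hits": hits, "count": len(hits)}

docs = connect()
model = build(docs)
print(report(analyze(docs, model)))
```

The point of the structure is that each stage has one input and one output, so any stage (the connector, the model, the reporting sink) can be swapped out without rearchitecting the others.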

The Cipher Brief:  Any final thoughts? If you had to give one takeaway for our folks, these organizations, the national security community that you’re talking about just from this conversation, what would it be?

Raymond:  The overarching message that I would communicate is commitment. These initiatives, whether they're in the natural language domain where Primer is playing or in other machine learning domains, really require an investment by an organization, and a commitment, in order to make use of them. It's just a fundamentally different problem set from a number of other technological solutions, whether hardware or software, that we've seen over the past couple of decades. And that's a risky endeavor. But, as you mentioned earlier, our competition, the Chinese, the Russians, the Iranians, and others, are making big investments in these areas. They are vertically integrating, and we've always taken a different approach than that. What we're seeing here is just unbelievable innovation coming out of the technology sector: a lot of companies that are incredibly eager to do business with the IC and DoD and contribute to their missions, and a level of technological maturity that has now unlocked a lot of opportunities that didn't exist for them even in 2018 or 2019.

Read more expert-driven national security insights, analysis and opinion in The Cipher Brief

