Robert Griffin is the Managing Partner of DVI Equity Partners, a private equity investment arm of Diamond Ventures, where he focuses on technology investments in disruptive or disintermediating technologies for national security, law enforcement, critical infrastructure, and emerging trends.
Bob has been a key player and successful serial entrepreneur in the software and services industry for more than 40 years. In October 2011 he facilitated the sale of his company, i2, to IBM, where it joined the Industry Solutions Software Product Group; he remained General Manager of the Safer Planet and Smarter Cities brand until February 2017.
Information warfare (IW) has become the most prolific battle space of our time. It is, by definition, an egalitarian environment to participate in; it reaches and affects masses of people easily; attribution is difficult to establish; and it is prosecuted in a clandestine manner. Information warfare has become the new Cold War.
This is not a new problem; there have been propaganda wars for hearts and minds since the dawn of time. What makes this battle space the most challenging is the speed at which information flows and the variety of ways in which it can be consumed.
Given the egalitarian nature of the medium, controlling the content in the information flow is simply a Sisyphean task. One of the keys to a successful IW defense, however, will be the ability to establish the validity and veracity of the information being presented: not just to provide a facts-first approach, but to signal the content's origin, its pathways of propagation, and any subsequent manipulation. This challenge is complex enough for text-based sources (i.e., news stories, social media posts, and traditional written formats); it becomes significantly more complex for time-based, auditory, and visual media (e.g., images, video, and audio).
Various studies have been conducted, and the unfortunate truth is that “false content propagates at a faster rate than valid content,” and false visual media propagates the fastest. People tend to believe what their eyes see and their ears hear more unconditionally than what they read. Given the acceleration and adoption of AI/ML technology, one of the largest emerging risks to IW defense is the advancement of DeepFakes.
DeepFakes are a decisive factor in IW, whether at the nation-state level or in the commercial arena. DeepFakes can trigger what is known as the Mandela Effect (i.e., when a multitude of people believe that an event occurred when, in fact, it did not). More importantly, DeepFakes exploit a soft underbelly of our technological defense: the very detection approaches we currently use can be used to train adversarial AI systems to create more sophisticated DeepFakes.
The advent and popularity of open-source (OS) AI tools, conjoined with OS code repositories, has dramatically lowered the barrier to entry for those who wish to create DeepFakes. In fact, much of today’s heuristic-based detection is predicated upon pre-existing OS libraries, so the accuracy of the detection outcome is anything but certain. The current paradigm on the detection side, building upon pre-existing libraries and relying on short-lived heuristic approaches, is ultimately a losing one. Yet research labs, whether in academia or industry, are driven by metrics and the need to produce “low-hanging fruit” wins (i.e., “detect what you can”).
To really understand the risk, I think it is best to categorize the threat actors into three groups. The first group, “trolls,” comprises people who launch disinformation campaigns against specific companies or individuals out of spite or dogma, not profit. The second group, “profiteers,” are those who seek pecuniary gain by using disinformation to damage a company. The third group, “foreign flags,” are state-backed entities directed to target private companies with fake news, causing brand damage so as to redirect business to a company in their own country. As many as 70 countries are thought to have organizations that deal in this type of disinformation, and state-level near-peer or great-peer competition could be involved.
The problem, at its core, is that this is basic economic warfare, and economic warfare is as old as history. We all know that in a digital economy, supplies are purchased with the click of a button, and deliveries can arrive as early as the same day or the next. Information warfare is the new “sinking of merchant vessels,” particularly if fake news can prevent or redirect purchases. Our decisions are being influenced and manipulated amidst tsunami-like news cycles, a plethora of Big Data, and technological defense mechanisms that are not quite there. It is a Decision Engineering War.
Therein lies the real news. The DeepFake market is a disruptive, tectonic shift in the Big Data ecosystem; “Big Data” should no longer be considered processable unless it has been validated through mechanisms such as DeepFake screening. Why would we want to pay to move, store, and distribute DeepFakes just to confuse, and possibly drive up the cost of, our decision-making processes? On the road we are on, we cannot win while spending this amount of money against the zero-to-low-cost production of DeepFakes.
While we are rooted in the idea that fighting a war on multiple fronts is not a good strategy, I believe that decision has already been made for us, because this new battlespace is, by its very nature, multi-front and multi-domain. Today, we are already fighting an Artificial Intelligence War, an Economic/Decision Engineering War, and a Big Data War. As John McLaughlin rightly noted in his May 31, 2020 Cipher Brief article, near-peer and peer great powers may be moving ahead of the U.S. on several fronts simultaneously. He cited “artificial intelligence and machine learning”; I believe it is also advanced analytics and the new Big Data.
I worry that, despite the new DeepFake legislation, the current solution-set approach likely will not change, and we will continue to build a Maginot Line. We need to look at innovative, non-traditional approaches. We have to change our direction and our strategy.
I fear that the current path being taken by the technological community is just a tiny seawall that will not save us from the DeepFake Tsunami that is coming. Today, less than 0.5% of available data is actually being analyzed, and only a fraction of the hundreds of hours of video uploaded every minute has ever been analyzed (and shallowly, at that, versus a deep analysis). We need to change the current strategy, because it feels as if we are playing a game of chess while our near-peer and great-peer competitors have gone on to play a multi-dimensional game of Go. My colleague, Dr. Steve Chan, has extensively examined this phenomenon and has developed insightful multi-domain advanced analytics algorithms for this study space; this new algorithmic pathway is one of the innovative, non-traditional approaches I mentioned earlier.
These are tense times. History has shown that a single incident can incite a nation, or even the world. Our society has problems enough dealing with real incidents; it simply cannot cope with a Tsunami of fake ones. The quality, sophistication, and ultra-realism of DeepFakes are so high that they have the very real potential to sway voters, consumers, investors, everyone. The fact that DeepFakes can be generated very quickly by relative amateurs at an ultra-low barrier to entry is alarming enough. But what of the Pandora’s Box waiting around the corner when adversaries start generating DeepFake disinformation at enterprise scale? Today, we can barely deal with one or two.
Read more expert-driven national security insight, perspective and analysis in The Cipher Brief