Could AI-Driven Info Warfare Be Democracy’s Achilles Heel?

By Doug Wise

Douglas H. Wise served as Deputy Director of the Defense Intelligence Agency from August 2014 until August 2016. Following 20 years of active duty in the Army, where he served as an infantry and special operations officer, he spent the remainder of his career at CIA.

Ripped from the pages of science fiction novels, artificially intelligent, self-aware weapons systems could become a global existential threat, if harbingers of doom like Elon Musk are to be believed.

What worries me more is the combination of artificial intelligence with a weapon that is already hurting us and our democracy: the weaponization of information.

Our adversaries are already using sophisticated cyber tools to deliver these info-weapons into the heart of our social and political fabric, attacking our information systems, media outlets, social media and political processes. By aiming these weapons at our greatest vulnerabilities, adversaries can trigger a digital rot within our social and political structures that can have as significant an effect as full-scale war.

And our greatest vulnerability? It is our democratic structures and processes. Healthy democracies require a factually well-informed electorate, along with a political and social environment that allows divergent views to co-exist yet provides processes through which societies can resolve those differences and, perhaps agreeing to disagree, move forward with a negotiated purpose. All of this must be managed by a government, trusted by the electorate, that protects, nurtures and invests in not only the processes but the outcomes.

Though effective and stable, democracies are inherently fragile. Our adversaries have recognized this and begun to turn our own strengths and our own democratic processes against us. Through the structured use of weaponized information, they have subjected us to both overt and clandestine efforts to undermine our core democratic principles, and they have used, and continue to use, highly effective digital tools to misinform and mis-educate our electorate and to create divisions that are nearly impossible to heal.

Engineered at the enterprise level, these tools appropriate the truth, cloak it in sophisticated falsehoods, and are deployed to exaggerate and entrench the differences in our society to the point where our disagreements can find no resolution, all with the ultimate goal of hemorrhaging trust between ourselves and our governing institutions. By turning us against ourselves, these attacks have already made us unsure of our identity as a nation, and our core values risk becoming uncertain and conflicting. It is possible that the only shots fired during these “wars” will be those fired to quell civil unrest.

While AI-driven autonomous weapons systems are in the realm of science fiction, the U.S. intelligence community’s report of January 2017 on Russian meddling in the U.S. election clearly shows that sophisticated attacks using weaponized information are being made against the U.S. and other democratic nations now. That’s NOT science fiction but science fact.

This is the bad news. The only good news is that these attacks – the packaging of misinformation, the crafting of the message and the procurement of the delivery mechanisms – are today being hand-crafted by actual human beings. It is a manpower-intensive effort, combining “art” with expertise, and it is hard to scale to the point where the volume and number of attacks become substantial, let alone mounted against many countries simultaneously rather than just one.

By carefully linking the themes of these attacks, the adversary could create the perception that extreme views are widely held and almost universal – but that still takes a lot of humans driving a lot of keyboards.

No matter how capable Russia’s Internet Research Agency (IRA) may be, it has real physical and resource limitations. While the IRA workforce is talented and has a deep understanding of U.S. politics and culture, its boutique efforts can produce only a finite number of products that look authentic, appear to be of U.S. origin, and hit at the heart of an issue so as to create the maximum amount of divisiveness, confusion and disruption.

These kinds of efforts can’t scale when using a human workforce. As is evident from the investigations into Russian efforts to disrupt the 2016 national election, the size of the effort by the IRA and others was modest, not because the Russians chose to keep it modest, but because they did only what they had the resources to do. They had to trade off the size of the effort against its effectiveness.
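
To make that constraint concrete, here is a rough back-of-envelope comparison in Python; every number in it is an illustrative assumption about workforce size and throughput, not a reported figure from the investigations.

```python
# Back-of-envelope comparison of hand-crafted vs. automated content volume.
# All numbers are illustrative assumptions, not reported figures.

HOURS_PER_DAY = 12          # assumed shift length per operator
POSTS_PER_HOUR_HUMAN = 10   # assumed hand-tailored posts per operator-hour
OPERATORS = 400             # assumed size of a troll-farm workforce

human_daily = OPERATORS * HOURS_PER_DAY * POSTS_PER_HOUR_HUMAN
print(f"Human workforce:    ~{human_daily:,} posts/day")    # ~48,000

# An automated pipeline is bounded by compute, not headcount.
SECONDS_PER_POST_AI = 2     # assumed generation time per tailored message
PARALLEL_WORKERS = 1_000    # assumed concurrent generation processes

ai_daily = PARALLEL_WORKERS * (24 * 3600 // SECONDS_PER_POST_AI)
print(f"Automated pipeline: ~{ai_daily:,} posts/day")       # ~43,200,000

print(f"Scale factor: roughly {ai_daily // human_daily}x")  # ~900x
```

Under these assumed figures, automation buys roughly three orders of magnitude in daily volume while removing the need to trade scale against per-message tailoring.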

Because of the impact that human-enabled technologies have already had on our democratic processes, it is useful to consider the ramifications of a 2016-style campaign projected into a time when such attacks are driven and enabled by very sophisticated AI systems.

This is the existential threat to us today and in the foreseeable future; I fear this far more than being targeted by self-aware autonomous weapons systems. It would be time well spent for Musk and other well-meaning scientists, technicians and business people to be concerned about the non-lethal threats posed by near self-aware attack mechanisms.

Done right, AI is two things: scalable and effective. Both attributes will be exceptionally useful to those mounting the next generation of attacks on western democracies. No longer will attackers such as Russia’s IRA have to make the hard tradeoff of scale versus effectiveness. AI systems will allow adversaries to expand exponentially the scale of the attacks, the rate of the attacks, and the number of targets, including the ability to link attacks across multiple targets.

Given that these attacks are effective only if they appear authentic, AI systems can apply big-data exploitation to a person’s (or institution’s) pattern of life, including identity recognition in its many forms, to create precisely tailored malicious messages, whether in text or in video and audio impersonation.
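
Mechanically, this kind of tailoring reduces to joining profile data with a generative model. A minimal sketch follows; the Profile fields are assumptions about what pattern-of-life data might contain, and generate() is a hypothetical stand-in for a text-generation model, not a real API.

```python
# Minimal sketch of why per-target tailoring scales: once profile data exists,
# "precision" is a lookup plus one generation call per target.
# Profile fields and generate() are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Profile:
    region: str             # assumed pattern-of-life attribute
    hot_button_issue: str   # assumed most-engaged divisive topic

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a text-generation model call."""
    return f"<model output conditioned on: {prompt!r}>"

def tailored_message(p: Profile) -> str:
    # Per-target marginal cost is one lookup and one model call --
    # effectively zero human labor, which is the scaling property at issue.
    return generate(f"post for a reader in {p.region} about {p.hot_button_issue}")

for p in [Profile("Midwest", "trade policy"), Profile("Southeast", "immigration")]:
    print(tailored_message(p))
```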

AI chatbots will be able to mimic human behavior to a degree of authenticity at which they will easily pass the Turing Test and sustain longer, more convincing interactions with their targets.

While this can be done today to a degree, such efforts remain small in scale and expensive, undertaken on a bespoke basis.

But tomorrow, it will be easy and cheap. Like traditional cyber intrusions, where the access point can permit collection but also allow the injection of malicious code, the more sophisticated AI systems of the future will be able to generate malicious influence messages in unlimited quantities, and will likely also be able to access and corrupt or manipulate private data in ways that are undetectable.

Can we do anything about this and, if so, what? At this point, perhaps we can do nothing more than be aware of the potential for increased harm caused by the malicious use of AI. While I disagree with Musk’s characterization of AI as an existential threat to mankind, I do believe he is on the right track when he says we must study, monitor and attempt to create international covenants to reduce the likelihood that the IRAs of the future are enabled by unimaginably capable technologies.

