As protesters filled the streets of Hong Kong calling for democratic rights, agents of influence were operating behind the scenes to shape how the protests were perceived by different audiences. In Hong Kong, the images reflected a demonstration movement, but in China, they were tailored to a narrative described by The New York Times as a “small, violent gang of protesters, unsupported by residents and provoked by foreign agents…running rampant, calling for Hong Kong’s independence and tearing China apart.” The Chinese efforts are textbook disinformation tactics.
Ahead of the 2020 election, the U.S. Government sees disinformation as a serious national security threat. The Office of the Director of National Intelligence says it is “preparing to confront a novel set of challenges related to the upcoming 2020 presidential elections amid proliferating disinformation threats.”
Social media sites like Twitter and Facebook are investing in efforts to identify disinformation, but the tactic is a relatively easy one for an adversary to use, and the best defense is for the target (you) to be able to recognize a disinformation effort when you see it.
The Cipher Brief is running a special series on disinformation over the next several months. We’ll introduce you to experts in the field who will share ways to identify disinformation efforts, help you think critically about what you see, and offer tips on what to do when you spot it.
Experts say that getting the terminology right is the first step so that people who are often targeted by these campaigns can have a common understanding of the terms used to identify different behaviors.
Glossary of Terms:
Disinformation - Intentionally using false or misleading information to deceive or manipulate. Disinformation can come from an individual, group, intelligence service, company, or government.
Misinformation - Spreading false information without the intention to deceive.
Coordinated Inauthentic Behavior - Spreading false information meant to deceive as part of a coordinated effort, whether by a group of individuals, a government, or a company. Inauthentic behavior can include things like running a network of fake accounts or buying "likes" to boost a social media post.
Troll - A real human user who posts inflammatory, abusive, or divisive content meant to trigger emotional responses. A company, government, or group may run a "troll farm" in which real humans run social media accounts intended to promote certain content, such as a particular political ideology.
Bot - An account that has been automated to perform a specific function, such as posting certain content or liking posts. Chatbots are able to interact with other accounts in a way that mimics real human interaction. Bots are becoming more sophisticated and may be operated partly by real people, making them harder to detect.
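To make the last definition concrete, here is a minimal, illustrative sketch of what a fully automated bot network amounts to in code. It is not drawn from any real operation or platform; the post_to_platform function and the account names are hypothetical stand-ins for a real social media API and real fake accounts.

```python
import random
import time

# Hypothetical stand-in for a real platform's posting API.
# A real bot would authenticate with a social media service and call its endpoint.
def post_to_platform(account_name: str, message: str) -> None:
    print(f"[{account_name}] posted: {message}")

# Pre-written talking points the operator wants amplified.
TALKING_POINTS = [
    "Share if you agree! #trending",
    "The media won't tell you this...",
    "Everyone is saying the same thing. Wake up!",
]

# A small network of fully automated accounts pushing the same content on a timer.
FAKE_ACCOUNTS = ["patriot_voice_482", "real_news_now_17", "concerned_citizen_901"]

def run_bot_network(rounds: int = 3) -> None:
    """Post a random talking point from every fake account, several times over."""
    for _ in range(rounds):
        for account in FAKE_ACCOUNTS:
            post_to_platform(account, random.choice(TALKING_POINTS))
        time.sleep(1)  # a real operation might wait minutes or hours between rounds

if __name__ == "__main__":
    run_bot_network()
```

The repetitive timing and duplicated text in a script like this are precisely the signals platforms use to detect automation, which is why, as Otis describes below, operators are shifting toward accounts that are at least partially human-operated.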
The Cipher Brief spoke with former CIA analyst and intelligence briefer Cindy Otis about why understanding these terms is critical to grasping the impact they have, and will have, on the way we think. Otis is the author of a book coming out next spring titled True or False: A CIA Analyst’s Guide to Identifying and Fighting Fake News. This is her brief:
Cindy Otis, Former CIA Analyst
"For most Americans outside of the national security community, Russia’s interference in the U.S. presidential election in 2016 was the first time they had heard of foreign countries using sophisticated information operations to influence events in other nations. Since the extent of Russia’s influence campaign became public knowledge, the subject of disinformation campaigns—operations using false information meant to mislead or deceive—has become a popular topic of discussion in the press and on social media."
But the increased interest and awareness, and the focus on Russia as the primary threat actor, have muddied the language used to define and attribute disinformation threats. Many in the public sphere now frequently use the existing vernacular around information operations to discredit opponents or to tie trending activity on social media to a foreign influence campaign. For example, a post on social media with which we disagree becomes disinformation or fake news. Or an account that seems particularly argumentative becomes a bot or a Russian troll. Every hashtag or post that goes viral is seen as part of a large Russian information operation.
Cindy Otis, Former CIA Analyst
"Complicating the issue and blurring the language further, bad actors are shifting tactics to make their disinformation campaigns more deceptive and harder to detect, even as media outlets and ordinary social media users get better at spotting false information. For example, malicious actors are increasingly relying on manipulated content rather than outright false information. They are also moving farther away from running large networks of fully automated (or “bot”) accounts on social media to push out content and focusing more on human-operated or at least partially human-operated accounts that are harder to spot."
The kinds of actors using disinformation or disinformation-like strategies are also changing in ways that are tougher to define. For example, the digital marketing industry and so-called “black PR” firms have been built around creating and boosting content, increasing website traffic, and making posts go viral in ways that are far from organic or homegrown. Whole political campaign strategies in places like India, the United States, and the United Kingdom have relied on those digital marketing firms to put content that is, at best, misleading about themselves, their political opponents, or their policy positions in front of potential voters any way they can. They also use supporters as “social media warriors” to pre-coordinate and plan content to make it go viral. The content might be true, misleading, or intentionally false. Is such a strategy disinformation, politicization, or just normal campaigning?
As more and more actors delve into information operations and tactics evolve, the tendency has been to paint everything on social media that raises an eyebrow as disinformation. But the language we use around disinformation—what it is, who is doing it, and their tactics—is more important to get right than ever for several reasons.
First, imprecise language elevates the Russian threat while downplaying or ignoring information operations conducted by other countries actively waging these campaigns around the world, such as China, Iran, and Saudi Arabia. In August, Twitter, Facebook, and Google removed accounts, pages, and YouTube channels that were part of a large and sophisticated disinformation campaign, linked to the Chinese government, intended to undermine the legitimacy of the ongoing protests in Hong Kong. Since 2014, leaders in countries like India, the Philippines, and Brazil have also successfully used information operations as a key pillar in their election strategies.
Second, it distracts from homegrown or domestic information operations. Those who lean to the far right and the far left around the world have waged disinformation campaigns to influence the outcomes of national elections. While a giant Russian conspiracy might be the more exciting explanation, the truth is that in most countries, the majority of false information circulating on social media platforms comes from your fellow citizens. Foreign actors then just have to capitalize on and amplify what is already there.
Third, inaccurate language also inadvertently paints disinformation as simply an election issue when the threat is much broader than that. There are countless examples of bad actors using disinformation to influence everything from company stock prices to the broader economy, and to target activists, stoke or quell political protests, and sow social and racial discord.
Lastly, the language we use around disinformation also affects how we view and counter the threat. Taking the time to carefully consider and define the threat actors and their tactics can give national security agencies, social media companies, and technology platforms a better understanding of the signposts of a campaign. On the other hand, if national security agencies or private companies define disinformation too broadly, they could threaten free speech by silencing real voices, or they could miss real campaigns that threaten our democracy.
Cindy Otis, Former CIA Analyst
"The sort of blurring of the language is exactly the point of sophisticated disinformation campaigns by foreign countries ultimately aiming to erode truth and shake the foundations of what we know to be true until we question everything. We must carefully consider what language we use in order to successfully identify and defeat these threats."
Don’t miss Cipher Brief CEO & Publisher Suzanne Kelly speaking with experts on disinformation at next week’s Intelligence and National Security Summit co-hosted by INSA and AFCEA in National Harbor, MD. Connect with us on LinkedIn and let us know if you’re going to be there.
Read more national security news, unique insights and expert analysis only in The Cipher Brief.