In recent years, Russia has waged a new cold war on the United States through its actions, though we have only recently come to grips with this reality. As in any war, the Kremlin's objectives are political. The principal weapon in this conflict is information, and the evidence of Russia's use of it in Europe and the United States is clear. With the advent of ever-expanding and precise communications technologies capable of manipulating public opinion at the individual level on a massive scale – in particular social media – the tools and tactics of influence developed over the course of the 20th century can now alter perceptions of reality to such a degree that they can shape societies, influence election outcomes and undermine states and alliances.
Russia’s well-financed and deliberate intervention in American political dialogue, including the 2016 election, is part of a broader effort to: undermine America’s faith in its free institutions and diminish U.S. political cohesion; erode confidence in western democracies and the credibility of western institutions; weaken trans-Atlantic relationships, including NATO; and diminish the international appeal of the United States as well as reduce American power abroad, as I noted earlier this year in a co-authored report, Shatter the House of Mirrors.
As so much of America's political dialogue takes place over social media, and with 67% of Americans receiving at least some of their news over social media, it is not surprising that these platforms have become a target for Russian agents as well as their bots and trolls in an effort to create trends and increase the popularity of false narratives. Alongside state-controlled media outlets, such as RT and Sputnik, Russia has weaponized information and created opportunities for political subversion far more effectively than the Soviet state ever could.
Specifically, during the 2016 elections, the Russians were able to shift and shape the flow of ideas online by flooding networks with excess information or disinformation. Through the use of bot armies and paid internet trolls, they can make something trend on their own, resulting not just in greater internet penetration, but in a reaction within the traditional news media audience as well, where, for those networks and publishers, tweets, Facebook likes and web-page visits all translate into more coverage.
For example, data from the state of Michigan in the autumn of 2016 suggests a flood of false information and junk news far out of line with similarly studied elections in Europe. In the first 11 days of November 2016, there was a three-to-six-fold surge in "junk" tweets, and for every piece of professionally produced news circulating in Michigan during that period, there were two other stories from little-known or patently bogus news outlets. Michigan was a surprise win for Donald Trump by only 11,612 votes, or 0.3% of the total.
Similarly, between mid-September and mid-October 2016, researchers at USC determined that bots and trolls were producing 20 percent of the political content on Twitter, with 75% of that being "pro-Trump." On Sept. 6, 2017, Facebook confirmed publicly that between June 2015 and May 2017, it had sold 3,000 ads for $100,000 to accounts and pages operated from Russia. Some of those accounts were traced back to the same Russian troll farm that had been reported, even before the general election, in The New York Times Magazine.
The 2016 election in the U.S. was not the first time the Russians used this approach, nor will it be the last. They used similar tactics during Russia's parliamentary elections of 2011 and during the Scottish independence referendum of 2014, and there is some evidence of a Russian hand during the debate over "Brexit." No doubt, there will be echoes of Russian involvement in the Catalonian independence movement, and Russian social media propaganda still infects Twitter, Facebook and other social media outlets.
Significantly, the bulk of the paid content during the 2016 election was about issues that divide the American population, such as guns, race, LGBTQ rights and immigration, rather than any individual candidate. Russia's focus on divisive themes began during the Cold War and has gained a foothold in the social media space with little sign of lessening, even since the end of the 2016 election.
Shining a light on these activities is one of the most important steps that the private and public sectors can take to help combat Russian disinformation activities. While the focus should remain on the Russian provocateurs, the social media companies can take steps to reduce their platforms' susceptibility. First, while legitimate user-generated content should remain governed by a platform's terms of service, there is no doubt that political advertising and similarly sponsored content must be clearly labeled as such, and that ties to foreign governments should be conspicuously highlighted.
Twitter has already announced steps in this direction, and Facebook is working through these issues as well, including consideration of making the content of political ads publicly available. These are significant first steps, but they are just first steps. Importantly, the challenge goes beyond paid advertising.
The use of seemingly legitimate sites and handles raises a more difficult challenge: what to do not just about paid advertising, but about potentially deceptive social media accounts that appear legitimate on the surface. Indeed, Facebook has just acknowledged that 126 million Americans may have seen posts put out through 120 fake Russia-backed pages that have since been taken down. Examples include "Heart of Texas," which was designed to sow anti-Muslim discord, and "Blacktivist," a Twitter account putting out racially divisive messages.
Clearly the Russians did not create the issues that cause division in the United States, but they are exploiting them and exacerbating the problems. Russia will overtly and covertly support organizations seeking secession or seeking to politically divide the United States, and it will covertly push protest movements toward extremes and ultimately violence, just as it did during the Cold War. While social media companies must continue to ramp up their ability to detect these fraudulent sites (which likely violate terms-of-service agreements), ultimately, defeating Russian efforts will require going beyond regulatory changes to social media platforms – there must be normative changes in this nation in terms of how we absorb and process information.
As Congress grapples with this issue, it must make education a cornerstone of our defense and help to increase the public's resistance to foreign influence and disinformation. It is not just a matter of teaching students how to think critically but of requiring students to be well schooled in the art of media literacy. Specifically, programs will need to help provide people with the ability to distinguish ads from content and to tell the difference between real and fake websites. Equally important will be reinforcing the need to cross-check information, including inculcating the importance of actually clicking links in stories and reading past the headlines. On a more holistic level, our students and citizens need to be more conscious about being critical of what they read, including understanding, at least at a basic level, the difference between balance, bias and "truth."
Public attention must remain focused on the broader Russian effort to undermine the political cohesion of the United States by breaking our faith in free institutions. Social media is but one vector by which the Russians are infecting the information space. In the end, the Russian propaganda tool has a classic foreign policy objective – one eerily similar to that of the Soviets during the Cold War, long before the advent of the internet: weaken American power and influence at home and abroad, and exploit the vacuum that creates to help restore Russian "greatness" and Putin's global influence.