The key question as representatives from Facebook, Twitter and Google testify Tuesday and Wednesday before Congress is not how Russia used social media to interfere in last year’s presidential election, but rather what role U.S. voters, the federal government and social media companies should play in building resiliency against such disinformation campaigns in the future. In the short term, however, collaboration between the government and private industry to institute ad transparency may minimize the impact of nefarious foreign actors.
- Tuesday, Colin Stretch, the general counsel of Facebook, Sean Edgett, the acting general counsel of Twitter, and Richard Salgado, the director of law enforcement and information security at Google, will testify in front of the Senate Judiciary Subcommittee on Crime and Terrorism to discuss extremist content and Russian disinformation online.
- Wednesday, Facebook’s Stretch, Twitter’s Edgett, and Kent Walker, Google’s senior vice president and general counsel, will also testify in front of the House Intelligence Committee, following an earlier hearing at the Senate Intelligence Committee on the same topic.
While lawmakers are likely to criticize the social media executives this week for the exploitation of their platforms, the companies will likely respond that, in a free society, no private company should determine what is true and what is not. Social media, hailed by internet idealists as a powerful communication tool and a great equalizer, has also proven to be a critical vehicle for amplifying deceptive narratives to susceptible audiences.
- “Twitter’s open and real-time nature is a powerful antidote to the spreading of all types of false information,” wrote Colin Crowell, Twitter’s vice president of public policy, government and philanthropy, in June. “This is important because we cannot distinguish whether every single Tweet from every person is truthful or not. We, as a company, should not be the arbiter of truth.”
- The sheer scale of these platforms also means that policing propagators of disinformation – foreign or domestic – among billions of other users will never be perfect.
However, Russian use of these prominent internet platforms was merely one aspect of a much larger and more coordinated Russian effort to undermine faith in Western democratic institutions. Solving the social media problem will not solve the Russia problem, but experts agree that, given the continuing success of Russia’s interference in U.S. politics, both the Kremlin and other nefarious actors will execute similar information operations against vulnerable open societies in the coming years.
- Dezinformatsia, the Russian umbrella term for disinformation operations, seeks to muddy the political and social waters of adversaries and undermine public trust.
- Disinformation is spread through both overt state-sponsored media, such as Russian channels RT and Sputnik, and covert operations, such as weaponized hack-and-leak operations, cutouts, and compromising material, or Kompromat.
- According to Facebook’s own statements, the Kremlin employs a network of paid trolls – most notably the Internet Research Agency – to amplify divisive opinions and misinformation that exploit societies’ political flashpoints, from immigration and racism to gender identity and gun rights. Between 2015 and 2017, the troll farm reportedly posted about 80,000 times – over 200 posts a day. Roughly 29 million people received that content directly in their news feeds, and at most another 126 million may have been exposed to the Kremlin-directed disinformation through likes and shares.
- According to data from six of the 470 Russian Facebook pages the company has identified – namely Blacktivists, United Muslims of America, Being Patriotic, Heart of Texas, Secured Borders and LGBT United – Russian disinformation content was shared over 340 million times. The true total is likely orders of magnitude greater, given that these six pages represent just over 1 percent of the known Russian pages.
- Facebook disclosed that known Russian agents bought some $100,000 in advertisements – around 3,000 ads in total – targeting specific demographic audiences and geographies, such as the critical election swing states of Michigan and Wisconsin.
- Google has acknowledged that Russian trolls uploaded over a thousand videos to YouTube across 18 different channels.
In the short term, increased transparency around the sources and funding of ads on social media will minimize the risk of manipulation. Social media companies have begun to implement this transparency, but online ad buys are often automated, making vetting and policing difficult.
- Last week, Twitter announced that it will ban Russian state-sponsored media channels RT and Sputnik from purchasing ads and will require election-related ads for candidates to disclose who is purchasing them and how they are being targeted.
- Rob Goldman, Facebook’s vice president in charge of ad products, said the company is designing new tools that will allow users to click on a link to see all the ads any given advertiser is running, even ads that did not initially target them. Goldman also said the company will build an archive of federal election ads that appear on Facebook, including the amount spent and the number of times each ad is displayed.
Ultimately, informing users about potential disinformation may be the primary way to navigate the complex and easily manipulated digital information landscape of the future. The real bulwark against Russian disinformation will be consumers of news on social media learning to effectively distinguish fact from fiction, not the policing of content by privately owned platforms.
Twitter, Facebook, and Google did not respond to requests for comment.
Levi Maxey is a cyber and technology analyst at The Cipher Brief. Follow him on Twitter @lemax13.