The Tragedy of Social Media Moderation

By Amanda Wilmore

Amanda Wilmore is a managing consultant with Guidehouse’s Open Source Solutions Team. She has over eight years of experience in performing open source data collection, analysis, and exploitation across the public and private sectors. Her work focuses on the development of methodologies that capture and extract insight from social media data as it relates to critical topics and issues that impact our nation and the world.

OPINION — Within today’s online environment, we’ve seen growing public awareness of “fake news,” the uncontrolled spread of disinformation (and its very real impacts), public acts of violence, and a national, often heated, conversation about structural racism. More and more, “hate speech,” “extremism,” and “toxic rhetoric” appear in the headlines we read each day. These national issues are especially acute on social media platforms, because the speed, availability, and connectivity that the internet and social media offer to extremist agendas are unparalleled. The events that took place on Wednesday in our nation’s capital highlight just how feverish this reality has become.

Largely in response to intensifying public pressure, mainstream social media platforms are attempting to rein in their malicious users. Indeed, platforms like Facebook, Twitter, and Reddit, which have long been resistant to “top-down” moderation, are taking formal action against users who create and promote hateful, incendiary, and/or false content by banning or suspending accounts and affixing warning labels to some types of user content. Yet by the time platforms do act, it is often too little, too late. Facebook and Twitter’s handling of Donald Trump’s incendiary content on January 6, 2021 exemplifies this.

Put simply, content moderation as a strategy for quelling malicious users is shortsighted when it comes to dealing with the real threats these users and their content pose to our society and the world.

As more platforms increase proactive content moderation, controversial groups and individuals are increasingly moving to alternative technology (“alt-tech”) platforms, where they continue to spread ideologies that carry real online and offline consequences.

Four major social media platforms are among those that have recently enacted new content moderation policies in efforts to prevent the spread of “hate speech,” disinformation, and the use of their platforms to catalyze violent acts. Twitter began placing “public interest notices” on tweets containing content that violates its policy “against abusive behavior.” Twitter also updated its policy to include the ability to block content containing links that redirect users to sites promoting “hateful conduct and violence.”  Facebook has removed entire groups (like those associated with Boogaloo and QAnon) and banned thousands of individual accounts for spreading disinformation, organized hate, and content that otherwise violates the company’s policies. Reddit has introduced a revised content policy and effectively removed 2,000 subreddits (i.e., communities) based on their identified features of hate, conspiracy, and violence. YouTube has banned a large number of accounts associated with foreign influence operations.

Many applaud this targeted content moderation as morally “the right thing to do”; others view it as inadequate or partisan. Some consider the moderation measures cynical attempts to do just enough to avoid regulatory, legal, or other externally imposed controls, while the platforms’ algorithms remain woefully inadequate at comprehensively recognizing the human behavior and language that can violate their policies. Still others view the increasing content moderation as political censorship, an infringement on users’ First Amendment rights that further entrenches partisanship and limits the free exchange of ideas. The reality is that the consequences of current content moderation measures support arguments on all sides of the issue.

Yet while the moral, constitutional, and practical debates drag on, a broader question remains largely ignored: how effective have these efforts been at achieving their intended goals?

Existing moderation measures may force a user spouting threats of violence and racial epithets off a platform with hundreds of millions, or even billions, of users, but the measures are insufficient. Like outlaws running from local sheriffs in the Wild West, malicious users and their followers simply pack up their hate and move to other sites, where they are afforded more privacy, anonymity, freedom, encryption, and like-minded support.

Consequently, the current implementation of content moderation often amplifies the original threat by pushing malign users into “safe spaces” and deeper echo chambers for their hate, which can make identifying and tracking threats of real violence more difficult. This movement of malicious actors to alt-tech platforms cannot be considered a successful outcome of “effective” content moderation. Why then is this outcome so frequently absent from current discussions?

Well, contemporary conversations are so focused on how social media platforms should handle extreme content that society has given little consideration to the downstream effects of banning users, suspending accounts, and affixing content warnings. Mainstream platforms remove malicious users from their communities without forecasting how those users will adapt and continue to spread the proscribed violent, incendiary, and hateful content.

As a result, the status quo of content moderation is an endless, reactive game of “whack-a-mole.” Platforms build iterative algorithms to find and recognize prohibited content and then remove the associated user or network, ad nauseam. Many users learn from these experiences and adapt, cloaking their content or profiles to skirt the algorithms. For instance, the Islamic State has consistently been able to fool and bypass Facebook’s algorithms by adjusting its online behavior while still preaching virulent ideological extremism and advocating violence.
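To make the whack-a-mole dynamic concrete, here is a minimal sketch in Python; the blocklist, the example post, and the cloaking trick are all hypothetical and invented for illustration, and real platform classifiers are far more sophisticated. It simply shows how a naive blocklist filter catches a literal match but misses lightly obfuscated text, which is roughly the kind of adaptation described above:

```python
# Hypothetical blocklist of prohibited phrases (illustrative only).
BLOCKED_PHRASES = {"attack the capitol", "burn it down"}

def naive_filter(post: str) -> bool:
    """Return True if the post contains a blocked phrase verbatim."""
    text = post.lower()
    return any(phrase in text for phrase in BLOCKED_PHRASES)

def cloak(post: str) -> str:
    """Simple evasion: substitute look-alike characters so literal matching fails."""
    return post.replace("a", "@").replace("o", "0")

original = "Time to attack the capitol"
evaded = cloak(original)            # "Time t0 @tt@ck the c@pit0l"

print(naive_filter(original))       # True  -- caught and removed
print(naive_filter(evaded))         # False -- the same message slips through
```

The adversarial loop the sketch illustrates (detect, remove, adapt, repeat) is the same one platforms face at scale, only with far more elaborate detection and far more creative evasion.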

The movement of malicious actors across the internet is also alarming. Over the past decade, the removal of users from mainstream social media sites has helped toxic rhetoric, disinformation, and other malicious content metastasize across the web. It has fueled the development and popularity of alt-tech platforms, many of which were built specifically in response to content moderation efforts and the banning of particular users.

While proactive content and community moderation is an essential, albeit delicate, online need, social media platforms cannot be solely responsible for defining and moderating what is and is not acceptable in our society. After all, social media and other internet-based service companies have adamantly denied any responsibility for the actions of their users, cloaking themselves in the “legal shield” of Section 230 of the 1996 Communications Decency Act. Thanks in part to increasing efforts, such as the recently introduced Protecting Americans from Dangerous Algorithms Act proposed by Rep. Tom Malinowski (D-NJ) and Rep. Anna Eshoo (D-CA), the American government is at least showing signs of turning its attention to this lack of accountability and its detrimental side effects for our democracy.

Based on past behavior, it’s certainly not altruism driving these companies to moderate their platforms now. Consider the 2019 massacre at two mosques in Christchurch, New Zealand, which was streamed live on Facebook and left 51 people dead and another 40 wounded. It was the public outcry following that tragedy that sparked action from Facebook. Social media platforms are increasing content moderation efforts to protect their businesses and to avoid government or independent regulation. But should public pressure and waves of PR crises be the principal drivers of platform accountability? Hopefully, the recent storming of the Capitol building makes the answer a clear and definitive “absolutely not.” Leadership and resilience mean doing challenging things in challenging times, leading the way. Choosing to ban specific users or restrict their inflammatory content once the damage has been done is simply not an effective mitigation. It never will be.

If we are to effectively tackle online extremism and the incitement, hate speech, disinformation, violence, and other negative consequences it all too often fosters, we cannot consider reactive content moderation by private companies sufficient to diminish the threat these issues pose online and offline. Yet exposing and describing this phenomenon, the growing echo chambers of extremist views and those being radicalized by them, is only a first step. We know these individuals are slowly poisoning our society and communities. We know they are increasingly turning online hatred and incitement into real-world harassment, radicalization, threats, and violence. The events of January 6, 2021 are but one egregious exhibit in a long list of real-world tragedies confirming these facts. It is therefore crucial to develop a multifaceted counterstrategy to mitigate and combat the dispersal of these threats.

First, we must be more proactive and strategic, and involve the right people. Specifically, the conversation needs to include regulators who are knowledgeable about the internet and social media; researchers who focus on the intersection of technology, communications, and radicalization; law enforcement officers, who are often the ones left to respond when incitement and toxicity mutate into actual violence; members of civil society involved in rehabilitating extremists and those at risk; and the social media companies themselves, which can provide expertise on their platforms. This community of interest is critical and foundational to building a comprehensive, cross-community solution to redress online hate.

Moreover, conversations to address these issues should be responsive and strategic; they should be immediate, frequent, and ongoing, not limited to a regular cadence of semiannual meetings with lofty theoretical objectives. At the same time, the pendulum of regulation should not swing wholly in the opposite direction of what we see today, with moderation becoming the sole responsibility of the current federal government apparatus. Rather, we recommend forming a diverse community of interest, composed of federal, state, and local officials and private citizens from various backgrounds, to develop policies for and oversee activity on social media platforms that falls outside the protections of the First Amendment. The intention would be for this body to act as an independent agency protecting what is arguably the largest marketplace of ideas, much as the US Securities and Exchange Commission does with our laissez-faire financial markets.

Given the unprecedented threat environment that social media and its interconnectedness have brought about, this community of interested individuals and agencies should be a largely dedicated body of diverse experts capable of developing and agreeing to independent policies (i.e., regulations) that can drive change and enforce accountability comprehensively; investigating claims of misuse and abuse of platforms, and by platforms; tracking and cataloging threat-related trends within the social media environment; and otherwise anticipating and watching for the evolution of online tactics by ill-intended actors.

Read more expert-driven national security insight, perspective and analysis in The Cipher Brief

 

