Social Media on the Political Agenda


Bottom Line Up Front

  • Later this month, French President Emmanuel Macron and New Zealand Prime Minister Jacinda Ardern will co-host a conference with the stated objective of regulating violent extremist content online.
  • The conference is in response to the live-streamed and downloadable March 15 terrorist attacks at two mosques in Christchurch, New Zealand.
  • A fierce debate continues in Western democracies over the extent to which social media should be regulated, and whether platforms should police themselves or governments must step in.
  • Well-intended efforts to restrict inflammatory and even malicious content tend to produce unintended consequences and second- and third-order effects.

The spread of violent extremist content online continues to pose a serious challenge for Western democracies. On May 15, New Zealand Prime Minister Ardern and French President Macron will co-host a conference in Paris called ‘Tech for Humanity,’ which will include technocrats and officials from the Group of Seven (G7). The Ardern-Macron initiative will attempt to bring attendees, including tech giants like Facebook and Twitter, together with government representatives to forge an agreement dubbed the ‘Christchurch Call,’ in response to the social media element of the terrorist attacks in New Zealand in mid-March. The initiative seeks to have conference parties agree that social media was ‘used in an unprecedented way as a tool to promote an act of terrorism and hate’ and then to ensure that a live-streamed attack can never happen again. Beyond this specific matter, the conference will address the complicated issue of social media platforms continuing to host violent extremist content.

Blocking the live-streaming transmission of a violent crime in progress is not controversial from a free speech perspective, but it remains uncertain exactly how effective companies like Facebook can be in preventing violent live-streams from occurring in the first place. Further, once the images are uploaded, they are quickly disseminated and downloaded, meaning they are nearly impossible to contain completely. Companies can remove sites that host that content, but proactive action is difficult, especially given current laws.

The particular matter of live-streamed violent content is part of a much broader conversation about the use of social media by violent extremist groups to spread propaganda to their followers and encourage new people to adopt their ideology. Whether private companies or governments should be in control of social media regulation is a matter of ongoing debate. In the U.S., private sector companies like Facebook and Twitter are not subject to significant government regulation. Companies have their own Terms of Use and can ban or suspend accounts for a range of reasons. But as the spread of dangerous propaganda and ideology continues, with disastrous consequences, the appetite to impose government regulation grows, even in the U.S. And as social media platforms increasingly operate as news providers, they should be subject to regulation against blatantly misleading information and lies in the way that television and print media are.

While much attention and many resources have been devoted to addressing the proliferation of violent jihadist content online, only now is attention being paid to how white supremacist groups are using these same platforms extremely effectively. Censoring this type of ideology, which has a long history in the West and particularly in the U.S., has found less critical support than taking down violent jihadist content has. Nevertheless, many believe it remains imperative for governments and private firms alike to aggressively address this growing issue.
