Social Media—A Tool for Terror?


The prevalence of terrorist organizations using social media generates a host of new challenges for online platforms, policymakers, and governments. Specifically, the global, highly accessible, and fast-evolving nature of social media makes it a particularly attractive platform for terrorist organizations to promote their ideologies. While there is growing demand for responsible and accountable online governance, the lack of effective content moderation policies, transparency, and cultural understanding continues to allow harmful content to spread on social media platforms. To meaningfully tackle these issues, it is crucial that national governments and lawmakers consider a combination of policy and legislative solutions.

Although the terms of service of many leading social media companies stipulate that terrorist content is forbidden, the absence of effective content moderation processes means that policy rarely translates into practice. For instance, Facebook’s Community Standards state that organizations engaged in terrorist activity are not allowed on the platform; however, what is classified as ‘terrorist content’ under Facebook’s policy is a highly subjective question over which the platform retains complete discretion. Additionally, “by its own admission, Facebook continues to find it challenging to detect and respond to hate speech content across dynamic speech environments, multiple languages, and differing social and cultural contexts.”

In Myanmar, for instance, the lack of content moderators who speak local languages and understand the relevant cultural contexts has allowed terrorist content to proliferate. According to a United Nations investigation, Facebook’s platform was used to “incite violence and hatred against” ethnic minorities in Myanmar, and the ensuing military crackdown drove more than 700,000 members of the Rohingya community to flee the country.

Despite being aware of these repercussions, Facebook neglected to deploy the necessary resources to combat hate speech: at the time, only two Burmese speakers employed by Facebook were tasked with reviewing problematic posts. It can therefore be argued that in some of the world’s most volatile regions, terrorist content and hate speech escalate because social media platforms fail to commit the resources needed to moderate content written in local languages.

In Myanmar, this lack of policy oversight allowed inflammatory content to flourish, harming local minority populations. To address this issue, social media platforms should not only hire local content moderators but also consider developing partnership programs with local individuals and NGOs. A local partnership program would create an effective communication channel through which members of the local population could report hate and terrorist speech directly, enabling content moderators to address harmful content and mitigate potential damage more quickly.

The lack of effective mechanisms to curb terrorist content is further compounded by the lack of transparency in how hate speech and anti-terrorism policies are enforced. For instance, in response to the ongoing conflict between Russia and Ukraine, Facebook announced that it would temporarily modify its hate speech policy to allow Ukrainian users to voice their opposition to Russia’s attack on Ukraine. This prompted Russian authorities to open a criminal case, causing Facebook to retreat and clarify that the change was never intended to condone violence or terrorism against Russians. This episode not only illustrates the significant influence social media platforms wield in geopolitics but also highlights their role in both undermining and facilitating online speech to advance a specific belief or agenda. Additionally, the lack of transparency about the circumstances under which Facebook can modify or temporarily suspend its Community Guidelines demonstrates the “highly subjective” nature of Facebook’s policies that “open the door to biased enforcement.”

Instead of relying on social media companies to self-regulate their business practices, a legislative approach should be adopted. Given the global threat posed by terrorist organizations, governments have a large stake in ensuring better responses to these organizations’ use of social media. For instance, the EU Regulation on Terrorist Content imposes fines on social media companies that fail to remove hate or terrorist speech within a specified time frame. However, while the threat of regulation may be sufficient to hold certain social media platforms accountable, government regulation would likely place smaller platforms with limited resources at a disadvantage.

Additionally, given the international nature of social media, there is a risk that multiple regulatory regimes across various jurisdictions would create a fragmented regulatory landscape, further complicating enforcement. It is also unlikely that national governments have the technical expertise and experience to remove terrorist content as effectively as the private sector. Instead, government regulators should collaborate with social media companies to ensure there is a unified definition across all platforms of what constitutes “terrorist activity,” “terrorism,” and “hate speech.”

A unified definition of these key terms, together with a list of active terrorist organizations provided by national governments, would reduce uncertainty when determining what constitutes a violation of policy. Moreover, a cohesive approach would ensure that a single platform’s own religious, economic, and political affiliations do not unjustly silence important voices.

Overall, although there is no clear roadmap as to who should monitor online terrorist and hate speech or how it should be monitored, the widespread and far-reaching impacts of such content mean that effective content moderation strategies and policies remain paramount. While it may be unfair to impose a regulatory approach on smaller social media companies, it is crucial that large platforms such as Facebook and Twitter deploy the necessary human and technological resources to monitor, regulate, and mitigate the very real human harm that continues to be inflicted through their platforms.

By Ruhi Kumar

Ruhi Kumar is a graduate of Georgetown Law's Technology Law and Policy LL.M. and is currently working as In-house Legal and Compliance Counsel in Washington, DC.
