By Ruhi Kumar
The prevalence of terrorist organizations on social media generates a host of new challenges for online platforms, policymakers, and governments. The global, highly accessible, and fast-evolving nature of social media makes it a particularly potent vehicle for terrorist organizations to promote their ideologies. While there is growing demand for responsible and accountable online governance, the lack of effective content moderation policies, transparency, and cultural understanding continues to allow harmful content to spread on social media platforms. To meaningfully tackle these issues, it is crucial that national governments and lawmakers consider a combination of policy and legislative solutions.
Although the terms of service of many leading social media companies forbid terrorist content, weak content moderation processes fail to turn that policy into practice. For instance, Facebook's Community Standards state that organizations engaged in terrorist activity are not allowed on the platform; however, what counts as 'terrorist content' under Facebook's policy is a highly subjective question over which the platform retains complete discretion. Additionally, "by its own admission, Facebook continues to find it challenging to detect and respond to hate speech content across dynamic speech environments, multiple languages, and differing social and cultural contexts."
In Myanmar, for instance, the lack of content moderators who speak local languages and understand the relevant cultural contexts has allowed terrorist content to proliferate. According to a United Nations investigation, Facebook's platform was utilized to "incite violence and hatred against" ethnic minorities in Myanmar, and over 700,000 members of the Rohingya community fled the country in the face of a military crackdown.
Despite being aware of these repercussions, Facebook neglected to deploy the necessary resources to combat hate speech: at the time, the company employed only two Burmese speakers tasked with reviewing problematic posts. In some of the world's most volatile regions, then, terrorist content and hate speech escalate because social media platforms fail to commit the resources needed to moderate content written in local languages.
In Myanmar, this lack of oversight allowed inflammatory content to flourish and harm local minority populations. To address this problem, social media platforms should not only hire local content moderators but also consider developing partnership programs with local individuals and NGOs. Such a partnership program would create an effective communication channel through which members of the local population could report hate speech and terrorist content directly, enabling content moderators to address harmful content and mitigate potential damage more quickly.