
Social Media—A Tool for Terror?

By Ruhi Kumar

The growing use of social media by terrorist organizations generates a host of new challenges for online platforms, policymakers, and governments. Specifically, the global, highly accessible, and fast-evolving nature of social media makes it a particularly attractive platform for terrorist organizations to promote their ideologies. While there is a growing demand for responsible and accountable online governance, the lack of effective content moderation policies, transparency, and cultural understanding continues to allow harmful content to spread on social media platforms. To meaningfully tackle these issues, it is crucial that national governments and lawmakers consider a combination of policy and legislative solutions.

Although the terms of service of many leading social media companies forbid terrorist content, ineffective content moderation processes mean that policy rarely translates into practice. For instance, Facebook's Community Standards state that organizations engaged in terrorist activity are not allowed on the platform; however, what qualifies as 'terrorist content' under Facebook's policy is a highly subjective question over which the platform retains complete discretion. Additionally, "by its own admission, Facebook continues to find it challenging to detect and respond to hate speech content across dynamic speech environments, multiple languages, and differing social and cultural contexts."

In Myanmar, for instance, the lack of content moderators who speak local languages and understand the relevant cultural contexts has allowed terrorist content to proliferate. According to a United Nations investigation, Facebook's platform was used to "incite violence and hatred against" ethnic minorities in Myanmar, and more than 700,000 members of the Rohingya community fled the country in the ensuing military crackdown.

Despite being aware of these repercussions, Facebook neglected to deploy the resources necessary to combat hate speech: at the time, the company employed only two Burmese speakers tasked with reviewing problematic posts. In some of the world's most volatile regions, then, terrorist content and hate speech escalate because social media platforms fail to devote the resources needed to moderate content written in local languages.

In Myanmar, this lack of oversight allowed inflammatory content to flourish and harm local minority populations. To address this issue, social media platforms should not only hire local content moderators but also consider developing partnership programs with local individuals and NGOs. A local partnership program would create an effective communication channel through which members of the local population could report hate speech and terrorist content directly, enabling content moderators to address harmful material and mitigate potential damage more quickly.


Deepfakes Perpetuating Disinformation in America

By Ruhi Kumar

In the report Deepfake, Cheapfake: The Internet's Next Earthquake?, the DeepTrust Alliance describes deepfakes as "portending serious consequences" for society, highlighting the social, political, and emotional toll they take on individuals, corporations, and governments. As the issue of deepfakes permeates many aspects of society, legislators and policymakers have long struggled to come up with appropriate solutions and safeguards. Id.

Given the vast scope of this issue, it is imperative that stakeholders adopt a collaborative and holistic solution to curb the use of deepfakes; only then will deepfake misinformation be addressed in a meaningful way. To ensure long-term success in curbing deepfake misinformation, a combination of technological tools and processes, legislative policy, and consumer education campaigns should be adopted.

Deepfakes are a "potential new frontier of disinformation warfare" and misinformation that requires prompt policy action. Tom Dobber & Nadia Metoui, Do (Microtargeted) Deepfakes Have Real Effects on Political Attitudes?, 26 The Int'l J. of Press/Pol. 71 (2020). This has been particularly evident in political elections, since to an untrained eye a deepfake may be difficult to distinguish from a legitimate video. For instance, in the 2020 Indian elections, the Delhi BJP partnered with a political communications firm to create campaigns using deepfakes to dissuade Delhi's large Haryanvi-speaking migrant worker population from voting for the rival political party. These deepfakes were distributed across 5,800 WhatsApp groups in Delhi and reached approximately 15 million people. Id. Incidents like these prompt questions about the "legitimacy of democratic elections, the quality of public debate and the power of citizens." Dobber & Metoui, supra.

As such, in order to remedy the potentially corrosive impact deepfakes could have on an already fragile political landscape, governments should adopt legislation that aims to curb misinformation and ensure the safety of their citizens. Several U.S. states, such as California and Texas, have passed laws that criminalize publishing and distributing deepfake videos intended to influence the outcome of an election. While the enactment of these laws is a step in the right direction, it does little to create long-term change given the vast and cross-border nature of online platforms. Even with this newly enacted state legislation, victims continue to encounter hurdles in identifying the exact location of a deepfake's creator.

Additionally, in many cases the creator of the deepfake may be located outside of the state's jurisdiction, rendering the legislation inapplicable, leaving consumers susceptible to misinformation, and depriving victims of adequate redress. To tackle this issue, legislators should adopt a federal approach, which would allow for more cohesive handling of deepfake cases and facilitate more impactful remedies for victims.


Active Cyber Measures: Reviving Cold War Debunking and Deterrence Strategy

By Nicolas Aalberg

Department of Justice and National Intelligence Council reports on active cyber measures (ACMs) carried out by U.S. adversaries on social media document staggering manipulation of American conversations, journalism, and electoral processes. Unlike Cold War active measures conducted through human intelligence (HUMINT) operations, creating or manipulating an online intelligence asset requires exponentially fewer resources and yields results at a far greater scale. The U.S. responded to Cold War active measures, however, through defensive counterintelligence and misinformation-debunking programs and through offensive, active HUMINT deterrents, and that same strategy can be used to combat ACMs today.

The Intelligence Community (IC) must work defensively using signals intelligence (SIGINT) and open-source intelligence (OSINT) to detect and neutralize enemy social media accounts, and Congress must create a bipartisan committee (the “Committee”) to communicate declassified information to the American public to expose manipulation of online conversations. At the same time, USCYBERCOM and CIA must work in tandem offensively through a new blend of cyber warfare and HUMINT to deter ACM proliferation and respond in kind, and once again set global military and intelligence standards on U.S. terms.

I.   Defensive Posture: Congress Must Create a Bipartisan Committee to Counter Active Cyber Measures

Given that U.S. adversaries are successfully tearing at the fabric of American political conversation, the U.S. needs to adopt a Cold War-era defensive posture consisting of counterintelligence efforts and increased transparency with the electorate about manipulated conversations. Historically, CIA has collaborated with FBI on counterintelligence efforts to remove compromised and planted HUMINT assets. NSA, CIA, and the Office of the Director of National Intelligence (ODNI) must similarly identify active personas and botnets through a combination of SIGINT and OSINT and collaborate with the social media industry to remove these accounts.
