Tag Archives: Social Media

TikTok v. Trump and the Uncertain Future of National Security-based Restrictions on Data Trade

In recent years, the bulk collection of US citizens’ personal data by foreign actors has emerged as a growing national security threat. The ability of foreign adversaries to collect, and in some cases buy outright, US person data is governed primarily by IEEPA and the CFIUS review process. Bernard Horowitz and Terence Check argue that these regulatory frameworks are ill-suited to the particular issues raised by present-day data processing technology.

The authors examine IEEPA and CFIUS in turn, explaining how each framework functions in practice and how each applies to bulk adversarial data collection. They focus particularly on the recent decision in TikTok v. Trump and how it may undermine the ability of the United States to restrict data trade on national security grounds.

Bernard Horowitz is Law Clerk for Senior Judge Mary Ellen Coster Williams of the United States Court of Federal Claims. This article does not reflect the views of the Court of Federal Claims or Judge Williams, and was written solely in the author’s personal capacity and not as part of his court-related duties.

Terence Check is Senior Counsel, Cybersecurity and Infrastructure Security Agency, Department of Homeland Security; LL.M. in Law & Government, specializing in National Security Law & Policy, American University Washington College of Law (2015); J.D., magna cum laude, Cleveland State University, Cleveland-Marshall College of Law (2014); Editor-in-Chief, Cleveland State Law Review (2013-2014). He is the author of “Turning US Vetting Capabilities and International Information-sharing to Counter Foreign White Supremacist Terror Threats” in the JNSLP Online Supplement. This article does not reflect the official position of the US government, DHS, or CISA, and all opinions expressed are solely those of the authors.

A Multiverse of Metaverses

By Sadev Parikh

Eric Ravenscraft’s Wired article shows us the difficulty of defining the “metaverse,” which may be better understood through the lens of Wittgenstein’s idea of family resemblances than through any attempt at a clear-cut definition. On this view, the metaverse is a cluster of resembling features that include elements of virtual reality, augmented reality, and haptic feedback. While these technical elements may ground the concept, individual metaverses could vary along parameters such as the centralization of power, financialization, and the degree of anonymity afforded to users. Armed with this framework, we might predict how the metaverse may manifest in the United States.

On the question of centralization of power, we see two competing visions: one concentrated around Facebook (i.e., Meta), and another, a “Web 3” vision that might include worlds like Decentraland, built around principles of decentralized decision-making and power enabled by blockchain technology.

A Facebook-driven metaverse could become the dominant mode simply through Facebook’s incumbent network effects and persistence as a premier destination for advertisers, as well as the customer lock-in stemming from adjacent services (such as Messenger and Groups) that are increasingly essential to participating in modern life. The “Future Threats to Digital Democracy” report captures internet harms directly tied to the influence of Facebook and its business model on the internet.

The report describes “digitally impaired cognition,” driven by social media content algorithms “engineered for virality, sensationalism, provocation and increased attention,” and “reality apathy,” which stems from the diffusion of re-shared negative content that Facebook’s algorithms uprank. Given Facebook’s need to monetize, it is easy to imagine a Facebook-driven metaverse replicating these same features.

Only now, Facebook’s paradigm may mediate not only our cognitive lives via smartphones but also our physical interactions, from mundane work meetings to intimate moments like hugging enabled by haptic feedback suits. That said, perhaps Libra’s failure and Facebook’s February stock plummet portend a future where Mark Zuckerberg’s dreams no longer translate inevitably into our reality.


Social Media—A Tool for Terror?

By Ruhi Kumar

Terrorist organizations’ use of social media generates a host of new challenges for online platforms, policymakers, and governments. Specifically, the global, highly accessible, and fast-evolving nature of social media makes it a particularly attractive platform for terrorist organizations to promote their ideologies. While there is growing demand for responsible and accountable online governance, the lack of effective content moderation policies, transparency, and cultural understanding continues to facilitate the spread of harmful content on social media platforms. To meaningfully tackle these issues, national governments and lawmakers must consider a combination of policy and legislative solutions.

Although the terms of service of many leading social media companies forbid terrorist content, weak content moderation processes fail to turn that policy into practice. For instance, Facebook’s Community Standards state that organizations engaged in terrorist activity are not allowed on the platform; however, what is classified as ‘terrorist content’ under Facebook’s policy is a highly subjective question over which the platform has complete discretion. Additionally, “by its own admission, Facebook continues to find it challenging to detect and respond to hate speech content across dynamic speech environments, multiple languages, and differing social and cultural contexts.”

In Myanmar, for instance, the lack of content moderators who speak local languages and understand the relevant cultural contexts has allowed terrorist content to proliferate. According to a United Nations investigation, Facebook’s platform was used to “incite violence and hatred against” ethnic minorities in Myanmar, contributing to a military crackdown that drove more than 700,000 members of the Rohingya community to flee the country.

Despite being aware of these repercussions, Facebook neglected to deploy the resources necessary to combat hate speech: at the time, it employed only two Burmese speakers tasked with reviewing problematic posts. It can thus be argued that in some of the world’s most volatile regions, terrorist content and hate speech escalate because social media platforms fail to devote the resources needed to moderate content written in local languages.

In Myanmar, this lack of oversight allowed inflammatory content to flourish and harm local minority populations. To address this issue, social media platforms should not only hire local content moderators but also consider developing partnership programs with local individuals and NGOs. Such a program would create an effective communication channel through which members of the local population could report hate speech and terrorist content directly, enabling content moderators to address harmful material and mitigate potential damage more quickly.
