A Multiverse of Metaverses

By Sadev Parikh

Eric Ravenscraft’s Wired article shows us the difficulty of defining the “metaverse,” which may be better understood through the lens of Wittgenstein’s idea of family resemblances than through any attempt at a clear-cut definition. The metaverse can be seen as a concept built from family resemblances that include elements of virtual reality, augmented reality, and haptic feedback. While these technical elements may ground the concept, individual metaverses could differ along parameters such as the centralization of power, the degree of financialization, and the degree of anonymity afforded to users. Armed with this framework, we might predict how the metaverse may manifest in the United States.

Considering centralization of power, we see two competing visions: one concentrated around Facebook (i.e., Meta), and the vision of a “Web 3” that might include worlds like Decentraland built around principles of decentralized decision-making and power enabled by blockchain technology.

A Facebook-driven metaverse could become the dominant mode, simply through its incumbent network effects and persistence as a premier destination for advertisers, as well as customer lock-in stemming from adjacent services (such as Messenger, Groups) that are increasingly essential to participating in modern life. The “Future Threats to Digital Democracy” report captures internet harms directly tied to the influence of Facebook and its business model on the internet.

Digitally impaired cognition is driven by social media content algorithms “engineered for virality, sensationalism, provocation and increased attention.” Reality apathy comes from the diffusion of re-shared negative content that is upranked by Facebook’s algorithms. Given Facebook’s need to monetize, it is easy to imagine a Facebook-driven metaverse replicating these same features.

Only now, Facebook’s paradigm may mediate not only our cognitive lives via smartphones but also our physical interactions, from mundane moments like work meetings to intimate ones like hugs enabled by haptic feedback suits. That said, perhaps Libra’s failure and Facebook’s February stock plummet portend a future where Mark Zuckerberg’s dreams no longer translate inevitably into our reality.

Social Media—A Tool for Terror?

By Ruhi Kumar

The prevalence of terrorist organizations using social media generates a host of new challenges for online platforms, policymakers, and governments. Specifically, the global, highly accessible, and fast-evolving nature of social media provides a particularly lucrative platform for terrorist organizations to promote their ideologies. While there is a growing demand for responsible and accountable online governance, the lack of effective content moderation policies, transparency, and cultural understanding continues to facilitate harmful content on social media platforms. To meaningfully tackle these issues, it is crucial that national governments and lawmakers consider a combination of policy and legislative solutions.

Although the terms of service of many leading social media companies stipulate that terrorist content is forbidden, the lack of effective content moderation processes means that policy is rarely turned into practice. For instance, Facebook’s Community Standards state that organizations engaged in terrorist activity are not allowed on the platform; however, what counts as ‘terrorist content’ under Facebook’s policy is a highly subjective question over which the platform has complete discretion. Additionally, “by its own admission, Facebook continues to find it challenging to detect and respond to hate speech content across dynamic speech environments, multiple languages, and differing social and cultural contexts.”

For instance, in Myanmar, the lack of content moderators who speak local languages and understand the relevant cultural contexts has allowed for terrorist content to proliferate. According to a United Nations investigation, Facebook’s platform was utilized to “incite violence and hatred against” ethnic minorities in Myanmar, leading to over 700,000 members of the Rohingya community fleeing the country due to a military crackdown.

Despite being aware of these repercussions, Facebook neglected to deploy the necessary resources to combat hate speech: at the time, only two Burmese speakers were employed at Facebook to review problematic posts. Hence it can be argued that in some of the world’s most volatile regions, terrorist content and hate speech escalate because social media platforms fail to devote the resources needed to moderate content written in local languages.

In Myanmar, this lack of policy oversight caused inflammatory content to flourish and harm local minority populations. To address this issue, social media platforms should not only hire local content moderators but also consider developing a partnership program with local individuals and NGOs. Developing a local partnership program would create an effective communication channel wherein members of the local population could report hate and terrorist speech directly, thereby enabling social media content moderators to address harmful content and mitigate potential damage more quickly.
