Tag Archives: Artificial Intelligence

Deepfakes Perpetuating Disinformation in America

By Ruhi Kumar

In the report Deepfake, Cheapfake: The Internet’s Next Earthquake?, the DeepTrust Alliance describes deepfakes as “portending serious consequences” for society, highlighting the social, political, and emotional toll deepfakes take on individuals, corporations, and governments. As deepfakes permeate many aspects of society, legislators and policymakers have long struggled to devise appropriate solutions and safeguards. Id.

Given the vast scope of this issue, stakeholders must adopt a collaborative and holistic solution to curb the use of deepfakes; only then will deepfake misinformation be addressed in a meaningful way. To ensure long-term success in curbing deepfake misinformation, a combination of technological tools and processes, legislative policy, and consumer education campaigns should be adopted.

Deepfakes are a “potential new frontier of disinformation warfare” that requires prompt policy action. Tom Dobber & Nadia Metoui, Do (Microtargeted) Deepfakes Have Real Effects on Political Attitudes?, 26 The Int’l J. of Press/Pol. 71 (2020). This has been particularly evident in political elections, since, to an untrained eye, a deepfake may be difficult to distinguish from a legitimate video. For instance, in the 2020 Indian elections, the Delhi BJP partnered with a political communications firm to create campaigns using deepfakes to dissuade a large Haryanvi-speaking migrant worker population in Delhi from voting for the rival political party. These deepfakes were distributed across 5,800 WhatsApp groups in Delhi and reached approximately 15 million people. Id. Circumstances like these prompt questions about the “legitimacy of democratic elections, the quality of public debate and the power of citizens.” Dobber & Metoui.

As such, to remedy the corrosive impact deepfakes could have on an already fragile political landscape, governments should adopt legislation that curbs potential misinformation and ensures the safety of their citizens. Several U.S. states, such as California and Texas, have passed laws criminalizing the publishing and distribution of deepfake videos intended to influence the outcome of an election. While the enactment of these laws is a step in the right direction, it does little to create long-term change given the vast and cross-border nature of online platforms. Even with this newly enacted state legislation, victims continue to encounter hurdles in identifying the exact location of the deepfake creator.

Additionally, in many cases the creator of the deepfake may be located outside the state’s jurisdiction, rendering the legislation inapplicable, leaving consumers susceptible to misinformation, and depriving victims of adequate redress. To tackle this issue, legislators should adopt a federal approach, which would allow for more cohesive handling of deepfake cases and facilitate more impactful remedies for victims.


Layered Opacity: Criminal Legal Technology Exacerbates Disparate Impact Cycles and Prevents Trust

Predictive policing tools, used widely by law enforcement agencies, attempt to identify where crime will happen before it does. These analyses determine police deployment and, ultimately, arrest data. In this article, Ben Winters highlights how risk assessment tools use that data, combined with various other inputs, to determine detention, bail, sentencing, parole, and more, giving rise to serious transparency and oversight concerns.

In particular, Winters highlights the urgency of these concerns given the tools’ operation in a system that severely disadvantages already marginalized communities. Winters argues that the relatedness of these tools is under-recognized and could be more strongly reflected in advocacy and regulatory efforts. The article explains the harm compounded by the tools and explores regulatory options both within traditional government levers and through the approaching regulation of data and data practices.

Machine Learning, Artificial Intelligence, and the Use of Force by States

Taking an international law perspective, Ashley Deeks, Noam Lubell, and Daragh Murray highlight the legal, policy, and ethical challenges that will arise as governments inevitably begin to employ artificial intelligence and machine learning algorithms to inform their use-of-force decisions. The authors identify critical questions states should contemplate before developing such algorithms, underscoring that machine learning algorithms could both improve the accuracy of use-of-force decision making and present negative consequences for states. They also recommend prophylactic measures for states as they develop and eventually deploy these tools.