In the relatively near future, the United States and other countries are likely to develop varying levels of artificial intelligence (AI) and integrate it into autonomous weapons. Significant voices, spearheaded by the Campaign to Stop Killer Robots, advocate for a preemptive ban on these weapons.
Opponents of lethal autonomous weapon systems (LAWS) argue that it is unethical to allow a machine to decide when to kill and that AI will never be able to comply with the obligations of International Humanitarian Law (IHL). These campaigns have prompted discussions in the international community about developing a legal framework for LAWS. While a requirement of meaningful human control (MHC) has gained traction within certain UN bodies, the United States has objected to the standard, arguing that its ambiguity would further obscure the challenges posed by LAWS.
Maj. Matthew Miller’s article seeks to resolve the ambiguity of MHC and to offer a workable definition of the standard. Miller reviews the ways humans can interact with autonomous systems, how humans are placed in a system’s decision loop, and the provisions of IHL relevant to LAWS. He then uses the lens of command responsibility to demonstrate how MHC can be applied to the design and use of LAWS, concluding that this approach can address concerns that the use of LAWS will prevent accountability for IHL violations.
Major Matthew Miller is a Judge Advocate in the U.S. Army and currently serves as the Chief of the Operational Law Branch in the National Security Law Division of the Army’s Office of The Judge Advocate General. Major Miller holds a Master of Laws (LL.M.) in National Security Law from the Georgetown University Law Center and an LL.M. in Military Law from The Judge Advocate General’s Legal Center and School. The views expressed in the paper are the author’s alone and do not necessarily reflect those of the author’s employer.