Is International Humanitarian Law Keeping Up with Autonomous Weapons Development?

Artificial intelligence (AI) has advanced significantly across many sectors, including the military. The integration of AI into military technology has given rise to autonomous weapons systems, prompting concerns about their compatibility with international humanitarian law (IHL).

The use of autonomous weapons systems, also known as Lethal Autonomous Weapons Systems (LAWS), has been a growing trend in military technology since the twentieth century. These weapons harness AI to detect and engage targets with minimal human intervention. The development of autonomous submarines, unmanned combat aerial vehicles, and AI-equipped tanks attests to the increasing sophistication of AI in military technology.

Autonomous weapons systems pose significant challenges under IHL, the body of law governing armed conflict. A primary concern is whether such systems can comply with the principle of distinction, which requires parties to a conflict to distinguish between civilian and military targets. Some argue that semi- or fully autonomous weapons may lack the capacity to make this vital distinction, potentially resulting in civilian casualties. Anti-personnel landmines, which operate autonomously once deployed and fail to differentiate between civilians and military personnel, were prohibited on this basis under the 1997 Ottawa Treaty.

The Martens Clause, which originated in the Hague Conventions and is restated in Additional Protocol I to the Geneva Conventions, plays a crucial role in addressing new and emerging technologies in warfare. It provides that, in situations not covered by existing treaty law, civilians and combatants remain under the protection of the principles of humanity and the dictates of public conscience, enabling IHL to anticipate and restrain weapons that contravene its fundamental tenets. The prohibition of blinding laser weapons under Protocol IV to the Convention on Certain Conventional Weapons in 1995, for instance, was motivated by precisely such ethical concerns.

The expansion of the market for lethal autonomous weapons systems has prompted calls for an international ban on these technologies. Critics argue that IHL's inability to regulate and restrict the use of LAWS undermines its objectives of safeguarding civilians and upholding the principles of humanity. The ethics of entrusting machines with life-and-death decisions, particularly in warfare, remains a contentious issue.

Some legal scholars advocate a proactive governance approach to the development and deployment of LAWS. Under this approach, IHL would regulate weapons at the development stage, potentially preventing humanitarian crises before they arise. However, debates persist over the scope of IHL's jurisdiction and its role in shaping the policies and decisions of nations developing such technologies.

The rapid pace of technological advancement is outstripping the rate at which international law adapts. The influence of powerful nations on decision-making processes, and their vested interests in military advancement, further complicates the adoption of potentially life-saving rules. Nations such as the UK, with a strong emphasis on innovation in AI weaponry, are hesitant to accept restrictions that could impede progress.

The accelerating development of AI and autonomous weapons systems raises the question of whether international humanitarian law is adequately equipped to regulate these technologies, and the law may well struggle to keep pace as they continue to progress.

In conclusion, the proliferation of AI in military technology, particularly in autonomous weapons systems, has underscored the need to ensure their compliance with international humanitarian law. As the legal and technological landscapes continue to evolve, it remains to be seen whether the law can effectively moderate the impact of AI in armed conflict.

Harriet Hunter, a first-year LLB (Hons) student at the University of Central Lancashire, possesses a keen interest in criminal law and the legal implications of technology, particularly AI.