The Ethics of AI in Modern Warfare: Balancing Innovation with Moral Accountability

by Nimit, April 6th, 2024

Too Long; Didn't Read

This article delves into the ethical dilemmas posed by the use of AI in warfare, examining the implications of autonomous weapons systems and their impact on military strategies. It weighs the benefits of increased precision and reduced human casualties against the moral concerns of delegating life-and-death decisions to machines, and the challenges of adhering to Just War Theory principles. The piece also discusses international efforts for regulation and accountability in AI-driven military applications.

Artificial Intelligence (AI) continues to develop as a transformative force across many spheres of life, already beginning to revolutionize industries and reshape the way we live and work. The topic of AI in warfare will require increasing attention from governments, policymakers, and international organizations. A large part of this is due to significant advancements in the development of autonomous weapons systems (AWS), which use algorithms to operate independently and without human supervision on the battlefield. More broadly, AI in its many forms has the potential to enhance a range of military activities, from the likes of robotics and weaponry to intelligence gathering and decision-making.

With such a diversity of potential applications comes a unique set of ethical dilemmas. The benefits of AI in warfare include increased precision, reduced human casualties, and even deterrence against entering armed conflict in the first place, akin to the threat of nuclear war. However, realizing these benefits would mean giving machines the ability to make deliberate life-and-death decisions, blurring the lines of accountability and possibly violating the fundamental principles of morality in warfare.

A Brief Overview of AI in Warfare

As the Stockholm International Peace Research Institute outlines, AI has become a crucial part of military strategies and budgets, contributing to the wider ‘arms race’[1]. Combined with the likes of nuclear and atomic threats, this means geopolitics must question the ethics of the continued weaponization of technology. Some believe these advancements will ultimately lead to zero-sum thinking dominating world politics. This logic is not new; Alfred Nobel hoped the destructive power of dynamite would put an end to all wars[2].

AI has already started to be incorporated into warfare technology such as drone swarms, guided missiles, and logistical analysis. Autonomous systems have featured in defensive weaponry for even longer, e.g. anti-vehicle and anti-personnel mines. Future developments will continue to pursue increasing levels of autonomy: the US is testing AI bots that can fly a modified version of the F-16 fighter jet, Russia is testing autonomous tanks, and China is developing its own AI-powered weapons[3].

The goal is to protect human life by continuing to mechanize and automate battlefields. “I can easily imagine a future in which drones outnumber people in the armed forces pretty considerably,”[3] said Douglas Shaw, senior advisor at the Nuclear Threat Initiative. Just as lives were once saved by taking soldiers off the ground and putting them in planes armed with missiles, militaries now hope AI will spare even more human life from their forces.

Moral Implications of AI in Warfare

This sounds great so far. Save lives by using AI to direct drones. Save lives by using AI to launch missiles. The difference between this technological jump in warfare and past innovations is the lack of human input in decision-making. With AWS and lethal autonomous weapons systems (LAWS), we are handing the power to kill a human being over to an algorithm that has no intuitive humanity.

Several ethical, moral, and legal issues arise here.

Is it fair that human life should be taken in war without another human being on the other side of that action? Does the programmer of an algorithm in a LAWS have the same responsibility in representing their country as a fighter pilot, and/or the same right to contribute to taking enemy life?

As with the ethical dilemmas surrounding autonomous vehicles[4], is it morally justifiable to delegate life-and-death decisions to AI-powered algorithms? From a technological point of view, this will depend in part on the transparency of the programming of AWS: the training process, the datasets used, coded preferences, and errors like bias in these models. Even if we reach an adequate level of accuracy and transparency, should AWS and LAWS be considered moral in warfare?

Moral Implications of Just War Theory

Just War Theory, rooted in the writings of St Augustine and formalized by Thomas Aquinas in the 13th century[5], evaluates the morality of warfare and ethical decision-making in armed conflict. Across guidelines for jus ad bellum (justice of war) and jus in bello (justice in war), the most notable considerations are:

  • Proportionality: The use of force must be proportional to the objective being pursued and must not cause excessive harm or suffering relative to the anticipated benefits.
  • Discrimination: Also known as non-combatant immunity, this principle requires that combatants distinguish between combatants and non-combatants, and only target the former while minimizing harm to the latter.

It could be argued that the use of AI-powered weapons and LAWS does not guarantee adherence to these conventions.

On proportionality, AI-backed weaponry can deliver force with greater speed, power, and precision than ever before. Would this level of force necessarily match the threat posed or the military objective, especially if used against a country with less technologically advanced weaponry? Similarly, what if a LAWS is fed erroneous intel, or hallucinates and produces an inaccurate prediction? Either could lead to the execution of unnecessary and disproportionate military force.

On the point of discrimination, these technologies are not 100% accurate. When firing a missile at an enemy force, what happens if facial recognition[6] technologies cannot distinguish civilians from combatants? This would undermine the moral distinction between legitimate military targets and innocent bystanders.

Case Study

A Panel of UN Experts reported the possible use of a LAWS, the STM Kargu-2, in Libya in 2020, deployed by the Turkish military against the Haftar Affiliated Forces (HAF)[7]. Described as being “programmed to attack targets without requiring data connectivity between the operator and the munition”[8], the drone units were eventually neutralized by electronic jamming. The involvement of this remote air technology nonetheless changed the tide of what had previously been “a low-intensity, low-technology conflict in which casualty avoidance and force protection were a priority for both parties”[7].

While the attacks caused significant casualties, it is not clear whether the unmanned drones caused any fatalities[8]. Even so, the case highlights the problems raised by the unregulated, unmanned use of combat aerial vehicles and drones.

HAF units were not trained to defend against this form of attack, had no protection from the aerial strikes (which occurred even while the drones were offline), and continued to be harassed by the LAWS even in retreat. This alone begins to breach the principle of proportionality, all the more so considering that the STM Kargu-2s changed the dynamic of the conflict. Reports go so far as to suggest that “the introduction by Turkey of advanced military technology into the conflict was a decisive element in the… uneven war of attrition that resulted in the defeat of HAF in western Libya during 2020”[7].

International Cooperation and Regulation of AI in Military Applications

Since 2018, UN Secretary-General António Guterres has maintained that LAWS are both politically and morally unacceptable[9]. In his 2023 New Agenda for Peace, Guterres called for this stance to be formalized and actioned by 2026. Under it, he suggests a complete ban on AWS that function without human oversight or fail to comply with international law, alongside regulation of all other AWS.

This type of international cooperation and regulation will be necessary to overcome the ethical concerns discussed above. For now, the use of AWS without human oversight poses the most immediate problem. The lack of a human decision-maker creates a gap in responsibility: without a chain of command, who takes responsibility for the malfunctioning or general fallibility of an AI-powered system?

Moreover, there would be an ensuing lack of accountability. In traditional warfare, defined moral principles such as Just War Theory provide a framework for assigning culpability; for actions taken by autonomous systems, there would be no culpable agent.

Finally, while there are benefits to increasingly adopting AI in military applications, how these technologies end up being used will determine whether they become a utopian solution or just another contribution to the already politically destabilizing arms race.

Therefore, continued discussion around international, legally binding frameworks for ensuring accountability in AI warfare will arguably be one of the most crucial areas of AI regulation in the near future.


  1. Weaponizing Innovation: Mapping Artificial Intelligence-enabled Security and Defence in the EU
  2. War and Technology - Foreign Policy Research Institute
  3. How AI Will Revolutionize Warfare
  4. Ethical AI and Autonomous Vehicles: Championing Moral Principles in the Era of Self-Driving Cars | HackerNoon
  5. Just War Theory | Internet Encyclopedia of Philosophy
  6. Bias in Facial Recognition Tech: Explore How Facial Recognition Systems Can Perpetuate Biases | HackerNoon
  7. Libya, The Use of Lethal Autonomous Weapon Systems | How does law protect in war? - Online casebook
  8. The Kargu-2 Autonomous Attack Drone: Legal & Ethical Dimensions - Lieber Institute West Point
  9. Lethal Autonomous Weapon Systems (LAWS) – UNODA