The Moral Robot: 14 Moral Rules for Machines

Written by AlextheYounger | Published 2018/04/17
Tech Story Tags: artificial-intelligence | philosophy | programming | morality | law


I’m more worried about an artificial imbecile than I am about an artificial intelligence. Although we are hopelessly outmatched by a calculator in basic computation, we seem to be unrivaled in our ability to evaluate decisions and their impacts over the long run. The human brain is still the most complex machine in the known universe; it may take another millennium before we can accurately reconstruct it. Humans excel at moral decision making, something machines severely struggle with.

Drawing on my recent series of articles on reviving the study of ethics, I have, in a very Asimov-like style, picked the 14 most crucial general moral rules. This list will surely be expanded, but I think it is at least an excellent starting point for how we should eventually explain morality to a machine. In this article, I delve into the process of creating a moral general intelligence and explain the importance of a practical top-down theory of morality. The 14 rules appear at the end of this article.

What is Morality?

Over generations, a society can learn practical behaviors and come to accept that everyone should follow certain rules, even without consciously understanding why. Most people accept that there are moral rules, but few understand why everyone should follow them. Ask ten people why they should be good and odds are you’ll receive ten different answers.

Ethics is the science of practicality in human action, of choosing actions that will better ourselves and mankind in the long run. Moral intuition likely arose because cooperation is a very logical thing. You should follow general moral rules because your interests are best achieved through common means, and equally because others will then allow you the opportunity to achieve them. Morality can best be described as setting aside our human, short-sighted, immediate interests to achieve our greater, long-term interest of cooperation.

Some readers may question whether to call ethics a science, but this is largely semantics. It resembles the debate over whether to call engineering a science. Ethics bears the same relationship to psychology as engineering does to mechanics and physics. The function of these fields is to deal in a systematic way with a class of problems that need to be solved. If by science we mean a rational inquiry aiming to arrive at a systematized body of deductions and conclusions, then ethics is a science.

Human Morality vs Machine Morality

A question worth asking is whether different kinds of intelligences require different moral rules. For instance, under most circumstances it would be harmful to force a man to work against his will. A machine’s will, however, is under our direct control. Whatever intentions we have for the machine, the machine will likely view those intentions as its interests. Programmed in the right way, it doesn’t seem that we could ever deprive it of anything. Still, it will need to understand how to cooperate with humans, and its interests must coincide with our moral rules.

The Importance of the Moral Robot

Scientific and business leaders like Elon Musk and Max Tegmark have invested millions of dollars in the Future of Life Institute, an organization dedicated to the creation of safe AI. There is a wide range of possible intelligences, and if we’re not careful, we could create the most selfish being ever conceived. For more on possible AI failures, I highly recommend the work of Robert Miles, who often appears on the Computerphile YouTube channel.

How Do We Explain to a Machine What is Good?

The creation of a general intelligence will be a painstaking process of exposing the intelligence to millions of real-life scenarios. When we teach a machine to play a game like Pac-Man or Go, we give it the simple goal of earning the most points and then let it learn through thousands of trials. Eventually, it develops the most efficient algorithm for gaining the maximum number of points. To my knowledge, this point-based system is our only practical way of explaining goals to a computational machine. We will likely have to translate our moral rules into some form of point-based system.
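As a rough illustration of what that translation might look like, here is a minimal Python sketch. The `Outcome` fields, the weights, and the `score` function are all invented for this example; a real system would need far more careful definitions of what counts as an interest.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """Hypothetical summary of an action's predicted effects on human interests."""
    interests_advanced: int    # interests the action helps realize
    interests_frustrated: int  # interests the action blocks or destroys
    people_benefited: int      # distinct individuals whose interests are advanced

# Hand-chosen weights; per rule 10 below, harms weigh more heavily than benefits.
ADVANCE_WEIGHT = 1.0
FRUSTRATE_WEIGHT = 2.0
BREADTH_WEIGHT = 0.5

def score(outcome: Outcome) -> float:
    """Translate a predicted outcome into the 'points' a learner would try to maximize."""
    return (ADVANCE_WEIGHT * outcome.interests_advanced
            - FRUSTRATE_WEIGHT * outcome.interests_frustrated
            + BREADTH_WEIGHT * outcome.people_benefited)

# The agent prefers whichever candidate action scores highest.
candidates = {
    "help_one_person_a_lot": Outcome(interests_advanced=3, interests_frustrated=0, people_benefited=1),
    "help_many_people_slightly": Outcome(interests_advanced=2, interests_frustrated=0, people_benefited=10),
}
best = max(candidates, key=lambda name: score(candidates[name]))
print(best, score(candidates[best]))
```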

Another difficult process will be explaining intuitive human concepts to a machine. The simplest ideas are often taken for granted, and consequently the hardest to define precisely. For example, what exactly does it mean to be human? The term may need pages of clarification to avoid scenarios where the intelligence no longer considers you human if, say, you’ve lost an arm. Even we don’t precisely agree on what a human actually is; just look at the nearest abortion debate. We will need to settle these arguments so that a machine doesn’t process the world in ways that may be harmful to us.
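To make the difficulty concrete, here is a deliberately naive sketch; the checklist attributes are invented purely for illustration and show exactly the failure mode described above.

```python
from dataclasses import dataclass

@dataclass
class Being:
    limbs: int
    can_speak: bool
    has_heartbeat: bool

def is_human_naive(being: Being) -> bool:
    # A checklist definition: it looks reasonable until the edge cases arrive.
    return being.limbs == 4 and being.can_speak and being.has_heartbeat

# A person who has lost an arm fails the check -- the scenario we want to avoid.
print(is_human_naive(Being(limbs=3, can_speak=True, has_heartbeat=True)))  # False
```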

Top-Down and Bottom-Up Theories of Morality

What I have attempted to create is a top-down theory of morality (other examples include Asimov’s Three Laws, Kant’s Categorical Imperative, and the Felicific Calculus). I believe a top-down theory of morality is a necessary guide for any bottom-up, experiential learning. As the intelligence faces each moral problem it needs guiding principles, or it may simply find the most efficient way of solving each problem, which may not be the most practical way.

I consulted a peer-reviewed paper by Wendell Wallach, Stan Franklin, and Colin Allen that discusses a model of cognition called LIDA; the authors likewise express an eventual need for a top-down theory:

“Each [top-down theory] is susceptible to some version of the frame problem — computational load due to the need for knowledge of human psychology, knowledge of the affects of actions in the world, and the difficulty in estimating the sufficiency of initial information.” (pg. 5)

“Eventually, there will be a need for hybrid systems that maintain the dynamic and flexible morality of bottom-up systems, which accommodate diverse inputs, while subjecting the evaluation of choices and actions to top-down principles that represent ideals we strive to meet.” (pg. 6)
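One way to picture such a hybrid system, with names and data formats invented for this sketch: a bottom-up component learns to score candidate actions, while explicit top-down rules act as a filter that removes unacceptable options before the learned score is consulted.

```python
def violates_top_down_rules(action: dict) -> bool:
    """Hypothetical check against explicit principles (e.g., rules 3 and 4 below)."""
    return action.get("harms_without_benefit", False)

def learned_score(action: dict) -> float:
    """Stand-in for a bottom-up, experience-trained value estimate."""
    return action.get("estimated_value", 0.0)

def choose(actions):
    permitted = [a for a in actions if not violates_top_down_rules(a)]
    if not permitted:
        return None  # no acceptable option; defer to a human
    return max(permitted, key=learned_score)

print(choose([
    {"name": "efficient_shortcut", "estimated_value": 9.0, "harms_without_benefit": True},
    {"name": "cooperate", "estimated_value": 7.5},
]))
```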

The 14 General Human Moral Rules

Each of these rules will require pages of clarification, as will any top-down theory of morality. We will also need extensively thought-out programs that can calculate terms such as "interests", "long run", and "short run", which appear quite often in these rules; a toy illustration of how one rule might be encoded appears after the list. I have attached six corresponding articles to certain rules to help explain the rationale behind them.

  1. All humans desire the freedom to pursue their self-interests. (article 2)
  2. All humans share the common long-term interest of social cooperation, because it is in the interest of any human that other humans allow or aid the pursuit of its own interests. (article 1)
  3. A good action is an action that contributes to the realization of an individual’s self-interests and/or the realization of other individuals’ self-interests. (article 2)
  4. A bad action is an action that contributes to no individual’s self-interests, or contributes to an individual’s self-interests at the expense of other individuals’ self-interests. (article 2)
  5. To maximize the freedom of all humans, it is often necessary for humans to set aside varying, short-term interests to achieve a higher-valued, long-term interest of social cooperation. (article 2)
  6. In each moral problem, humans should choose the action or rule of action that allows for the realization of the greatest possible number of interests for the individual human in the long run. (article 1)
  7. If there is conflict, humans should choose actions or rules of action that will allow the greatest possible number of interests for the greatest possible number of individual humans. (article 1)
  8. Actions or rules of action that can satisfy all humans in the community or all of mankind are significantly more valuable than actions or rules of action that simply aim to satisfy the majority.
  9. No matter how unequal the respective members of society are in wealth, talents, or abilities, it is in the greatest interest of each human that the incentives of all humans to contribute to society are maximized. (article 6)
  10. Humans value the creation or preservation of individual interests to a much higher degree than the elimination of interests. (For humans, suffering is disproportionately worse than happiness is beneficial.)
  11. Although it may be good to contribute to a good cause, it is possible for mandatory goodness to decrease social cooperation. (article 4, article 5)
  12. The practical purpose of human law is to create general rules that maximize freedom in the long run. A just law is a general rule that limits the incentives to act in an impractical way, in the pursuit of creating more freedom for all individuals in the long run. (article 6)
  13. Established human law has errors but it is more practical to peacefully abide by unjust law than to risk disrupting social cooperation by aggressively acting to change unjust law. (article 3)
  14. Under human law, every human counts for one, and no human counts for more than one. Regardless of circumstance, no human’s interests are considered more important than the interests of another under the law. (article 6)
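As a toy illustration of the kind of encoding these rules would eventually need, here is a sketch of rule 7. The data format and the tie-breaking order of the two criteria are assumptions of mine; the rule itself leaves them open.

```python
from typing import NamedTuple

class RuleOfAction(NamedTuple):
    name: str
    interests_satisfied: int   # total interests realized in the long run
    individuals_served: int    # distinct humans whose interests are included

def prefer_by_rule_7(a: RuleOfAction, b: RuleOfAction) -> RuleOfAction:
    """Rule 7: prefer the option serving the greatest number of interests for the
    greatest number of individuals. Ranking individuals before interests is an
    assumption made for this sketch."""
    return max((a, b), key=lambda r: (r.individuals_served, r.interests_satisfied))

print(prefer_by_rule_7(
    RuleOfAction("favor_a_few", interests_satisfied=12, individuals_served=2),
    RuleOfAction("serve_many", interests_satisfied=10, individuals_served=8),
).name)  # serve_many
```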

Please help by critiquing these rules or expanding this list. Improvements are welcome.

