What is Machine Ethics?

What does it mean for a machine to have ethics? Is that even possible? Why would we care if a machine is ethical or not, as long as it does what we tell it to do?

There are a variety of positions on each of these questions. The two extreme positions on whether a machine can be ethical are as follows:

  • Machines can never have ethics, since machines are bereft of emotions and emotions are integral to ethics.
  • Humans are not special. It is entirely possible that some machines could have a sense of ethics.

The first position essentially contends that there is something special about human beings which gives us a sense of ethics. According to this position, human beings have free will, intentionality and consciousness, all of which are prerequisites for ethics. If you agree with this position, then there is no more debate to be had: since machines, even autonomous ones, do not have intentionality or consciousness, they can never be said to have any sense of ethics. The trouble with this position is that these foundational terms are not well-defined. What exactly is consciousness? Even those in the I-know-it-when-I-see-it camp can only possibly know that they themselves are conscious, but can never really tell if another human being is conscious.

The second position contends that it is irrelevant whether a machine is human-like or not; it may still have the ability to be ethical and can be evaluated on that basis. James H. Moor, in The Nature, Importance, and Difficulty of Machine Ethics1, contends that there are four different degrees of ethical evaluation for autonomous agents:

  1. Ethical Impact Agents – Autonomous machines, robots or software that have an ethical impact on the world they interact with. He uses the example of using robot jockeys for camel racing in Qatar. Having robot jockeys decreases the demand for children trafficked from poorer countries to serve as camel jockeys, and thus has an ethical impact.
  2. Implicit Ethical Agents – Software or hardware that is created to follow some ethical principles, but does not actually reason about ethics itself. Thus, any banking software that ensures that banking rules are followed is an implicit ethical agent. It is implicitly ethical because the creators of the software programmed it to behave in an ethical manner, and not cheat people out of money. However, the software itself cannot reason about its behaviour.
  3. Explicit Ethical Agents – Software or hardware that have some ethical principles programmed into them, and that use these principles to choose autonomously among several alternative actions. For example, Michael Anderson, Susan Anderson and Chris Armen discuss two ethical theories that they programmed into a robot, though the robot itself is still far from what could be called an ethical agent. They contend that regardless of the efficacy (completeness, soundness, robustness) of the theory, it could be better to have ethical principles programmed into robots than not, because machines can at least be consistent — a quality often lacking in human beings.
  4. Full Ethical Agents – Adult human beings are usually considered full ethical agents, because not only do they (presumably) act ethically, but they can also provide a justification for why they acted the way they did. This is usually the most contentious category in debates about machines with ethics: some people believe that only full ethical agents can be called ethical agents at all.
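To make the third category more concrete, here is a minimal sketch of what an explicit ethical agent's decision procedure might look like: the agent scores each candidate action against a small set of prima facie duties and picks the best-scoring one. The duty names, weights, and actions below are illustrative inventions, not the actual system described by Anderson, Anderson and Armen.

```python
# Hypothetical sketch of an "explicit ethical agent" in Moor's sense.
# The agent weighs each candidate action against a set of prima facie
# duties and autonomously selects the highest-scoring action.
# All duty names, weights, and actions here are invented for illustration.

DUTIES = {"nonmaleficence": 2.0, "beneficence": 1.0, "autonomy": 1.5}

def choose_action(actions):
    """Return the action with the highest weighted duty score."""
    def score(action):
        # Each action rates how well it satisfies (+) or violates (-) each duty.
        return sum(DUTIES[duty] * value for duty, value in action["duties"].items())
    return max(actions, key=score)

actions = [
    {"name": "remind patient again",
     "duties": {"nonmaleficence": 1, "beneficence": 1, "autonomy": -1}},
    {"name": "accept patient's refusal",
     "duties": {"nonmaleficence": -1, "beneficence": 0, "autonomy": 2}},
]

print(choose_action(actions)["name"])  # → remind patient again
```

The point is not that this toy scoring rule is a defensible ethical theory, but that the principles are explicit and the machine applies them consistently — which is precisely the advantage the Andersons claim over human decision-makers.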

Whether or not we consider machines to have consciousness, free will and intentionality, we will have to concede that as machines become more and more autonomous, they need some mechanism to ensure that they behave ethically towards human beings. Nick Bostrom's now-famous thought experiment describes a super-intelligence that consumes all of the planet’s resources trying to make more and more paperclips. Given that we may not be able to thwart such a super-intelligence, it might be only ethical considerations that prevent it from drowning the planet in paperclips.

 

1.
Moor JH. The Nature, Importance, and Difficulty of Machine Ethics. IEEE Intelligent Systems. 2006;21(4):18-21. doi:10.1109/mis.2006.80
