Can Machines be Moral Agents?

The question of whether a machine can ever be a moral agent, that is, an agent that can be held morally responsible, depends on which side of the philosophical debate one falls on. Normally, we consider human beings to be moral agents, but technology created by humans to be morally neutral. A knife is not a moral entity: no one expects to reprimand the knife if it cuts one’s hand, or to praise it if it slices cleanly through vegetables. The human wielding the knife is usually responsible for the actions done using it. Thus, the knife’s effect on the world, whether positive or negative, is not the responsibility of the knife, even though it is clearly a consequence of the knife. Does this mean that only human beings are morally responsible for any consequence of technological objects in the environment? Are all artificial artifacts exempt from morality? Clearly, this is not entirely black-and-white. For instance, infants and children are usually not expected to be morally responsible for their actions, while artificial persons (such as corporations) are. So naturalness or artificiality is not the sole determining criterion.

Regardless of which ethical theory one subscribes to, most philosophers seem to agree that three properties must be examined when determining whether an agent is morally responsible:

  1. Autonomy – This is the least contentious of the three. If an act is forced, through design, coercion, or any other means, then the agent cannot be held morally responsible for it. Autonomy is also sometimes characterized as free will: the agent must be able to choose among a multiplicity of actions (even “not acting” is an action) in a given situation, based on some internal states.
  2. Intentionality – The agent must intend to perform the action. Sneezes and reflex actions are therefore not moral actions, even if they affect the environment positively or negatively. This is the most difficult property to demonstrate.
  3. Responsibility – Is the agent in a position of responsibility, i.e., can the agent be held responsible for the action? (This is why small children are not considered full moral agents: they are not thought of as being responsible for the consequences of their actions.)

Deborah Johnson considers other conditions [1] that can be used to decide whether an action is morally evaluable, but concedes that most of those conditions can be met by machines (or at least met as convincingly as by human beings), apart from intending to act.

Autonomy

Human beings like to think of themselves as autonomous, and of machines and robots as technological artifacts with no autonomy, because the latter are (ultimately) deterministic programs. Despite recent advances in Machine Learning and AI, where mechanisms can produce non-deterministic outputs, researchers such as Bringsjord contend that these are not truly autonomous actions, but rather actions determined by a random factor [2]. The problem with this line of reasoning is that it applies to human beings as well. We are all products of our environment, our culture, our education, and so on. So who is truly autonomous?

Intentionality

Intentionality is, to my mind, the most difficult aspect of the triad. In theory, for most machines/robots/artificial agents, we could have access to their internal states and thereby show, post hoc, that their actions were intentional. However, sufficiently complex internal states, combined with autonomy, would make showing this rigorously impossible. Again, this argument about non-rigour holds for human beings as well. According to Sullins, this rigour is not really necessary [3]: when behaviour is complex enough that one is forced to explain it in terms of what looks like predisposition or intention, that is enough to call it intentionality. Deborah Johnson, on the other hand, contends that there is very definite intentionality at work: the intentionality of designers and the intentionality of users [1]. She views the robot/autonomous agent as “poised to behave” in the way that its designers intended, but the “intentionality of computer systems is inert or latent without the intentionality of the user”. So, though intentionality on the part of the agent is difficult to prove, many researchers are prepared to concede that some sort of intentionality could exist.

Responsibility

If a machine’s behaviour can only reasonably be described by assuming it has a sense of responsibility, i.e., the behaviour only makes sense if the machine (in the absence of any known malfunction) had such a sense, then we must ascribe responsibility to the machine [3]. This effectively means that:

  • if the machine knows that its actions will change the environment, and then autonomously decides which action to take, then it must ‘want’ that change;
  • if it ‘wants’ this change, and takes action to effect that change, then it is effectively assuming responsibility for bringing about that change.

There are two other interesting positions on whether an agent can be a moral agent. Nadeau contends that only a fully reasoned-out action can be a free action. Since human beings rarely, if ever, act purely on the basis of reasoning, and free will is a necessity for moral agency, human beings are not moral agents! In fact, only a robot/agent can be completely logical and therefore completely free. Hence, if we can build such a machine, it will be the first truly moral agent! [4]

The other interesting position comes from Luciano Floridi [5]. He argues that a theory-of-mind-based morality is unnecessary, and that morality can be discussed without recourse to anthropomorphically charged and philosophically contested concepts. According to Floridi, the entire notion of agenthood depends on the level of abstraction from which we view a system. Even human beings would not be considered agents if we viewed them as collections of molecules subject to biochemical and physical forces. The biochemical perspective is technically correct, but unsuitable for a discussion of society, laws or ethics; to discuss concepts relating to agenthood, we typically use a different level of abstraction. Hence, it is enough for the machine to be viewed at the same level of abstraction as a human when deciding whether the machine is an agent or not. He argues that the right level of abstraction rests on three criteria: (a) interactivity; (b) autonomy; and (c) adaptability. He then discusses several different machines that could potentially satisfy some or all of these criteria, and finally posits that any machine satisfying all three has as much claim to agenthood as a human being.

Between the no-agenthood position and the full-agenthood position, there are other positions to take. Johnson, for instance, argues that while machines can never be full moral agents, they are quite definitely moral entities [1].

1. Johnson DG. Computer systems: Moral entities but not moral agents. Ethics and Information Technology. 2006;8(4):195-204. doi:10.1007/s10676-006-9111-5
2. Bringsjord S. Ethical robots: the future can heed us. AI & Society. 2007;22(4):539-550. doi:10.1007/s00146-007-0090-9
3. Sullins JP. When Is a Robot a Moral Agent? In: Anderson M, Anderson SL, eds. Machine Ethics. Cambridge University Press; 2011:151-161. doi:10.1017/cbo9780511978036.013
4. Nadeau JE. Only Androids Can Be Ethical. In: Thinking about Android Epistemology. 2006:241-248.
5. Floridi L. On the Morality of Artificial Agents. In: Anderson M, Anderson SL, eds. Machine Ethics. Cambridge University Press; 2011:184-212. doi:10.1017/cbo9780511978036.016
