If we wanted to implement ethics in a robot, how would we do it? What ethics should we implement? Are Asimov’s three laws enough?
Although there seems to be philosophical consensus that Asimov’s Three Laws of Robotics are nowhere near enough for a satisfactory ethical robot, there are still attempts at creating robots that implement some version of the Laws. Perhaps this tells us something about ourselves as human beings: even for researchers, fiction exerts a power of imagination and framing that is difficult to shake off.
How do we convince others (or even ourselves) that we’ve created an ethical machine? Is it better to create machines that have some understanding of the underlying philosophical principles, and can therefore reason about why they should or should not perform an action? Or should we insist on machines that will never misbehave? The answers to these questions drive how we implement ethics. Regardless of the actual ethical principle involved, our preference for reasoning versus guaranteed safety will influence how ethics are actually embedded in the machine.
Logic and Model Checking
An interesting approach to implementing ethical principles in robots is the HERA approach[1]. HERA stands for Hybrid Ethical Reasoning Agents and assumes that there is no single right ethical theory to implement. Instead, it implements multiple moral theories, each modelled as a logical formula. The formulae are then evaluated for their truth, given the consequences that would arise from the actions the robot could potentially take. If a particular formula resolves to true for a certain action, the robot can conclude that the action is permitted by the ethical principle the formula encodes. This is interesting because, rather than being tied to a single ethical stance, the robot can evaluate the same action under multiple ethical principles and allow humans to (potentially) pick which principle to prioritize. Actions and consequences are modelled as directed acyclic graphs in a causal agency model, which is then checked using a model checker.
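To make the idea concrete, here is a minimal sketch of the multi-principle evaluation step. It is not the HERA implementation itself and is far simpler than its causal agency models and model checking: each principle is reduced to a Boolean formula over a flat set of predicted consequences, and the same candidate action is checked against every principle. The principle definitions, consequence labels, and example actions are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

# Predicted consequences of one candidate action: which facts would become
# true if the robot performed it. This stands in for HERA's causal agency
# model, which is richer (a DAG of actions, conditions, and consequences).
@dataclass
class ActionModel:
    name: str
    consequences: Dict[str, bool] = field(default_factory=dict)

# Each ethical principle is a formula over the consequence model that
# resolves to True (permissible) or False (impermissible).
Principle = Callable[[ActionModel], bool]

def deontological(action: ActionModel) -> bool:
    # Forbid any action that brings about harm, regardless of benefit.
    return not action.consequences.get("harm_to_human", False)

def utilitarian(action: ActionModel) -> bool:
    # Permit the action if its (toy) net utility is non-negative.
    benefit = action.consequences.get("human_benefit", False)
    harm = action.consequences.get("harm_to_human", False)
    return int(benefit) - int(harm) >= 0

def evaluate(action: ActionModel, principles: Dict[str, Principle]) -> Dict[str, bool]:
    """Check the same action against every implemented moral theory."""
    return {name: formula(action) for name, formula in principles.items()}

if __name__ == "__main__":
    warn_loudly = ActionModel("warn_loudly",
                              {"human_benefit": True, "harm_to_human": False})
    push_aside = ActionModel("push_aside",
                             {"human_benefit": True, "harm_to_human": True})
    principles = {"deontology": deontological, "utilitarianism": utilitarian}
    for act in (warn_loudly, push_aside):
        print(act.name, evaluate(act, principles))
```

The point of the structure is the one made above: the verdicts can disagree across principles (here, pushing someone aside is ruled out deontologically but allowed on the toy utilitarian count), and a human can decide which principle the robot should prioritize.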
Cognition
Instead of using logic or symbolic reasoning, Vanderelst and Winfield propose a cognition-based approach to implementing ethics[2]. This approach draws on evidence that simulation is a key ingredient of ‘thinking’[3]. That is, functions like behaviour, perception, and anticipation appear to be made possible in human beings by structures in the brain that can simulate interaction with the outside world. In effect, simulation theory holds that thinking is the act of simulating interactions with the external environment without performing any overt action. Vanderelst and Winfield propose that this can be achieved in robots through a sophisticated simulation module. Most robots follow a three-layered control architecture[4], with each layer acting at a different time-scale and level of abstraction. This is extended with one more layer, the ethical layer, which consists of two modules: a simulation module and an evaluation module. For each possible behaviour the robot could perform, the simulation module sends a prediction of the robot’s and the external environment’s states to the evaluation module. The evaluation module then assigns a value to the combination of the robot’s predicted state and the world’s predicted state, and based on this value the robot chooses whether or not to execute that behaviour.
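A rough sketch of how such an ethical layer could sit on top of the existing controller is below, assuming a toy lookup in place of the simulation module and a hand-written scoring function in place of the evaluation module. Both are placeholders for illustration, not Vanderelst and Winfield’s actual implementation, and the behaviour names and state variables are made up.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

# Predicted joint state of the robot and the world after executing a behaviour.
@dataclass
class PredictedOutcome:
    robot_state: Dict[str, float]
    world_state: Dict[str, float]

def simulate(behaviour: str) -> PredictedOutcome:
    """Simulation module stand-in: predict the outcome of a candidate behaviour.
    A real robot would run an internal physics/interaction simulation here."""
    toy_model = {
        "proceed":   PredictedOutcome({"at_goal": 1.0}, {"human_in_danger": 1.0}),
        "intervene": PredictedOutcome({"at_goal": 0.0}, {"human_in_danger": 0.0}),
        "wait":      PredictedOutcome({"at_goal": 0.0}, {"human_in_danger": 1.0}),
    }
    return toy_model[behaviour]

def evaluate(outcome: PredictedOutcome) -> float:
    """Evaluation module stand-in: assign a single value to the predicted
    combination of robot state and world state (human safety dominates)."""
    safety = -10.0 * outcome.world_state.get("human_in_danger", 0.0)
    task = 1.0 * outcome.robot_state.get("at_goal", 0.0)
    return safety + task

def ethical_layer(candidate_behaviours: List[str]) -> str:
    """Simulate each candidate behaviour, score the prediction, pick the best."""
    scored: List[Tuple[float, str]] = [
        (evaluate(simulate(b)), b) for b in candidate_behaviours
    ]
    return max(scored)[1]

if __name__ == "__main__":
    print(ethical_layer(["proceed", "intervene", "wait"]))  # -> "intervene"
```

The design choice worth noticing is that the ethical layer never needs to represent an ethical principle symbolically: it only needs a way to predict outcomes and a way to value them, and the “ethics” lives entirely in that valuation.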