IEEE P7008 Standard – Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems

I have recently started contributing to the IEEE P7008 Standards committee for Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems.

A nudge is an overt or covert suggestion or manipulation designed to influence the emotions or behaviour of a user. Depending on your perspective, this might seem ominous or useful. All human beings are susceptible to behavioural nudges, i.e., manipulation of content, colours, choices, even the order of choices, which encourages or discourages a particular behaviour. Starting from behavioural economics, where Tversky and Kahneman showed that human beings (quite consistently) behave in irrational ways [1], there has been considerable research into explaining how individuals evaluate choices. This is in contrast to standard economic theory, which used to assume that individuals were completely rational, always self-interested agents. Perhaps the most popular account of such behaviour is Dan Ariely's book, Predictably Irrational. Governments, organizations and companies have been trying to use these insights to influence how populations, societies and consumers behave. This has given rise to the study of the nudge: subtly and adaptively influencing someone's choice without limiting their actual choices.

Combined with Big Data and Machine Learning, nudges can be deployed by systems to influence user behaviour. A common example is the pre-checked box. If the text next to a checkbox is boring, unclear, or written in legalese, a user is tempted to leave it as they found it. That is, if a checkbox asks for consent to bombard you with marketing material, you are tempted to leave it checked if you don't really understand what is being asked, or if the language insinuates a loss should you uncheck it. This is a nudge because you are quite capable of un-checking the box, but you have chosen not to. Given the power of Big Data, an algorithm could identify which kinds of nudges work on you personally, and then ensure that only those nudges are shown to you. These sorts of nudges can clearly be used for all kinds of purposes: ensuring that users make good choices (say, encouraging them to save for retirement), or steering choices where the organization benefits (say, accepting tracking cookies).
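To make the idea of algorithmically personalised nudges concrete, here is a minimal sketch in Python. All names (`NudgeSelector`, the variant labels) are hypothetical, not part of any real system or of the P7008 standard: the point is only that a few lines of greedy bookkeeping are enough to learn, per user, which nudge variant has historically worked best and keep serving it.

```python
from collections import defaultdict

class NudgeSelector:
    """Hypothetical personalised nudge selector: tracks which nudge
    variants a given user has accepted, and greedily serves the
    variant with the highest observed acceptance rate."""

    def __init__(self, variants):
        self.variants = list(variants)
        # per-user counts of times each variant was shown / accepted
        self.shown = defaultdict(lambda: defaultdict(int))
        self.accepted = defaultdict(lambda: defaultdict(int))

    def record(self, user, variant, accepted):
        """Log one exposure and whether the user went along with it."""
        self.shown[user][variant] += 1
        if accepted:
            self.accepted[user][variant] += 1

    def pick(self, user):
        """Return the variant with the best acceptance rate for this user.
        Unseen variants get a neutral prior of 0.5 so they still get tried."""
        def rate(variant):
            n = self.shown[user][variant]
            return self.accepted[user][variant] / n if n else 0.5
        return max(self.variants, key=rate)

# Example: after one acceptance of social proof and one rejection of
# loss framing, "alice" will keep being shown social-proof nudges.
selector = NudgeSelector(["loss_framing", "social_proof", "default_opt_in"])
selector.record("alice", "social_proof", accepted=True)
selector.record("alice", "loss_framing", accepted=False)
print(selector.pick("alice"))  # → social_proof
```

Even this toy version illustrates the ethical concern: nothing in the mechanism distinguishes a retirement-savings nudge from a tracking-cookie nudge; that judgement has to come from the design process around it.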

The IEEE committee’s focus is on how human beings (or design teams and organizations) should ensure that their nudges are ethical. There is clearly no standard system of ethics that everyone subscribes to, so the committee is focussed on specifying a design process that will at least ensure that every company or organization creating intelligent systems that use nudges explicitly confronts the ethical choices it is making.

[1] Tversky A, Kahneman D. Rational Choice and the Framing of Decisions. In: Multiple Criteria Decision Making and Risk Analysis Using Microcomputers. Springer Berlin Heidelberg; 1989:81-126. doi:10.1007/978-3-642-74919-3_4
