Super cool physics and AI (and Ethics)

Physicists are (sometimes) some of my favourite people. They do really cool science and ask questions about the fundamental nature of our reality. It is very gratifying to know that some of them have also started to look back at ourselves: how we use information, and what information can tell us about the nature of our social reality. Some physicists have begun taking a deeper look into the algorithms at the heart of physics computing, and are wondering what impacts they (the algorithms) might have on us.

Sophia Chen, writing for WIRED magazine, describes how the field of AI is attracting physicists to peer inside the algorithms. She also mentions me, and some of the questions that I think are really relevant to the field.

ELSI Panel for OpenAAL

Very pleased to be a part of the ELSI panel on the OpenAAL project being spearheaded by Heriot-Watt University. Ambient assisted living, especially in these locked-down times, gives us the ability to provide care to vulnerable segments of the population. However, with the introduction of robots into our personal space comes the challenge of addressing their impact on patients, their families, and other care-givers. People respond differently when interacting with machines than they do with humans, and it is our responsibility to ensure that the outcome of introducing robots takes into account the multiple stakeholders, their views, and the ethical implications of the interactions.

The Second Workshop on Implementing Machine Ethics

The Machine Ethics Research Group recently organized the Second Workshop on Implementing Machine Ethics. Due to the COVID-19 lockdown, it was held online, with participants and presenters interacting via Zoom and Sli.do. We had some really thought-provoking talks from a variety of domains (literature, law, philosophy, computer science), and questions from a very engaged audience. All of the presented slides can be found on the website. The abstracts submitted by the presenters have been uploaded to Zenodo, and can be accessed (and cited) via the DOI: DOI

We are hoping to make it a continuing event and have applied to AAAI for acceptance as a regular symposium. There is a mailing list for those interested in continuing the discussions or in notices of collaboration opportunities. Please click here to sign up. The mailing list is hosted by HEANet, Ireland. You will see a “Subscribe or Unsubscribe” link in the menu on the right-hand side, in a box called “Options”. Please do consider joining.

IEEE P7008 Standard – Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems

I have recently started contributing to the IEEE P7008 Standards committee for Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems.

A nudge is an overt or covert suggestion or manipulation designed to influence the emotions or behaviour of a user. Depending on your perspective, this might seem ominous or useful. All human beings are susceptible to behavioural nudges, i.e., manipulation of content, colours, choices, even the order of choices, which encourage or discourage a particular behaviour. Starting from behavioural economics, where Tversky and Kahneman showed that human beings (quite consistently) behave in irrational ways [1], there has been considerable research into explaining how individuals evaluate choices. This is in contrast to standard economic theory, which used to assume that individuals were completely rational and always self-interested agents. Perhaps the most popular explanation of such behaviour is Dan Ariely’s book – Predictably Irrational. Governments, organizations and companies have been trying to use these insights to influence how populations, societies and consumers behave. This has given rise to the study of the nudge – subtly and adaptively influencing someone’s choice without limiting their actual choices.

Combined with Big Data and Machine Learning, nudges can be used by systems to influence user behaviour. A common example is the pre-checked box. If the text next to a checkbox is boring, not clearly understood, or legalese, a user is tempted to leave it as they found it. That is, if a checkbox asks for consent to bombard you with marketing material, you are tempted to leave it checked if you don’t really understand what is being asked, or if the language insinuates a loss should you uncheck it. This is a nudge because you are quite capable of un-checking the box, but you have chosen not to. Given the power of Big Data, an algorithm could identify which kinds of nudges work on you personally, and then ensure that only those nudges are shown to you. These sorts of nudges can clearly be used for all kinds of purposes: for example, ensuring that users make good choices (say, encouraging them to save for retirement) or making choices where the organization benefits (say, accepting tracking cookies).
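To make the personalization idea concrete, here is a minimal, purely illustrative sketch of how a system might learn which nudge variant a user responds to. It uses a simple epsilon-greedy bandit; the class, the variant names, and the acceptance rates are all hypothetical and are not drawn from any real system or the IEEE standard.

```python
import random

# Illustrative sketch (hypothetical): an epsilon-greedy bandit that learns
# which nudge variant (e.g. the wording next to a pre-checked consent box)
# a given user accepts most often, then shows mostly that variant.
class NudgeSelector:
    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon                      # exploration rate
        self.shown = {v: 0 for v in variants}       # times each variant was shown
        self.accepted = {v: 0 for v in variants}    # times the user went along with it

    def choose(self):
        # Occasionally explore a random variant; otherwise exploit the one
        # with the best observed acceptance rate so far.
        if random.random() < self.epsilon or not any(self.shown.values()):
            return random.choice(list(self.shown))
        return max(
            self.shown,
            key=lambda v: self.accepted[v] / self.shown[v] if self.shown[v] else 0.0,
        )

    def record(self, variant, accepted):
        # Update counts after observing whether the user accepted the nudge.
        self.shown[variant] += 1
        if accepted:
            self.accepted[variant] += 1
```

The ethically loaded step is `choose()`: once the system converges on the variant you resist least, you are effectively being shown only the nudges that work on you, which is exactly the behaviour the standard asks designers to confront.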

The IEEE committee’s focus is on how human beings (or design teams and organizations) should ensure that their nudges are ethical. There is clearly no standard system of ethics that everyone subscribes to, so the committee is focussed on specifying a design process which will at least ensure that every company or organization creating intelligent systems that use nudges will explicitly confront the ethical choices it is making.

[1] Tversky A, Kahneman D. Rational Choice and the Framing of Decisions. In: Multiple Criteria Decision Making and Risk Analysis Using Microcomputers. Springer Berlin Heidelberg; 1989:81-126. doi:10.1007/978-3-642-74919-3_4

New PhD student starting work on Machine Ethics

Very excited to welcome Rajitha Ramanayake as a PhD student to work on Machine Ethics. He plans to work on techniques to reliably insert a notion of ethics into autonomous agents. Here’s hoping that it’s a great research journey, and that he enjoys the ride.

His abstract, from his PhD application, is as follows:

Artificial Intelligence (AI) techniques such as reinforcement learning and deep learning have enjoyed some success in accomplishing tasks at human-level capability in the recent past. It is expected that several human jobs will be replaced by intelligent and autonomous systems in the near future. When domains that have direct links/involvement with the community, such as health-care, social data mining and advertising, personal assistants and autonomous vehicles, use artificial intelligence, the importance of values and the morality of an AI system comes into the picture. These systems could easily encounter situations that contain a moral dilemma which will affect a community or an individual’s life. Therefore, in order to decide which actions must be taken in such situations, we need to introduce a sense of values into these AI agents. One way to infer a sense of values onto intelligent systems is by embedding ethics into them. As a result, there are many attempts being made to provide AI systems with a sense of ethics. However, there is no general consensus on what type of ethics (i.e. utilitarian ethics, deontological ethics, virtue ethics, the principle of double effect) can/should be embedded into an AI or what approach (i.e. evolutionary computing, multi-agent systems, machine learning, deep learning, reinforcement learning) would be most suitable for implementing these types of ethics. The need to find philosophical theories and technological methods that can be used to build an artificial agent – and which can be entrusted to act as an ethical agent – is of vital importance in the future of intelligent agents. My research will concern itself with techniques in computer science that can be used to implement ethical behaviour.

Let’s come back to this in four years’ time to see where he’s got to 😉

Invited Talk at CERN, Geneva

I was recently invited by Dr. Alberto Di Meglio, head of OpenLab @ CERN, to give a talk on Machine Ethics. Super-excited to interact with computer scientists, physicists and folks from multiple disciplines, and to gather their views on how AI impacts society and what computer science should be doing about it. Refreshing to know that even high-energy physicists ‘get it’ with regard to our (scientists’) obligations to engage deeply with society. The video of the talk can be found here.

New mailing list on machine ethics created

After the first inter-disciplinary workshop on Implementing Machine Ethics ended, the participants agreed that truly inter-disciplinary events are rare, and that they had enjoyed talking through, and thinking about, the multiple perspectives on ethics inside machines.

To facilitate more discussion on the topic, a mailing list was set up on HEANet’s listserv service. It is meant for everyone who is interested in the inter-disciplinary aspects of how, why, and what ethics should be implemented in machines. Clicking on the link above will take you to the HEANet website. You will see subscribe and unsubscribe buttons in the menu on the right-hand side, in a box called “Options”. Please do consider joining if you are interested in society, AI, philosophy, technology, law, and how they intersect.

Funded PhD position available

Announcing an opportunity for inter-disciplinary research into implementing ethics in computers! Take a look at the new PhD position available in the School of Computer Science at UCD. If you’re interested in investigating what it would mean for a machine (any autonomous agent, really) to be ethical, and how one would implement and verify such an agent, join the Machine Ethics Research Group.

We’re also organizing a workshop around the same theme. Check out the First UCD Inter-disciplinary Workshop on Implementing Machine Ethics.

First UCD Inter-Disciplinary Workshop on Implementing Machine Ethics

Really excited to be involved in organizing UCD’s first international inter-disciplinary workshop on Implementing Machine Ethics. If you are interested in AI, ethics, law, society or even in how autonomous machines might affect our future, do check out the workshop. We have an exciting lineup of keynote speakers:

  • Prof. Alan Winfield: Prof. Winfield is a roboticist with the Bristol Robotics Lab. His research interests include robot ethics, cognitive science, artificial intelligence and open science.
  • Dr. Joanna Goodey: Dr. Goodey is the head of the Freedoms and Justice Department in the European Union Agency for Fundamental Rights. Her research interests span criminology, fundamental rights, and human geography.

The relevant dates are as below:

Deadline for submission of abstract: 12-May-2019
Notification of acceptance: 3-June-2019
Workshop Dates: 2-3-July-2019

Location: University College Dublin, Dublin

Here’s the link to the Call for Abstracts.