Call for Papers – Implementing Ethical Reasoning in Autonomous Agents

EUMAS’24 will feature a special track on Agents and Ethics. This track is focused on the implementation of ethical reasoning mechanisms in a variety of autonomous decision-making settings. Depending on the context of the agent (software or hardware), the kinds of values (or value conflicts) that are encountered can vary widely. The multiplicity and agency of the stakeholders involved (from highly trained human-machine teams to a cohort of elderly people or children) also deeply affect the possible value considerations. This track aims to provide a venue for discussing problems, possible solution concepts, best practices, and benchmarks related to machine ethics.

We welcome original, unpublished papers, including improved versions of extended abstracts or rejected papers from AAMAS, AAAI and IJCAI 2023. Submissions should describe work that has not been previously published or accepted for publication, and that is not currently under review by another conference or journal. The proceedings of EUMAS 2024 will be published as part of Springer's Lecture Notes in Computer Science (LNCS) series.

 

Submissions are solicited on, but not limited to, the following themes / questions:

 

  • What kinds of values and value-conflicts are (in)expressible in algorithmic form?
  • Can ethical behaviour be guaranteed or verified in computational media?
  • Implementation of ethical reasoning mechanisms founded on non-Western ethical traditions
  • Can artificial moral decision-making be decoupled from mere implementation of normative ethical theories? 
  • Privacy and Trust relationships between humans and machines in the presence of hybrid actions
  • Implementation of ethical reasoning in logic-based methods
  • Representation of ethical principles in AI agents
  • Machine-learning based approaches to ethical reasoning
  • Development of machine ethics in cognitive robot programs
  • Robot learning for ethical reasoning
  • Integration of symbolic and neural information systems for ethical reasoning
  • Development of formal frameworks for ethical decision-making
  • Techniques for explaining the ethical reasoning of AI agents
  • Frameworks for ethical collaboration between humans and AI agents
  • Building trust in intelligent systems through ethical design and interaction

 

Important Dates (all deadlines AoE)

Submission Deadline: 15 May 2024
Author Notification: 30 June 2024
Camera-ready Deadline: 15 July 2024
Conference Dates: 26-28 August 2024

 

Logistics

Paper size and format: All submissions will be peer-reviewed in single-blind fashion. Submissions should be at most 15 pages long, with any number of additional pages for references and, if needed, a clearly marked appendix, and must be formatted according to Springer’s LNCS format.

 

Submission Link: https://easychair.org/conferences/?conf=eumas2024

 

Organizing Committee

Vivek Nallur, University College Dublin, Ireland

Marija Slavkovik, University of Bergen, Norway

Sarah Moth-Lund Christensen, University of Leeds, UK

 

Programme Committee

Michael Anderson, University of Hartford

Selmer Bringsjord, Rensselaer Polytechnic Institute (RPI)

Louise Dennis, University of Manchester

Maurice Pagnucco, University of New South Wales

Yang Song, University of New South Wales

Ken Satoh, National Institute of Informatics Japan

Leon van der Torre, University of Luxembourg

Jean-Gabriel Ganascia, Sorbonne, France

John P. Sullins, Sonoma State University

Kevin Baum, Saarland University

Edmond Awad, University of Exeter

John Horty, University of Maryland

Pere Pardo, University of Luxembourg

Thomas Powers, University of Delaware

Juan Carlos Nieves, Umeå University

Ryan Tonkens, Lakehead University

Paul Formosa, Macquarie University, Australia

Special Track on Implementing Machine Ethics at EUMAS 2024

EUMAS 2024 is being held at the School of Computer Science, University College Dublin, in August 2024. The Call for Papers is out, and I’m super excited to announce that it features a special track on implementing Machine Ethics, led by yours truly, Marija Slavkovik (University of Bergen) and Sarah Moth-Lund Christensen (University of Leeds). Looking forward to all the exciting work being done in this area!

Submit your papers at: https://easychair.org/my/conference?conf=eumas2024

Paper submission deadline: 15 May 2024 (AoE)

The solicited themes / questions and the full Programme Committee are listed in the Call for Papers above.

Super cool physics and AI (and Ethics)

Physicists are (sometimes) some of my favourite people. They do really cool science, and ask questions about the fundamental nature of our reality. It is very gratifying to know that some of them have also started to look back at ourselves, how we use information, and what information can tell us about the nature of our social reality. Some physicists have started to take a deeper look into the algorithms used at the heart of physics computing, and to wonder what impact they (the algorithms) might have on us.

Sophia Chen, writing for WIRED magazine, describes how the field of AI is attracting physicists to peer inside the algorithms. She also mentions me, and some of the questions that I think are really relevant to the field.

ELSI Panel for OpenAAL

Very pleased to be a part of the ELSI panel on the OpenAAL project, spearheaded by Heriot-Watt University. Ambient assisted living, especially in these locked-down times, will help us provide care to vulnerable segments of the population. However, with the introduction of robots into our personal space comes the challenge of addressing their impact on patients, their families, and other care-givers. People respond differently when interacting with machines than they do with humans, and it is our responsibility to ensure that the outcome of introducing robots takes into account the multiple stakeholders, their views, and any ethical implications of their interactions.

The Second Workshop on Implementing Machine Ethics

The Machine Ethics Research Group recently organized the second workshop on Implementing Machine Ethics. Due to the COVID-19 lockdown, it was held in an online-only mode, with participants and presenters interacting via Zoom and Sli.do. We had some really thought-provoking talks from a variety of domains (literature, law, philosophy, computer science), and questions from a very engaged audience. All of the presented slides can be found on the website. The abstracts submitted by the presenters have been uploaded to Zenodo, and can be accessed (and cited) via their DOI.

We are hoping to make it a continuing event and have applied to AAAI for acceptance as a regular symposium. There is a mailing list for those interested in continuing the discussion or in notices of collaboration opportunities. Please click here to sign up. The mailing list is hosted by HEAnet, Ireland. You can see a “Subscribe or Unsubscribe” link in the menu on the right-hand side, in a box called “Options”. Please do consider joining.

IEEE P7008 Standard – Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems

I have recently started contributing to the IEEE P7008 Standards committee for Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems.

A nudge is an overt or covert suggestion or manipulation designed to influence the emotions or behaviour of a user. Depending on your perspective, this might seem ominous or useful. All human beings are susceptible to behavioural nudges, i.e., manipulations of content, colours, choices, even the order of choices, which encourage or discourage a particular behaviour. Starting from behavioural economics, where Tversky and Kahneman showed that human beings (quite consistently) behave in irrational ways [1], there has been considerable research into explaining how individuals evaluate choices. This is in contrast to standard economic theory, which used to assume that individuals were completely rational and always self-interested agents. Perhaps the most popular explanation of such behaviour is Dan Ariely’s book, Predictably Irrational. Governments, organizations and companies have been trying to use these insights to influence how populations, societies and consumers behave. This has given rise to the study of the nudge: subtly and adaptively influencing someone’s choice without limiting their actual choices.

Combined with Big Data and Machine Learning, nudges can be used by systems to influence user behaviour. A common example is the use of pre-checked boxes. If the text next to a checkbox is boring, not clearly understood, or legalese, a user is tempted to leave it as they found it. That is, if a checkbox is asking for consent to bombard you with marketing material, you are tempted to leave the checkbox checked if you don’t really understand what is being asked, or if the language insinuates a loss if you uncheck it. This is a nudge because you are quite capable of unchecking the box, but you have chosen not to. Given the power of Big Data, an algorithm could identify which kinds of nudges work on you personally, and then ensure that only those nudges are shown to you. These sorts of nudges can clearly be used for all kinds of purposes, for example, ensuring that users make good choices (say, encouraging them to save for retirement) or making choices where the organization benefits (say, accepting tracking cookies).
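To make the "personalized nudge" idea concrete, here is a minimal sketch of how a system might learn which nudge works best on a given user, framed as a simple epsilon-greedy bandit. The nudge names and the success signal are purely illustrative assumptions, not taken from any real system or from the P7008 draft:

```python
import random
from collections import defaultdict

# Hypothetical nudge inventory (illustrative names only).
NUDGES = ["pre_checked_box", "scarcity_banner", "social_proof_message"]

class NudgeSelector:
    """Epsilon-greedy selection: mostly show the nudge with the best
    observed success rate, occasionally explore the others."""

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = defaultdict(int)     # times each nudge was shown
        self.rewards = defaultdict(float)  # cumulative successes per nudge

    def choose(self):
        # Explore with probability epsilon (or before any data exists).
        if random.random() < self.epsilon or not self.counts:
            return random.choice(NUDGES)
        # Otherwise exploit: pick the nudge with the best average success.
        return max(
            NUDGES,
            key=lambda n: self.rewards[n] / self.counts[n] if self.counts[n] else 0.0,
        )

    def record(self, nudge, success):
        # success = 1.0 if the user behaved as nudged
        # (e.g. left the pre-checked box checked).
        self.counts[nudge] += 1
        self.rewards[nudge] += success
```

This is exactly the kind of loop the post describes: the more data the system gathers about you, the more reliably it serves only the nudges that work on you, which is why the committee's concern with the ethics of the design process matters.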

The IEEE committee’s focus is on how human beings (or design teams and organizations) should ensure that their nudges are ethical. There is clearly no standard system of ethics that everyone subscribes to, so the committee is focused on specifying a design process which will at least ensure that every company or organization creating intelligent systems that use nudges will explicitly confront the ethical choices it is making.

[1] Tversky A, Kahneman D. Rational Choice and the Framing of Decisions. In: Multiple Criteria Decision Making and Risk Analysis Using Microcomputers. Springer Berlin Heidelberg; 1989:81-126. doi:10.1007/978-3-642-74919-3_4

New PhD student starting work on Machine Ethics

Very excited to welcome Rajitha Ramanayake as a PhD student to work on Machine Ethics. He plans to work on techniques to reliably insert a notion of ethics into autonomous agents. Here’s hoping that it’s a great research journey, and that he enjoys the ride.

His abstract, from his PhD application, is as follows:

Artificial Intelligence (AI) techniques such as reinforcement learning and deep learning have enjoyed some success in accomplishing tasks at human-level capability in the recent past. It is expected that several human jobs will be replaced by intelligent and autonomous systems in the near future. When domains that directly involve the community, such as health-care, social data mining and advertising, personal assistants and autonomous vehicles, use artificial intelligence, the importance of the values and the morality of an AI system comes into the picture. These systems could easily encounter situations that contain a moral dilemma which will affect a community or an individual’s life. Therefore, in order to decide which actions must be taken in such situations, we need to introduce a sense of values into these AI agents. One way to instil a sense of values in intelligent systems is by embedding ethics into them. As a result, many attempts are being made to provide AI systems with a sense of ethics. However, there is no general consensus on what type of ethics (e.g. utilitarian ethics, deontological ethics, virtue ethics, the principle of double effect) can or should be embedded into an AI, or what approach (e.g. evolutionary computing, multi-agent systems, machine learning, deep learning, reinforcement learning) would be most suitable for implementing these types of ethics. The need to find philosophical theories and technological methods that can be used to build an artificial agent which can be entrusted to act as an ethical agent is of vital importance for the future of intelligent agents. My research will concern itself with techniques in computer science that can be used to implement ethical behaviour.
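One recurring implementation pattern in this literature, which the abstract gestures at, is a constraint-based "ethical governor": the agent proposes candidate actions, and a separate layer vetoes any that violate explicit rules. The following is a minimal sketch of that pattern only; the rule set and action names are invented for illustration and are not from Rajitha's (or anyone's) actual system:

```python
# Minimal sketch of a deontological-style hard filter over candidate actions.
# All actions and rules below are illustrative placeholders.

FORBIDDEN = {
    "deceive_user",
    "withhold_medication",
}

def permissible(action):
    """An action is permissible unless it is explicitly forbidden."""
    return action not in FORBIDDEN

def govern(candidate_actions):
    """Return only those candidate actions the governor permits."""
    return [a for a in candidate_actions if permissible(a)]
```

A utilitarian variant would instead score each candidate action and pick the one with the best expected outcome; the open question the abstract raises is precisely which of these (or which hybrid) should be embedded, and how.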

Let’s come back to this in four years’ time to see where he’s got 😉

Invited Talk at CERN, Geneva

I was recently invited by Dr. Alberto Di Meglio, head of OpenLab @ CERN, to give a talk on Machine Ethics. Super excited to interact with computer scientists, physicists and folks from multiple disciplines, and to gather their views on how AI impacts society and what computer science should be doing about it. Refreshing to know that even high-energy physicists ‘get it’ with regard to our (scientists’) obligations to engage deeply with society. The video of the talk can be found here.

New mailing list on machine ethics created

After the end of the first inter-disciplinary workshop on Implementing Machine Ethics, the participants agreed that it was rare for events to be truly inter-disciplinary, and that they had enjoyed talking through, and thinking about, the multiple perspectives on ethics inside machines.

To facilitate more discussion on the topic, a mailing list was set up on HEAnet’s listserv service. It is meant for everyone interested in the inter-disciplinary aspects of how, why, and what ethics should be implemented in machines. Clicking on the link above will take you to the HEAnet website. You can see a “Subscribe or Unsubscribe” link in the menu on the right-hand side, in a box called “Options”. Please do consider joining if you are interested in society, AI, philosophy, technology, law and how they intersect.