First UCD Inter-Disciplinary Workshop on Implementing Machine Ethics

Really excited to be involved in organizing UCD’s first international inter-disciplinary workshop on Implementing Machine Ethics. If you are interested in AI, ethics, law, society or even in how autonomous machines might affect our future, do check out the workshop. We have an exciting lineup of keynote speakers:

  • Prof. Alan Winfield: Prof. Winfield is a roboticist with the Bristol Robotics Lab. His research interests include robot ethics, cognitive science, artificial intelligence and open science.
  • Dr. Joanna Goodey: Dr. Goodey is the head of the Freedoms and Justice Department in the European Union Agency for Fundamental Rights. Her research interests span criminology, fundamental rights, and human geography.

The relevant dates are as follows:

Deadline for submission of abstract: 12-May-2019
Notification of acceptance: 3-June-2019
Workshop Dates: 2-3-July-2019

Location: University College Dublin, Dublin

Here’s the link to the Call for Abstracts.

Implementing Ethics in Machines

If we wanted to implement ethics in a robot, how would we do it? What ethics should we implement? Are Asimov’s three laws enough?

Although there seems to be a philosophical consensus that Asimov’s Three Laws of Robotics are nowhere near sufficient for a satisfactory ethical robot, there are still attempts to create robots that implement some version of them. Perhaps this tells us something about ourselves as human beings. Even for researchers, fiction exerts a power of imagination and framing that is difficult to shake off.

Read More

Can Machines be Moral Agents?

The question of whether a machine can ever be a moral agent, i.e., an agent that can be held morally responsible, depends on which side of the philosophical debate one falls on. Normally, we consider human beings to be moral agents, but technology created by humans to be morally neutral. A knife, for instance, is not a moral entity: no one expects to reprimand the knife if it cuts one’s hand, or to praise it if it slices through vegetables. The human wielding the knife is responsible for the actions performed with it. Thus, the knife’s effect on the world, whether positive or negative, is not the responsibility of the knife, even though it is clearly a consequence of the knife.

Does this mean that only human beings are morally responsible for the consequences of technological objects in the environment? Are all artifacts exempt from morality? Clearly, this is not entirely black-and-white. For instance, infants and children are usually not held morally responsible for their actions, while artificial persons (such as corporations) are. So naturalness or artificiality is not the sole determining criterion.

Read More

What is Machine Ethics?

What does it mean for a machine to have ethics? Is that even possible? Why would we care if a machine is ethical or not, as long as it does what we tell it to do?

There are a variety of positions on each of these questions. The two extreme positions on whether a machine can be ethical are as follows:

  • Machines can never have ethics, since machines are bereft of emotions and emotions are integral to ethics.
  • Humans are not special. It is entirely possible that some machines could have a sense of ethics.

Read More

Machine Ethics – Welcome

This is the first post on the Machine Ethics blog. In this blog, I, along with some guests, will be posting article reviews, positions, thoughts, and (sometimes) news about the field of Machine Ethics. These may relate to Artificial Intelligence, Philosophy, Logic, Law, Politics, or (better yet) all of them!

If you want to contribute a post, get in touch.