Welcome to Vivek Nallur’s (mostly) academic site on the web. I am an academic who is interested in computational machine ethics, complex systems, emergence, and decentralized mechanisms of adaptation.
I’m very interested in complex self-adaptive systems, engineering emergent feedback loops, predicting and controlling emergence in humano-tech systems (where technical systems interact heavily with human desires/abilities), and engineering robust systems from non-robust parts. If you’re interested in collaborating, or just want to chat about a specific topic, get in touch.
Projects I’m Involved In
COTHROM – Computing Thoughtful Rules for Migrants
CONSUS – Crop Optimisation through Sensing, Understanding and Visualization – Digital, precision agriculture and crop science
COMBAT – COvid-19 Modelling through Agent-Based Techniques
IEEE P7008 Working Group on Standard for Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems
ELSI Panel Member, Open Ambient Assisted Living – The OpenAAL project targets the fast co-creation of scalable and affordable solutions to support the care of vulnerable people
One of the privileges of being an academic is that I get to work with wonderful PhD students. These are mine (in order of starting):
Harshani Nagahamulla (2019): Harshani is a part of the CONSUS project, and her work focusses on intelligent decision support, in particular counter-factual analysis (what-if/what-if-not scenarios).
Rajitha Ramanayake (2020): Rajitha is a part of the Machine Ethics Research Group. His research investigates the creation of ethical models in autonomous agents.
- Implementing Pro-social Rule Bending in an Elder-care Robot Environment (preprint) 15th International Conference on Social Robotics, Doha, Qatar, December 2023. Many ethical issues arise when robots are introduced into elder-care settings. When ethically charged situations occur, robots ought to be able to handle them appropriately. Some experimental approaches use (top-down) moral generalist approaches, like Deontology and Utilitarianism, to implement ethical decision-making. Others have advocated the use of bottom-up approaches, such as learning algorithms, to learn ethical patterns from human behaviour. Both approaches have their shortcomings when it comes to real-world implementations. Human beings have been observed to use a hybrid form of ethical reasoning called Pro-Social Rule Bending, where top-down rules and constraints broadly apply, but in particular situations, certain rules are temporarily bent. This paper reports on implementing such a hybrid ethical reasoning approach in elder-care robots. We show through simulation studies that it leads to better upholding of human values such as autonomy, whilst not sacrificing beneficence.
- Statutory Professions in AI Governance and their consequences for explainable AI (preprint) 1st World Conference on eXplainable Artificial Intelligence (xAI-2023), Lisbon, Portugal, July 2023. Intentional and accidental harms arising from the use of AI have impacted the health, safety and rights of individuals. While regulatory frameworks are being developed, there remains a lack of consensus on the methods necessary to deliver safe AI. The potential for explainable AI (XAI) to contribute to the effectiveness of AI regulation is being increasingly examined. Regulation must include methods to ensure compliance on an ongoing basis, though there is an absence of practical proposals on how to achieve this. For XAI to be successfully incorporated into a regulatory system, the individuals who are engaged in interpreting/explaining the model to stakeholders should be sufficiently qualified for the role. Statutory professionals are prevalent in domains in which harm can be done to the health, safety and rights of individuals; the most obvious examples are doctors, engineers and lawyers. Those professionals are required to exercise skill and judgement, and to defend their decision-making process in the event of harm occurring. We propose that a statutory profession framework be introduced as a necessary part of the AI regulatory framework, for compliance and monitoring purposes. We refer to this new statutory professional as an AI Architect (AIA). The AIA would be responsible for ensuring that the risk of harm is minimised, and accountable in the event that harms occur. The AIA would also be relied on to provide appropriate interpretations/explanations of XAI models to stakeholders. Further, in order to satisfy themselves that the models have been developed in a satisfactory manner, the AIA would require models to have appropriate transparency. Therefore, it is likely that the introduction of an AIA system would lead to an increase in the use of XAI, enabling AIAs to discharge their professional obligations.
- Anxiety Among Migrants – Questions for Agent Simulation (preprint) Nominated for Most Visionary Paper at IDEA Workshop, AAMAS, June 2023. This paper starts with the hypothesis (and presents some evidence) that anxiety in migrants is sufficiently important to be modelled. It presents a small (and very incomplete) review of emotion modelling in the literature. It asks how these models might be translated into agent-based modelling, and whether this can be done orthogonally to the specific modelling of agents’ goals and capabilities. This short paper is offered as a motivator for discussion, rather than a discussion of results.
- A Partially Synthesized Position on the Automation of Machine Ethics (preprint) (online version) Digital Society, 2(2): 14, August 2023. We economically express our respective prior positions on the automation of machine ethics, and then seek a corporate, partly synthesized position that could underlie, at least to a degree, our future machine-ethics work, and such work by others as well.
- A Small Set of Ethical Challenges For Elder-care Robots (preprint)(online version) Robophilosophy Conference Series, University of Helsinki, August 2022. Elder-care robots have been suggested as a solution for rising elder-care needs. Although many elder-care agents are commercially available, there are concerns about the behaviour of these robots in ethically charged situations. However, we do not find any evidence of ethical reasoning abilities in commercial offerings. Assuming that this is due to the lack of agreed-upon standards, we offer a categorization of elder-care robots, and ethical ‘whetstones’ for them to hone their abilities.
- Assessing the Appetite for Trustworthiness and the Regulation of Artificial Intelligence in Europe (online version) Proceedings of the 28th Irish Conference on Artificial Intelligence and Cognitive Science. Vol:2771. Pages: 133-144. While Artificial Intelligence (AI) is near ubiquitous, there is no effective control framework within which it is being advanced. Without a control framework, trustworthiness of AI is impacted. This negatively affects adoption of AI and reduces its potential for social benefit. For international trade and technology cooperation, effective regulatory frameworks need to be created. This study presents a thematic analysis of national AI strategies for European countries in order to assess the appetite for an AI regulatory framework. A Declaration of Cooperation on AI was signed by EU members and non-members in 2018. Many of the signatories have adopted national strategies on AI. In general there is a high level of homogeneity in the national strategies. An expectation of regulation, in some form, is expressed in the strategies, though a reference to AI specific legislation is not universal. With the exception of some outliers, international cooperation is supported. The shape of effective AI regulation has not been agreed upon by stakeholders but governments are expecting and seeking regulatory frameworks. This indicates an appetite for regulation. The international focus has been on regulating AI solutions and not on the regulation of individuals. The introduction of a professional regulation system may be a complementary or alternative regulatory strategy. Whether the appetite and priorities seen in Europe are mirrored worldwide will require a broader study of the national AI strategy landscape.
- Landscape of Machine Implemented Ethics (preprint) (online version via Springer Nature Sharedit) Journal of Science and Engineering Ethics. DOI: 10.1007/s11948-020-00236-y This paper surveys the state-of-the-art in machine ethics, that is, considerations of how to implement ethical behaviour in robots, unmanned autonomous vehicles, or software systems. The emphasis is on covering the breadth of ethical theories being considered by implementors, as well as the implementation techniques being used. There is no consensus on which ethical theory is best suited for any particular domain, nor is there any agreement on which technique is best placed to implement a particular theory. Another unresolved problem in these implementations of ethical theories is how to objectively validate the implementations. The paper discusses the dilemmas being used as validating ‘whetstones’ and whether any alternative validation mechanism exists. Finally, it speculates that an intermediate step of creating domain-specific ethics might be a possible stepping stone towards creating machines that exhibit ethical behaviour.
- AI, Society & Media – March’2020 Dublin City University, Dublin
- Intelligence & Ethics in Machines: Utopia or Dystopia – August’2019 CERN, Geneva
- Machine Ethics Landscape – March’2019 University of Helsinki, Helsinki
Can machines be programmed to be ethical? Answering this question requires interrogating ourselves to understand what ethics are, and how they develop. Computer Science is not well-positioned to answer these critical questions by itself, and therefore needs to collaborate with other disciplines, such as philosophy, sociology, law and even literature! How we collectively approach the problem will determine whether a machine can be programmed to make ethical choices and adapt to evolving situations, or whether it can only be programmed to follow specific rules. Increasingly, computers are taking on roles where they might have to prioritize actions based on ethical judgements, for example care robots. Machine Ethics explores whether human intervention will always be required in order to ensure the machine’s ethical behaviour, or whether an ethical framework can be designed and implemented in a way that is socially acceptable. The Machine Ethics Research Group has organized two inter-disciplinary workshops on Implementing Machine Ethics, which were well attended and resulted in robust discussion across multiple disciplines.
Multi-Agent Systems (MAS) are my preferred tool for approaching problems in self-adaptation, complexity, emergence, etc. They lend themselves to extensive experimentation: having all agents follow simple rules, implementing complex machine-learning algorithms, or investigating the interplay of different algorithms used at the same time are all possible with relatively simple conceptual structures. Decoding the end result and teasing out the real factor(s) responsible for a particular behaviour is considerably more difficult :-). But that’s a part of the fun!
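To make the “simple rules” point concrete, here is a minimal, hypothetical sketch (a toy model, not taken from any of the projects above): binary-state agents on a ring each adopt the majority state among themselves and their two neighbours, and stable clusters emerge from purely local decisions.

```python
import random

def step(states):
    """One synchronous update: each agent adopts the majority state
    among itself and its two ring neighbours."""
    n = len(states)
    return [
        1 if states[(i - 1) % n] + states[i] + states[(i + 1) % n] >= 2 else 0
        for i in range(n)
    ]

def run(n_agents=20, n_steps=10, seed=42):
    """Start from a random binary configuration and iterate the local rule."""
    rng = random.Random(seed)
    states = [rng.randint(0, 1) for _ in range(n_agents)]
    history = [states]
    for _ in range(n_steps):
        states = step(states)
        history.append(states)
    return history
```

Even in this toy, the global outcome is not written anywhere in the rule: blocks of like-minded agents are fixed points, while isolated dissenters get absorbed, so the system self-organizes into stable clusters. Working out *which* clusters form from a given initial configuration is exactly the kind of decoding problem mentioned above.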
To know more about my professional activities, take a look at my research and teaching pages. If you are interested in doing a PhD with me, take a look at the doing a PhD page, go through the research section, and try to come up with a 1-page proposal that conveys the gist of your idea and how it dovetails with my research interests.