Welcome to Vivek Nallur’s (mostly) academic site on the web. I am an academic who is interested in machine ethics, complex systems, emergence, and decentralized mechanisms of adaptation.
I’m particularly interested in complex self-adaptive systems, engineering emergent feedback loops, predicting and controlling emergence in humano-tech systems (where technical systems interact heavily with human desires and abilities), and engineering robust systems from non-robust parts. If you’re interested in collaborating, or just want to chat about a specific topic, get in touch.
Projects I’m Involved In
- COMBAT – COvid-19 Modelling through Agent-Based Techniques
- IEEE P7008 Working Group on Standard for Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems
- ELSI Panel Member, Open Ambient Assisted Living – The OpenAAL project targets the fast co-creation of scalable and affordable solutions to support the care of vulnerable people
One of the privileges of being an academic is that I get to work with wonderful PhD students. These are mine (in order of starting):
- Harshani Nagahamulla (2019): Harshani is part of the CONSUS project, and her work centres on intelligent decision support, with a focus on providing counter-factual analysis (what-if/what-if-not scenarios)
- Rajitha Ramanayake (2020): Rajitha is part of the Machine Ethics Research Group. He started his PhD very recently and is focussed on investigating the creation of ethical models in autonomous agents
- Assessing the Appetite for Trustworthiness and the Regulation of Artificial Intelligence in Europe (online version) Proceedings of the 28th Irish Conference on Artificial Intelligence and Cognitive Science. Vol:2771. Pages: 133-144. While Artificial Intelligence (AI) is near ubiquitous, there is no effective control framework within which it is being advanced. Without a control framework, trustworthiness of AI is impacted. This negatively affects adoption of AI and reduces its potential for social benefit. For international trade and technology cooperation, effective regulatory frameworks need to be created. This study presents a thematic analysis of national AI strategies for European countries in order to assess the appetite for an AI regulatory framework. A Declaration of Cooperation on AI was signed by EU members and non-members in 2018. Many of the signatories have adopted national strategies on AI. In general there is a high level of homogeneity in the national strategies. An expectation of regulation, in some form, is expressed in the strategies, though a reference to AI specific legislation is not universal. With the exception of some outliers, international cooperation is supported. The shape of effective AI regulation has not been agreed upon by stakeholders but governments are expecting and seeking regulatory frameworks. This indicates an appetite for regulation. The international focus has been on regulating AI solutions and not on the regulation of individuals. The introduction of a professional regulation system may be a complementary or alternative regulatory strategy. Whether the appetite and priorities seen in Europe are mirrored worldwide will require a broader study of the national AI strategy landscape.
- Landscape of Machine Implemented Ethics (preprint) (online version via Springer Nature Sharedit) Journal of Science and Engineering Ethics. DOI: 10.1007/s11948-020-00236-y This paper surveys the state-of-the-art in machine ethics, that is, considerations of how to implement ethical behaviour in robots, unmanned autonomous vehicles, or software systems. The emphasis is on covering the breadth of ethical theories being considered by implementors, as well as the implementation techniques being used. There is no consensus on which ethical theory is best suited for any particular domain, nor is there any agreement on which technique is best placed to implement a particular theory. Another unresolved problem in these implementations of ethical theories is how to objectively validate the implementations. The paper discusses the dilemmas being used as validating ‘whetstones’ and whether any alternative validation mechanism exists. Finally, it speculates that an intermediate step of creating domain-specific ethics might be a possible stepping stone towards creating machines that exhibit ethical behaviour.
- “EHLO WORLD” – Checking if your conversational AI knows right from wrong (preprint) Accepted at SoCAI, AISB (postponed due to Covid-19) In this paper we discuss approaches to evaluating and validating the ethical claims of a Conversational AI system. We outline considerations around both a top-down regulatory approach and bottom-up processes. We describe the ethical basis for each approach and propose a hybrid, which we demonstrate by taking the case of a customer service chatbot as an example. We speculate on the kinds of top-down and bottom-up processes that would need to exist for a hybrid framework to successfully function as both an enabler and a shepherd among multiple use-cases and multiple competing AI solutions.
- AI, Society & Media – March’2020 Dublin City University, Dublin
- Intelligence & Ethics in Machines: Utopia or Dystopia – August’2019 CERN, Geneva
- Machine Ethics Landscape – March’2019 University of Helsinki, Helsinki
Can machines be programmed to be ethical? Answering this question requires interrogating ourselves to understand what ethics are, and how they develop. Computer Science is not well-positioned to answer these critical questions by itself, and therefore needs to collaborate with other disciplines, such as philosophy, sociology, law and even literature! How we collectively approach the problem will determine whether a machine can be programmed to make ethical choices and adapt to evolving situations, or whether it can only be programmed to follow specific rules. Increasingly, computers are taking on roles where they might have to prioritize actions based on ethical judgements, for example care robots. Machine Ethics explores whether human intervention will always be required to ensure the machine’s ethical behaviour, or whether an ethical framework can be designed and implemented in a way that is socially acceptable. The Machine Ethics Research Group has organized two inter-disciplinary workshops on Implementing Machine Ethics, which were well-attended and resulted in robust discussion across multiple disciplines.
Multi-Agent Systems (MAS) are my preferred tool for approaching problems in self-adaptation, complexity, emergence, etc. They lend themselves to extensive forms of experimentation: having all agents follow simple rules, implementing complex machine-learning algorithms, and investigating the interplay of different algorithms being used at the same time are all possible with relatively simple conceptual structures. Decoding the end result and teasing out the real factor(s) responsible for a particular behaviour is considerably more difficult :-). But that’s part of the fun!
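To give a flavour of the "simple rules, system-level behaviour" idea, here is a minimal sketch (not taken from any of my projects; all names and parameters are hypothetical) of an agent population in which each agent repeatedly polls a few random peers and adopts their majority opinion. No agent sees the whole system, yet the population tends to drift towards a shared opinion:

```python
import random

def run_simulation(n_agents=100, rounds=50, sample_size=3, seed=42):
    """Each agent holds a binary 'opinion'. Every round, each agent
    polls a few random peers and adopts the majority view among them,
    a purely local rule with no global coordinator."""
    rng = random.Random(seed)
    opinions = [rng.choice([0, 1]) for _ in range(n_agents)]
    for _ in range(rounds):
        for i in range(n_agents):
            peers = rng.sample(range(n_agents), sample_size)
            votes = sum(opinions[j] for j in peers)
            # adopt 1 if a strict majority of the sampled peers hold 1
            opinions[i] = 1 if votes * 2 > sample_size else 0
    return opinions

final = run_simulation()
print(sum(final) / len(final))  # fraction of agents holding opinion 1
```

Even a toy like this illustrates the experimental difficulty mentioned above: whether and how fast consensus emerges depends on the sample size, the update order, and the random seed, and disentangling which factor drove a particular run takes deliberate analysis.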
To know more about my professional activities, take a look at my research and teaching pages. If you are interested in doing a PhD with me, take a look at the doing a PhD page, go through the research section, and try to come up with a 1-page proposal that conveys the gist of your idea and how it dovetails with my research interests.