Research

I’m very interested in complex self-adaptive systems: engineering emergent feedback loops, predicting and controlling emergence in humano-tech systems (where technical systems interact heavily with human desires and abilities), and engineering robust systems from non-robust parts. If you’re interested in collaborating, or just want to chat about a specific topic, get in touch.

Multi-Agent Systems

Multi-Agent Systems (MAS) are my preferred tool for approaching problems in self-adaptation, complexity, emergence, etc. They lend themselves to extensive experimentation: having all agents follow simple rules, implementing complex machine-learning algorithms, or investigating the interplay of different algorithms used at the same time are all possible with relatively simple conceptual structures. Decoding the end result and teasing out the real factor(s) responsible for a particular behaviour is considerably more difficult :-). But that’s part of the fun!
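To give a flavour of the "simple rules, surprising collective behaviour" idea, here is a minimal sketch (my own illustration, not taken from any particular MAS framework): agents on a ring each follow one local rule — adopt the majority opinion among themselves and their two neighbours — and the population settles into stable blocks of agreement that no single agent planned.

```python
import random

random.seed(42)

class Agent:
    """An agent following one simple local rule: adopt the majority
    opinion among itself and its two ring neighbours."""
    def __init__(self, opinion):
        self.opinion = opinion

def step(agents):
    """Synchronous update: every agent applies its rule at the same time."""
    n = len(agents)
    new = []
    for i, a in enumerate(agents):
        left = agents[(i - 1) % n].opinion
        right = agents[(i + 1) % n].opinion
        total = left + right + a.opinion  # count of '1' opinions among three
        new.append(1 if total >= 2 else 0)
    for a, o in zip(agents, new):
        a.opinion = o

agents = [Agent(random.randint(0, 1)) for _ in range(50)]
for _ in range(100):
    step(agents)

# The population has frozen into stable local blocks of shared opinion
print([a.opinion for a in agents])
```

Even in a toy like this, working backwards from the frozen pattern to the initial conditions responsible for it is the hard (and fun) part.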

Game Theory

Game theory mostly tries to understand what happens when people interact in a rational manner. The fact that people don’t always behave rationally has been painfully obvious to many behavioural economists. However, this doesn’t mean that game theory can’t be used to make useful predictions. In fact, combining games with adaptive multi-agent systems is an active area of my research. We have a new simulator (called Arena) that can easily simulate tens of thousands of agents simultaneously playing multiple games in any fashion the researcher wants. Machine-learning agents vs. evolutionary agents vs. my-new-really-great-strategy? It can all be done with just a bit of configuration and coding. Check it out here.
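As a minimal sketch of what "adaptive agents playing games" means (this is a generic illustration, not Arena’s actual API), here are two agents repeatedly playing the Prisoner’s Dilemma: a fixed-rule Tit-for-Tat player against a simple value-tracking learner. The `EpsilonGreedy` class and its parameters are my own hypothetical names for illustration.

```python
import random

random.seed(0)

# Prisoner's Dilemma payoffs, keyed by (my_move, their_move)
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

class TitForTat:
    """Fixed-rule agent: cooperate first, then copy the opponent's last move."""
    def move(self, opp_last):
        return "C" if opp_last is None else opp_last
    def learn(self, my_move, payoff):
        pass  # no adaptation

class EpsilonGreedy:
    """Adaptive agent: tracks the average payoff of C and D,
    and mostly plays whichever has done better so far."""
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.totals = {"C": 0.0, "D": 0.0}
        self.counts = {"C": 1, "D": 1}
    def move(self, opp_last):
        if random.random() < self.epsilon:
            return random.choice("CD")  # occasional exploration
        return max("CD", key=lambda m: self.totals[m] / self.counts[m])
    def learn(self, my_move, payoff):
        self.totals[my_move] += payoff
        self.counts[my_move] += 1

def play(a, b, rounds=200):
    """Repeated game: both agents move each round, then update."""
    last_a = last_b = None
    score_a = score_b = 0
    for _ in range(rounds):
        ma, mb = a.move(last_b), b.move(last_a)
        pa, pb = PAYOFF[(ma, mb)], PAYOFF[(mb, ma)]
        a.learn(ma, pa)
        b.learn(mb, pb)
        score_a, score_b = score_a + pa, score_b + pb
        last_a, last_b = ma, mb
    return score_a, score_b

print(play(EpsilonGreedy(), TitForTat()))
```

Swapping in a different strategy is just a matter of writing another class with `move` and `learn` methods — which is the kind of plug-and-play experimentation Arena is built around, at much larger scale.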

Decentralized Self-Adaptation

Self-adaptation as a concept has been recognized for a long time, from biological systems to human affairs. Most natural systems exhibit this phenomenon, viz., the effecting of change by a system to ensure that it continues to achieve the utility that it previously did. Different self-adaptive systems exhibit adaptivity in different ways. In human-designed systems, the first systematic efforts to create self-adaptive systems were in the domain of control-loop design. Regardless of type, most systems can be differentiated on the basis of where the locus of control for self-adaptation lies:

  • Centralized: In this type of system, there is usually a hierarchy of components. Components at higher levels are responsible for goal management and planning for change, while those at lower levels are responsible for immediate action and feedback. Decision-making is concentrated in one component, or a closely related set of components. Centralized self-adaptive systems exhibit a communication pattern characterized by sensory information (data) flowing from dumb components to the central decision-maker, and instructions (commands) flowing from the decision-maker to the dumb components. ‘Dumb’ is used here in the sense of a component having no awareness of itself or its effects on the environment. Predictable and cohesive responses to change are advantages of this type of system. However, reaction times get slower and slower as the size of the system increases.
  • Decentralized: On the other hand, decentralized systems do not have a hierarchy of components. Each component acts as an individual agent with its own goals and its own perception of the environment. This has the advantage of quick reaction to change. But it also has the disadvantage of being fragmented. That is, different components may react differently to the same change stimulus. This could make the self-adaptation inefficient or even deleterious. The challenge in building a decentralized system is to ensure that all the agents collectively move the system towards a common goal. This is usually accomplished by agents communicating among themselves. However, the lack of a centralized communication pattern means that the communication protocol must itself be decentralized and environment-based.
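A classic illustration of agents collectively reaching a common goal with no central decision-maker is gossip-based averaging (a well-known decentralized technique; the sketch below is my own minimal version, not a production protocol). Each agent holds a local sensor reading and repeatedly averages its value with one randomly chosen peer; because every pairwise exchange preserves the sum, the whole system converges on the global mean.

```python
import random

random.seed(1)

# Each agent starts with a private local reading; the shared goal is for
# every agent to end up holding the global average of all readings.
readings = [random.uniform(0.0, 100.0) for _ in range(10)]
values = readings[:]

def gossip_round(values):
    """Each agent averages its value with one randomly chosen peer.
    Pairwise averaging preserves the total, so the mean is a fixed point."""
    n = len(values)
    for i in range(n):
        j = random.randrange(n)
        if i != j:
            avg = (values[i] + values[j]) / 2
            values[i] = values[j] = avg

for _ in range(50):
    gossip_round(values)

true_mean = sum(readings) / len(readings)
print(max(abs(v - true_mean) for v in values))  # the spread shrinks towards 0
```

Note what is absent: no agent ever sees the whole system, and no component issues commands — agreement is an emergent property of repeated local exchanges.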

COINE: Coordination, Organization, Institutions, Norms, Ethics

These are the particular problems that I’m interested in solving using Multi-Agent Systems.

Coordination

  • How do we create coordination mechanisms for (really) large open distributed systems, which have autonomous components?
    • Guaranteeing freedom from deadlock, for example

Organization

  • How can we depend on self-organization to meet system goals? For example, will self-organization result in distributive justice? What techniques/conditions can guarantee it?

Institutions

  • What structural patterns are most conducive to emergent institutions? Can we predict or control this emergence?

Norms

  • How do we engineer an intelligent socio-technical system such that a self-sustaining/enduring MAS is an emergent property?
  • Can we ensure some form of computational social justice among autonomous agents?

Ethics

  • How do we create an ethical autonomous system?
  • Whose ethics should we incorporate into the system?
  • Should these ethical values change as the system lives with interacting humans?