Special Issue on

Multiagent Interaction without Prior Coordination

Published in the Journal of Autonomous Agents and Multi-Agent Systems



Interaction between agents is the defining attribute of multiagent systems, encompassing problems such as planning in decentralized settings, learning models of other agents, composing teams with high task performance, and achieving resource-bounded communication and coordination. A wide variety of methodologies has been used to address such problems, including symbolic reasoning about negotiation and argumentation, distributed optimization, and machine learning methods such as multiagent reinforcement learning. The majority of these well-studied methods depend on some form of prior coordination. Often, the coordination occurs at the level of the problem definition. For example, learning algorithms may assume that all agents share a common learning method or prior beliefs; distributed optimization methods may assume specific structural constraints regarding the partition of the state space or costs/rewards; and symbolic methods often make strong assumptions regarding norms and protocols. In realistic problems, however, these assumptions are easily violated, calling for new models and algorithms that specifically address multiagent interaction without prior coordination. Similar issues are becoming increasingly pertinent in human-machine interaction, where intelligent adaptive behaviour is needed and assumptions regarding prior knowledge and communication are problematic.

This special issue seeks mature, high-quality research related to multiagent interaction without prior coordination. This includes empirical and theoretical investigations of issues arising from assumptions of prior coordination in interactive settings, as well as solutions in the form of novel models and algorithms for effective multiagent interaction without prior coordination.


A non-exclusive list of relevant topics includes:

Submission Details

Guest Editors