1. Introduction
This book aims to advance the understanding of self-organizing coalitions as a tool for managing problems that arise in multiple disciplines. On the one hand, we consider coalitions among members of a team that self-organize internally in order to maximize the group goal, while also keeping the individual members' goals in mind. On the other hand, such coalitions can also be used to model, manage or evaluate complex problems studied by disciplines like complex networks, cellular automata, multi-agent systems and game theory. In fact, one of the main goals of this volume is to address the use of cooperative scenarios by introducing coalitions into competitive games.
Complexity, and its underlying theory, refers to the study of many agents and their interactions. Here the concept of an agent is very broad and describes autonomous entities including animals, people, teams, organizations, etc. The system resulting from such interactions among the agents is called a multi-agent system (MAS).
Trying to model a large number of entities, and their non-linear interactions in a continuously changing environment, by means of classical mathematical tools can be a very hard, and often impossible, task. Within this context, computer simulations become the natural tool for performing this kind of analysis and evaluation. In the last two decades, these interactions have usually been modeled by means of agent-based models (ABM) and simulated using agent-based simulations (ABS). The aim of these agent-based models and simulations is to better understand real scenarios, analyzing their properties, strengths, weaknesses and limitations; but also to consider alternative worlds or agent societies whose configurations, rules or properties differ from those available in the real world. Summarizing Axelrod in the introduction to his book The Complexity of Cooperation []: the use of simulations can be considered a third way of doing science, contrasted with the two standard methods of induction and deduction. Induction aims to discover patterns in empirical data, while classical deduction starts from a set of axioms (assumptions about the model) and proves the propositions and theorems that can be derived from those assumptions. On the one hand, agent-based modeling shares conceptual elements with both methods, as it starts with a simple set of assumptions to design and run a simulation that then produces data that can be analyzed inductively. On the other hand, it does not prove theorems, and its data is produced from simulations rather than from real-world measurements. Therefore, agent-based modeling and simulation aid intuition in the analysis of real or artificial models, and can also be used to discard certain models or assumptions that attempt to describe reality.
Simulations can be divided into two main categories []. The first includes simulations that need to be very precise, providing a detailed image of reality; for instance, security simulations, aeronautics, army battles or evacuation scenarios usually demand a fine-grained, well-tuned simulation. In the second category we find simulations whose goal is to enhance our understanding of fundamental processes. In this case the use of simple assumptions is relevant in order to abstract away unnecessary details that complicate the model without shedding light on the system behavior. The simulations presented in this book belong to this second category, and the models run in the experiments try to be simple enough to abstract away unnecessary details, while at the same time enhancing our understanding of what happens at the micro level in order to explain the effects emerging at the macro level. In these simulation scenarios, even if the assumptions are simple, the results may not be; in fact, in some cases a very simple set of rules can give rise to behaviors that are extremely complex to analyze. Conway's Game of Life is a good example of how very simple rules can generate very complex behavior, and ultimately universal computation, using a simple cellular automaton model.
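To make this concrete, the following sketch implements the Game of Life rules just mentioned; the 5x5 toroidal grid and the "blinker" pattern are illustrative choices, not part of any model in this book. The entire dynamics reduces to one survival rule and one birth rule, yet patterns built from them can perform universal computation.

```python
# Minimal sketch of Conway's Game of Life on a small toroidal grid,
# illustrating how two simple local rules generate complex behavior.

def step(grid):
    """Apply one synchronous update of the Game of Life rules."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count live cells among the 8 neighbors (toroidal wrap-around).
            live = sum(grid[(r + dr) % rows][(c + dc) % cols]
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                       if (dr, dc) != (0, 0))
            # A live cell survives with 2 or 3 live neighbors;
            # a dead cell becomes alive with exactly 3 live neighbors.
            new[r][c] = 1 if live == 3 or (grid[r][c] and live == 2) else 0
    return new

# A "blinker": three cells in a row oscillate with period 2.
grid = [[0] * 5 for _ in range(5)]
for c in (1, 2, 3):
    grid[2][c] = 1

after_one = step(grid)
after_two = step(after_one)
print(after_two == grid)  # True: the blinker returns to its initial state
```

Running it shows the horizontal row of three cells turning vertical after one step and back to horizontal after two, a first hint of the structured behavior that emerges from these rules.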
Cellular automata are a class of systems usually described by a discrete spatial lattice of homogeneous cells, each of which takes one of a finite number of discrete states, together with discrete dynamics that update each cell's state taking into account the states of its local neighbors. They have traditionally been used for parallel computation, to simulate discrete dynamical systems, to study pattern formation, to model fundamental physics and, of course, to study complexity. For these reasons, cellular automata are a basic framework for several of the models described in this book.
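The scheme just described (a discrete lattice, a finite state set, and a synchronous local update) can be sketched in a few lines for the simplest case of a one-dimensional binary lattice. Rule 30 is used here only as an illustrative local rule; the lattice size and initial seed are likewise arbitrary.

```python
# Minimal sketch of a one-dimensional cellular automaton: binary cells on
# a periodic lattice, each new state determined by the cell and its two
# nearest neighbors (elementary CA Rule 30, chosen for illustration).

def rule30(left, center, right):
    """Local rule: map a 3-cell neighborhood to the cell's next state."""
    return left ^ (center | right)

def step(cells):
    """One synchronous update of the whole lattice (periodic boundary)."""
    n = len(cells)
    return [rule30(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])
            for i in range(n)]

# Start from a single live cell in the middle of the lattice.
cells = [0] * 11
cells[5] = 1
for _ in range(3):
    cells = step(cells)
print(cells)  # -> [0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0]
```

Every ingredient of the definition appears explicitly: the lattice is the list, the state set is {0, 1}, and the dynamics is the local rule applied simultaneously to every cell and its neighborhood.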
Cellular automata can serve as a basic model for homogeneous spatial interactions among agents, and the modeling of agent-based systems usually considers a collection of agents interacting in some specified way. To model such system interactions it is necessary to specify the topology (who interacts with whom) and the system dynamics (how the individual entities interact). Most complex systems have complicated, non-regular topologies that require a complex network framework for their representation. A complex network is a structure made up of nodes connected by one or more specific types of interdependency. Nodes represent individuals, groups or organizations, while connections (links, edges or ties) represent relations such as friendship, economic deals, internet paths, neuron interactions, etc. The resulting graph-based structures are often very complex, with social networks being the most popular application, and their analysis has produced a great number of research papers in fields like Economics, Telecommunications, Biology, Artificial Intelligence, Bioinformatics, Anthropology, Information Science, Social Psychology and Sociolinguistics, among others. Network Theory has emerged as a key technique for modeling, analyzing, simulating and understanding such complex network topologies, from both a static and a dynamic point of view. In this book we consider both the static and the dynamic perspectives, allowing agents to interact with a certain neighborhood, and even to modify their neighbor set dynamically by means of partner switching.
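The two ingredients above, a topology (who interacts with whom) and a dynamic neighbor set modified by partner switching, can be sketched as follows. The ring topology, the adjacency-set representation and the rewiring criterion are illustrative assumptions, not the specific mechanisms used in later chapters.

```python
# Minimal sketch of a network of agents with dynamic partner switching:
# the topology is an undirected graph stored as adjacency sets, and an
# agent may drop a link to one partner and rewire to a non-neighbor.
import random

def ring_network(n):
    """Undirected ring: agent i is linked to its two nearest neighbors."""
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

def switch_partner(network, agent, old_partner, rng):
    """Drop the link (agent, old_partner) and rewire agent to a randomly
    chosen non-neighbor, keeping the network undirected. Returns the new
    partner, or None if no candidate exists."""
    candidates = [j for j in network
                  if j != agent and j not in network[agent]]
    if not candidates:
        return None
    new_partner = rng.choice(candidates)
    network[agent].discard(old_partner)
    network[old_partner].discard(agent)
    network[agent].add(new_partner)
    network[new_partner].add(agent)
    return new_partner

rng = random.Random(42)
net = ring_network(6)
new = switch_partner(net, 0, 1, rng)  # agent 0 abandons partner 1
print(sorted(net[0]))  # agent 0 keeps neighbor 5 and gains a new partner
```

The degree of the switching agent is preserved: it loses one link and gains another, which is the usual setting when partner switching is studied as an alternative to simply leaving the interaction.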
Nowadays, the growing complexity of the ICT ecosystem and the appearance of concepts like sensor networks, traffic management, autonomic computing, ubiquitous computing, ambient intelligence, the internet of things, etc., call for new solutions to support the design and analysis of autonomous, adaptive and robust complex distributed systems. It is unrealistic to expect optimal distributed control of such systems, and even less so from a centralized point of view. This is not feasible because of the huge size of those systems, the unpredictability of their dynamic organization, their interactions with the environment, and the diversity of the goals pursued by the different devices. Self-organized models are potentially good candidates for understanding such complex behavior, where emergent phenomena may arise from numerous interacting components, and where self-organization can be a powerful tool to manage this complexity. Self-organization is a process carried out by a huge number of autonomous entities distributed over space, connected locally or through a network topology, but with a limited communication range. The building blocks are inherently dynamic autonomous entities that work in a distributed, decentralized and loosely coupled fashion over a continuously changing environment. Due to these characteristics, multi-agent systems and agent-based simulation have been a reference model for designing and engineering self-organizing systems.