Introduction
Computer programs commonly exhibit limited flexibility in their ability to handle unforeseen events and environmental conditions. That is, they can only act upon events and conditions they were designed for, and they do so only in the ways they were programmed to react to those events. In many domains, such rigid program behavior meets the application requirements well. For instance, a warehouse inventory management application needs to process only predefined inventory conditions and events. However, there are application domains in which rigidity of this type may negatively affect system behavior and even render the system impractical. For example, an unmanned vehicle may face a variety of unanticipated conditions, e.g., changes in road shape and obstacle distribution. These may require new maneuvers and planning for which the vehicle was not pre-programmed. In light of the growing need for computer systems that can cope with dynamic, unpredictable environments, and with ever more networked and distributed computing environments, agent-based systems have evolved, comprising software agents of various types and designs. In this chapter, we introduce such agents and agent-based systems.
A software agent is a software entity that performs tasks on behalf of another entity, be it a software, a hardware, or a human entity. This is a widely agreed-upon interpretation of the term agent in the context of software systems. This, however, leaves much freedom for further classification of agents. Common dimensions according to which such classification may be performed are autonomy and intelligence. That is, one may examine whether an agent is autonomous in its activity and whether its computation and actions exhibit intelligence.
With such dimensions in mind, an agent in its basic form is neither autonomous nor intelligent. For example, it may perform pre-defined tasks such as data collection and transmission, as in the case of Simple Network Management Protocol (SNMP) agents [] (which are used for network management). The fact that the tasks are pre-defined leaves little freedom of action; hence, the agent's autonomy is rather limited. The fact that the agent merely collects and transmits data leaves no room for intelligent manipulation; hence, the agent needs no intelligence.
Following these dimensions, more sophisticated agents may exhibit either autonomy or intelligence, or both. For example, Belief, Desire, Intention (BDI) agents [] are agents that are equipped with software layers specifically designed for intelligent reasoning and action. They maintain and manipulate plans and plan-relevant data and then execute their preferred plans to meet their goals. Such intelligent behavior is based on concepts such as belief, desire, intention, and goal, all of which are implemented as software artifacts within the agents. Thus, BDI agents exhibit both intelligence and autonomy.
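To make the BDI concepts concrete, the following is a minimal, illustrative sketch of a BDI-style reasoning cycle. All names here (Plan, BDIAgent, the goal and action strings) are hypothetical constructions for this example; real BDI frameworks are considerably richer.

```python
# Hypothetical sketch of a BDI-style agent: beliefs, desires, and a
# plan library from which the agent selects intentions.

class Plan:
    def __init__(self, goal, steps, context):
        self.goal = goal          # the goal this plan achieves
        self.steps = steps        # ordered actions to execute
        self.context = context    # predicate over beliefs: is the plan applicable?

class BDIAgent:
    def __init__(self, beliefs, desires, plan_library):
        self.beliefs = beliefs            # what the agent holds to be true
        self.desires = desires            # goals it would like to achieve
        self.plan_library = plan_library  # candidate plans, indexed by goal
        self.intentions = []              # plans the agent has committed to

    def deliberate(self):
        # Commit to the first applicable plan for each desired goal.
        for goal in self.desires:
            for plan in self.plan_library.get(goal, []):
                if plan.context(self.beliefs):
                    self.intentions.append(plan)
                    break

    def act(self):
        # Execute the steps of every committed plan.
        return [step for plan in self.intentions for step in plan.steps]

# Usage: an agent that intends to recharge when it believes its battery is low.
recharge = Plan("recharge",
                steps=["goto_dock", "plug_in"],
                context=lambda beliefs: beliefs["battery"] < 0.2)
agent = BDIAgent(beliefs={"battery": 0.1},
                 desires=["recharge"],
                 plan_library={"recharge": [recharge]})
agent.deliberate()
print(agent.act())  # ['goto_dock', 'plug_in']
```

The key design point, reflected in BDI systems generally, is the separation between what the agent believes, what it wants, and what it has committed to do: deliberation turns desires into intentions only when the agent's beliefs make a plan applicable.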
Another important dimension of agenthood is sociality. Sociality refers to the ability of an agent to engage in meaningful interaction and collaboration with other agents and non-agent entities. For instance, an agent may need to execute a task that can be performed only in a collaborative manner. To collaborate, an agent must be able to communicate with, and understand, other agents. It may also need to negotiate, coordinate, and share resources. In some cases, agents may need to take part in a larger system composed of multiple agents, referred to as a multi-agent system.
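As a small illustration of such sociality, consider agents that communicate by exchanging typed messages. The sketch below is a simplified assumption of how inter-agent messaging might look; the class and performative names ("request", "agree") are hypothetical, loosely inspired by agent communication languages in which each message carries a performative indicating its intent.

```python
# Hypothetical sketch of message-based agent interaction. Each agent
# has an inbox; messages are (sender, performative, content) triples.

class MsgAgent:
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def send(self, other, performative, content):
        # Deliver a message into the other agent's inbox.
        other.inbox.append((self.name, performative, content))

    def process(self):
        # Handle pending messages: here, simply agree to every request.
        replies = []
        for sender, performative, content in self.inbox:
            if performative == "request":
                replies.append((sender, "agree", content))
        self.inbox.clear()
        return replies

# Usage: one agent requests help with a task; the other agrees.
alice = MsgAgent("alice")
bob = MsgAgent("bob")
alice.send(bob, "request", "lift-box")
print(bob.process())  # [('alice', 'agree', 'lift-box')]
```

Even in this toy form, the example shows the two ingredients collaboration requires: a shared message format both agents understand, and a convention about what each message type means.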
To facilitate the development of agents that exhibit such dimensions (and others), it is necessary to specify the dimensions, the underlying concepts, and the software constructs needed. In this chapter, we aim to briefly introduce these.
Dimensions of Agenthood
There are many dimensions of agenthood. Yet, there is no single set of dimensions that is widely agreed upon as the fundamental set for defining agents. Nevertheless, we refer here to a core set that we find central to the definition and the development of software agents. These include autonomy, intelligence, sociality, and mobility, on which we elaborate in the following.
2.1 Autonomy
Autonomy is among the most important and distinctive agent properties. Autonomy refers to the ability of an agent to perform unsupervised computation and action and to pursue its goals without being explicitly programmed or instructed to do so. Autonomy further refers to the encapsulation of data and functionality within the agent. This aspect of autonomy is, however, also present in objects as defined in object-oriented paradigms and is therefore not unique to agents. An autonomous agent is assumed to have full control of its internal state and its behaviors. To enable such autonomy, an agent's blueprint should consist of components that support autonomy.
An important autonomy-enabling component is an internal state module. Such an internal state usually holds and maintains the state of the agent in its environment as perceived and interpreted by the agent itself. For example, an agent may believe that its physical location is at some ( x , y ) coordinate in a plane. Regardless of whether this is its true location, its internal state should hold that information and update it as the agent sees fit. An internal state of this sort facilitates autonomy as it allows the agent to act upon its state without requiring external supervision. An internal state is also important for implementing artificial intelligence capabilities within the agent, as we discuss later in this chapter.
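The location example above can be sketched as a minimal internal state module. The class and attribute names below are hypothetical, chosen only to illustrate the encapsulation: the believed position is private to the agent, and only the agent itself updates it.

```python
# Hypothetical sketch of an internal state module: the agent encapsulates
# its *believed* (x, y) position, which may differ from its true position.

class PositionedAgent:
    def __init__(self, x, y):
        self._believed_pos = (x, y)  # internal state, private to the agent

    @property
    def believed_position(self):
        # Outside entities may observe the belief, but not modify it.
        return self._believed_pos

    def move(self, dx, dy):
        # The agent itself updates its belief as it acts -- no external
        # supervisor writes to its state.
        x, y = self._believed_pos
        self._believed_pos = (x + dx, y + dy)

# Usage: the agent starts at the origin and updates its own belief.
agent = PositionedAgent(0, 0)
agent.move(2, 3)
print(agent.believed_position)  # (2, 3)
```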
Agents additionally exhibit autonomy by implementing behaviors. A behavior is usually an activity composed of more than one elementary action. It is commonly assumed to be initiated and controlled by the agent itself, without external instruction. Some behaviors may be iterative or continuous, while others are exercised in a one-shot fashion. Regardless, behaviors allow an agent to pursue its goals in an autonomous manner.
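The distinction between one-shot and iterative behaviors can be sketched as follows. This is an illustrative assumption of how an agent might schedule its own behaviors; the class names are hypothetical, loosely echoing the behavior abstractions found in common agent platforms.

```python
# Hypothetical sketch of agent behaviors: a one-shot behavior runs once,
# a ticker behavior repeats a fixed number of times, and the agent itself
# schedules them without external control.

class OneShotBehavior:
    def __init__(self, action):
        self._action = action
        self._done = False

    def step(self):
        self._action()
        self._done = True

    def done(self):
        return self._done

class TickerBehavior:
    def __init__(self, action, times):
        self._action = action
        self._remaining = times

    def step(self):
        self._action()
        self._remaining -= 1

    def done(self):
        return self._remaining <= 0

class SimpleAgent:
    def __init__(self):
        self.behaviors = []
        self.log = []

    def add_behavior(self, behavior):
        self.behaviors.append(behavior)

    def run(self):
        # The agent interleaves its own behaviors until all complete.
        while any(not b.done() for b in self.behaviors):
            for b in self.behaviors:
                if not b.done():
                    b.step()

# Usage: one-shot initialization followed by three sensing iterations.
agent = SimpleAgent()
agent.add_behavior(OneShotBehavior(lambda: agent.log.append("init")))
agent.add_behavior(TickerBehavior(lambda: agent.log.append("sense"), 3))
agent.run()
print(agent.log)  # ['init', 'sense', 'sense', 'sense']
```

Note that the scheduling loop belongs to the agent, not to its environment: once behaviors are added, the agent decides when each runs, which is precisely the autonomy the text describes.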