Springer International Publishing AG 2017
Katsuhide Fujita , Quan Bai , Takayuki Ito , Minjie Zhang , Fenghui Ren , Reyhan Aydogan and Rafik Hadfi (eds.) Modern Approaches to Agent-based Complex Automated Negotiation Studies in Computational Intelligence 10.1007/978-3-319-51563-2_1
Abstract
Existing trust and reputation management mechanisms for multi-agent systems have focused heavily on models that produce accurate evaluations of the trustworthiness of trustee agents, while trustees remain passive in the evaluation process. To achieve comprehensive trust management in complex multi-agent systems with subjective opinions, it is important for trustee agents to have mechanisms to actively learn, gain, and protect their reputation. Motivated by this, we introduce the BiTrust model, in which both truster and trustee agents can reason about each other before interacting. The experimental results show that the mechanism can improve the overall satisfaction of interactions and the stability of trustees' reputation by filtering out non-beneficial partners.
Introduction
Nowadays, Multi-Agent Systems (MASs) are perceived as a core technology for building diverse, heterogeneous, and distributed complex systems such as pervasive computing and peer-to-peer systems. However, the dynamism of MASs requires agents to have a mechanism to evaluate the trustworthiness of other agents before each transaction. Trust was introduced to MASs as the expectation of an agent about the future performance of another agent; thus, it is also considered an effective tool to initiate interactions between agents. However, Sen [] stated that there is not enough research on the establishment, engagement, and use of trust. The establishment of trust can be seen as the flip side of evaluation: it focuses on how trustees can actively gain trust from truster agents, especially when subjective opinions are ubiquitous. Moreover, in many real-world circumstances, not only are consumers concerned about service providers' reputation, but providers also care about who is making requests. For example, online auctions may fail due to the bad behaviour of bidders who refuse to pay after winning.
Since trust and reputation are crucial in open and distributed environments, classic single-sided trust evaluations are inadequate: truster agents also need to be evaluated for their credibility, i.e., their behaviour as consumers needs to be identified. To address this, we introduce the BiTrust (Bijective Trust) model, which enables trustee agents (providers) to reason about truster agents (consumers) and improve interaction satisfaction through a 2-layer evaluation filter. Specifically, in the first layer, providers evaluate the rating behaviour of consumers; in the second layer, providers evaluate the utility gain of the transaction with respect to their current expectation. This approach not only helps trustee agents choose a suitable strategy for gaining trust from truster agents but also lets them protect their reputation actively. This paper is a step toward the comprehensive trust management, in terms of establishing and using trust, called for in []. The experimental results show several benefits of this model.
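The 2-layer filter described above can be sketched in code as follows. The fairness statistic, the thresholds, and the utility estimate are illustrative assumptions for exposition, not the paper's exact formulation.

```python
# Illustrative sketch of BiTrust's 2-layer evaluation filter on the
# provider side. All names and thresholds are assumptions, not the
# paper's exact model.

def fair_rating_score(ratings_given, community_means):
    """Layer 1: how closely a consumer's past ratings track community opinion.

    ratings_given: list of (service_id, rating) pairs issued by the consumer.
    community_means: dict mapping service_id -> mean community rating.
    Returns a score in [0, 1]; 1 means a perfectly fair rater.
    """
    if not ratings_given:
        return 0.5  # no history: neutral prior
    deviations = [abs(r - community_means[s]) for s, r in ratings_given]
    return max(0.0, 1.0 - sum(deviations) / len(deviations))

def accept_request(consumer, provider, fairness_threshold=0.6):
    """Provider-side decision combining both filter layers."""
    # Layer 1: screen out consumers with biased rating behaviour.
    score = fair_rating_score(consumer.ratings_given, provider.community_means)
    if score < fairness_threshold:
        return False
    # Layer 2: accept only if the expected utility of the transaction
    # meets the provider's current expectation.
    expected = consumer.offered_price - provider.service_cost
    return expected >= provider.expected_utility
```

A request is thus denied either because the consumer's rating history looks biased (layer 1) or because the transaction is not worth the provider's while (layer 2), which matches the "filtering out non-beneficial partners" behaviour described above.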
The remainder of this paper is structured as follows. Sect. presents related work, Sect. describes the BiTrust model, and the final section concludes by summarizing and highlighting the future work.
Related Work
The work in [] models reputation as a congestion game and develops a strategy for the decision making of trustee agents in resource-constrained environments. That paper proposes a trust management model called DRAFT, which can actively reduce the reputation self-damaging problem by having the trustee investigate its own capability to decide whether to accept or deny incoming requests. However, it remains weak against requests from truster agents who have biased rating intentions once their requests are accepted.
The FIRE model [] aggregates trust information from multiple sources to evaluate trustworthiness. However, such opinion-filtering approaches perform poorly when the percentage of unfair raters increases. In this situation, without a strategic establishment of trust, trustee agents may fail to interact with potential trusters.
SRAC [] is another representative trust model in this line of work.
Single-sided evaluations can give trustee agents an incentive to behave honestly, but they give no such incentive to truster agents, e.g., to provide fair ratings. With the BiTrust model introduced in this paper, trustee agents can gain reputation in a protective manner by analysing truster behaviour, estimating the interaction utility, and filtering out non-beneficial transactions.
BiTrust: Bijection Trust Management Model
In the BiTrust model, mutual trust is considered the key to interaction: both the truster and the trustee agents need to trust their interacting partners, because any damage to reputation results in a reduction of future interactions. In the BiTrust model, agents have their own utility preferences over potential partners; a truster agent's utility function is controlled by parameters related to the agent's preferences. The system is assumed to have no centralized database for trust management. Trustee agents evaluate request makers based on their previous behaviours to decide whether to interact with them. Thus, BiTrust does not follow the prevailing accept-when-requested assumption, and is distinguished from the DRAFT approach [] in that trustee agents accept an interaction based on trust-aware utility gain rather than on an assessment of their own capability limitations. Below, we give detailed definitions used in the model.
3.1 Definitions
For the rest of this paper, we omit the terms truster and trustee, because in our model there is no clear border between them: both need to be trusted by one another in order to continue a transaction. Instead, the terms consumer and provider will be used accordingly. Figure 1 shows the conceptual architecture of an agent in the BiTrust model. Each agent has the following five components:
The public profile is a database containing the agent's reputation information (as a provider) and transaction records that can be shared with other agents.
The private knowledge base is a database of each individual agent containing private information or learned experiences.
The trust reasoning and learning module collects information from the databases and the communication module to evaluate or learn about the trust of the interacting partner.
The decision module combines the information from the three modules above to decide which strategies and actions will be applied.
The communication module helps the agent interact with other agents and the surrounding environment.
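The five components above might be organized as follows; the class and field names, the trust default, and the stub behaviours are illustrative assumptions rather than the paper's specification.

```python
# Illustrative sketch of the BiTrust agent architecture (Fig. 1).
from dataclasses import dataclass, field

@dataclass
class PublicProfile:
    """Shareable data: reputation (as a provider) and transaction records."""
    reputation: float = 0.5
    transaction_records: list = field(default_factory=list)

@dataclass
class PrivateKnowledgeBase:
    """Private information and learned experiences, never shared."""
    experiences: dict = field(default_factory=dict)

@dataclass
class Agent:
    name: str
    public_profile: PublicProfile = field(default_factory=PublicProfile)
    private_kb: PrivateKnowledgeBase = field(default_factory=PrivateKnowledgeBase)

    def reason_about(self, partner):
        """Trust reasoning and learning module: prefer private experience,
        fall back to the partner's public reputation."""
        return self.private_kb.experiences.get(
            partner.name, partner.public_profile.reputation)

    def decide(self, partner, threshold=0.5):
        """Decision module: act on the trust estimate."""
        return self.reason_about(partner) >= threshold

    def communicate(self, partner, message):
        """Communication module (stub): exchange a message and record it
        in the partner's transaction records."""
        partner.public_profile.transaction_records.append((self.name, message))
```

The separation mirrors the list above: shareable state (public profile), private state (knowledge base), and three behavioural modules (reasoning, decision, communication) layered on top.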
Fig. 1
The architecture of an individual agent in BiTrust
In this paper, an agent can act as either a consumer or a provider. In addition, an agent can also act as an adviser who gives opinions about other agents. However, we consider an adviser to be a type of provider offering reference services, which are also rated by consumers. Two advantages of this assumption are: (1) it helps diminish irresponsible advisers; and (2) it gives advisers an incentive to be honest, as they are now providers associated with reputation values.
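The adviser-as-provider assumption can be illustrated with a minimal sketch; the running-mean reputation update is an assumption chosen for simplicity, not the paper's update rule.

```python
# Sketch: an adviser is just a provider whose "service" is an opinion,
# and whose reputation is updated from consumer ratings of that opinion.

class Adviser:
    def __init__(self):
        self.reputation = 0.5   # starts neutral, like any other provider
        self._ratings = []

    def advise(self, about_agent, opinions):
        """Reference service: report a stored opinion about another agent,
        defaulting to neutral when no opinion is held."""
        return opinions.get(about_agent, 0.5)

    def receive_rating(self, rating):
        """Consumers rate the advice; the adviser's reputation is the
        running mean, so dishonest advice erodes it over time."""
        self._ratings.append(rating)
        self.reputation = sum(self._ratings) / len(self._ratings)
```

Because the adviser carries a reputation value like any provider, consumers can discount or ignore advice from advisers whose past opinions were rated poorly, which is exactly the incentive effect claimed above.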