
2005 | Book

Trusting Agents for Trusting Electronic Societies

Theory and Applications in HCI and E-Commerce

Edited by: Rino Falcone, Suzanne Barber, Jordi Sabater-Mir, Munindar P. Singh

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

Based on two international workshops on trust in agent societies, held at AAMAS 2003 and AAMAS 2004, this book draws together carefully revised papers on trust, reputation, and security in agent societies. In addition to the workshop papers, several contributions from leading researchers in this interdisciplinary field were solicited to complete the coverage of all relevant topics.

The 13 papers presented take into account issues from multiagent systems, artificial intelligence, cognitive science, game theory, and social and organizational science. Theoretical topics are addressed as well as applications in human-computer interaction and e-commerce.

Table of Contents

Frontmatter
Normative Multiagent Systems and Trust Dynamics
Abstract
In this paper we use recursive modelling to formalize sanction-based obligations in a qualitative game theory. In particular, we formalize an agent who attributes mental attitudes such as goals and desires to the normative system which creates and enforces its obligations. The wishes (goals, desires) of the normative system are the commands (obligations) of the agent. Since the agent is able to reason about the normative system’s behavior, our model accounts for many ways in which an agent can violate a norm believing that it will not be sanctioned. We thus propose a cognitive theory of normative reasoning that can be applied in theories of dynamic trust to understand when trust needs to be revised.
Guido Boella, Leendert van der Torre
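To make the sanction-reasoning idea concrete, here is a minimal sketch, with hypothetical names and numbers that are not from the paper, of an agent weighing a norm violation against the sanction it expects from the normative system:

```python
# Hypothetical illustration: an agent weighs a norm violation against the
# sanction it expects from the normative system. Not the paper's formal model.

def expected_violation_utility(gain: float,
                               p_detect: float,
                               p_sanction_if_detected: float,
                               sanction_cost: float) -> float:
    """Expected utility of violating a norm, given the agent's model of
    how the normative system monitors and enforces its obligations."""
    return gain - p_detect * p_sanction_if_detected * sanction_cost

# The agent believes monitoring is lax, so violation looks attractive:
u = expected_violation_utility(gain=5.0, p_detect=0.2,
                               p_sanction_if_detected=0.9, sanction_cost=10.0)
print("violate" if u > 0 else "comply", u)
```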
Toward Trustworthy Adjustable Autonomy in KAoS
Abstract
Trust is arguably the most crucial aspect of agent acceptability. At its simplest level, it can be characterized in terms of judgments that people make concerning three factors: an agent’s competence, its benevolence, and the degree to which it can be rapidly and reliably brought into compliance when things go wrong. Adjustable autonomy consists of the ability to dynamically impose and modify constraints that affect the range of actions that the human-agent team can successfully perform, consistently allowing the highest degrees of useful autonomy while maintaining an acceptable level of trust. Many aspects of adjustable autonomy can be addressed through policy. Policies are a means to dynamically regulate the behavior of system components without changing code or requiring the cooperation of the components being governed. By changing policies, a system can be adjusted to accommodate variations in externally imposed constraints and environmental conditions. In this paper we describe some important dimensions relating to autonomy and give examples of how these dimensions might be adjusted in order to enhance performance of human-agent teams. We introduce Kaa (KAoS adjustable autonomy) and provide a brief comparison with two other implementations of adjustable autonomy concepts.
Jeffrey M. Bradshaw, Hyuckchul Jung, Shri Kulkarni, Matthew Johnson, Paul Feltovich, James Allen, Larry Bunch, Nathanael Chambers, Lucian Galescu, Renia Jeffers, Niranjan Suri, William Taysom, Andrzej Uszok
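As a rough illustration of the policy idea, regulating behavior without changing agent code, consider the sketch below; the class and names are assumptions, not the actual KAoS or Kaa API:

```python
# Minimal sketch of policy-governed action authorization, loosely inspired by
# the idea of adjusting autonomy via policies. Not the actual KAoS/Kaa API.
from dataclasses import dataclass

@dataclass
class Policy:
    actor: str        # which agent the policy governs
    action: str       # which action it constrains
    permitted: bool   # authorization or prohibition

def is_permitted(policies: list[Policy], actor: str, action: str,
                 default: bool = False) -> bool:
    """Return the first applicable policy's verdict; the policy list can be
    swapped at runtime without touching the governed components."""
    for p in policies:
        if p.actor == actor and p.action == action:
            return p.permitted
    return default

policies = [Policy("uav-1", "enter_no_fly_zone", False)]
print(is_permitted(policies, "uav-1", "enter_no_fly_zone"))  # False
# Tightening or relaxing autonomy amounts to replacing the policy list.
```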
Contract Nets for Evaluating Agent Trustworthiness
Abstract
In this paper we use a contract net protocol to compare various delegation strategies. We have implemented several different agents, each having a set of tasks to delegate (or to perform themselves); the tasks are performed by the agents in a dynamic environment that can help or worsen their activity. The agents rely upon different strategies to choose whom to delegate to. We implemented three classes of trustiers: a random trustier (who randomly chooses the trustee to whom to delegate the task); a statistical trustier (who estimates the trustworthiness of other agents solely on the basis of their previous performance); and a cognitive trustier (who builds a sophisticated and cognitively motivated trust model of the trustee, taking into account its specific features, its ability and motivational disposition, and the impact of the environment on its performance). Our experiments show the advantage of using cognitive representations.
Rino Falcone, Giovanni Pezzulo, Cristiano Castelfranchi, Gianguglielmo Calvi
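A minimal sketch of the "statistical trustier" strategy described above, which scores delegatees purely by past success rate; the class name and the smoothing rule are assumptions, not the paper's implementation:

```python
# Hypothetical sketch of a statistical trustier in a contract-net setting:
# it delegates each task to the agent with the best observed success rate.
from collections import defaultdict

class StatisticalTrustier:
    def __init__(self):
        self.successes = defaultdict(int)
        self.attempts = defaultdict(int)

    def record(self, trustee: str, succeeded: bool) -> None:
        self.attempts[trustee] += 1
        self.successes[trustee] += int(succeeded)

    def trustworthiness(self, trustee: str) -> float:
        # Laplace smoothing so unknown agents get a neutral prior of 0.5.
        return (self.successes[trustee] + 1) / (self.attempts[trustee] + 2)

    def choose(self, candidates: list[str]) -> str:
        return max(candidates, key=self.trustworthiness)

t = StatisticalTrustier()
t.record("a", True); t.record("a", True); t.record("b", False)
print(t.choose(["a", "b", "c"]))  # "a"
```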
The EigenRumor Algorithm for Calculating Contributions in Cyberspace Communities
Abstract
This paper describes a method for scoring the degree of contribution of each information object and each participant in a cyberspace community, e.g., knowledge-management, product-review, or other information-sharing communities. Two types of actions, i.e., information provisioning and information evaluation, are common in such communities and are valuable in scoring each contribution. The EigenRumor algorithm, proposed here, calculates the contribution scores based on a link analysis approach by considering these actions as links from participants to information objects. The algorithm has similarities to Kleinberg’s HITS algorithm in that both algorithms are based on the mutually reinforcing relationship of hubs and authorities, but the EigenRumor model is structured not from page-to-page links but from participant-to-object links, and is extended by the introduction of several new factors. The scores calculated by this algorithm can be used to identify “good” information and participants who contribute substantially to a community, which allows incentives to be offered to such participants to promote their continued contribution to the community.
Ko Fujimura, Naoto Tanimoto
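The mutually reinforcing computation the abstract compares with HITS can be sketched as a power iteration over the participant-to-object link matrix. This is a toy version only; the actual EigenRumor algorithm distinguishes provisioning from evaluation links and adds further weighting factors:

```python
# Toy HITS-style iteration over participant-to-object links, illustrating the
# mutual-reinforcement idea behind EigenRumor. The real algorithm separates
# provisioning from evaluation links and introduces additional factors.
import numpy as np

# links[i, j] = 1 if participant i provided/endorsed information object j.
links = np.array([[1, 1, 0],
                  [0, 1, 1],
                  [0, 0, 1]], dtype=float)

participants = np.ones(links.shape[0])  # "hub"-like contribution scores
objects = np.ones(links.shape[1])       # "authority"-like information scores

for _ in range(50):
    objects = links.T @ participants    # good objects come from good participants
    participants = links @ objects      # good participants point to good objects
    objects /= np.linalg.norm(objects)
    participants /= np.linalg.norm(participants)

print(np.round(participants, 3), np.round(objects, 3))
```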
A Temporal Policy for Trusting Information
Abstract
In making a decision, an agent requires information from other agents about the current state of its environment. Unfortunately, the agent can never know the absolute truth about its environment because the information it receives is uncertain. When the environment changes more rapidly than sources provide information, an agent faces the problem of forming its beliefs from information that may be out-of-date. This research reviews several logical policies for evaluating the trustworthiness of information; most importantly, this work introduces a new policy for temporal information trust assessment, basing an agent’s trust in information on its recency. The belief maintenance algorithm described here weighs information against these policies and evaluates tradeoffs in cases of policy conflicts. The definition of a belief interval provides the agent with flexibility to acknowledge that a belief subject may be changing between belief revision instances. Since the belief interval framework describes the belief probability distribution over time, it allows the agent to decrease its certainty in its beliefs as they age.
Experimental results show the clear advantage of an algorithm that performs certainty depreciation over belief intervals and evaluates source information based on information age. This algorithm derives more accurate beliefs at belief revision and maintains more accurate belief certainty assessments as belief intervals age than an algorithm that is not temporally sensitive.
Karen K. Fullam, K. Suzanne Barber
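The certainty-depreciation idea, where confidence in a belief decays as the information behind it ages, can be sketched as decay over a belief interval. The exponential form and the half-life parameter are assumptions; the paper defines its own belief-interval probability distributions:

```python
# Hypothetical sketch of certainty depreciation: confidence in a belief decays
# with the age of the underlying information. Exponential decay is just one
# plausible choice, not the paper's belief-interval model.
import math

def depreciated_certainty(initial_certainty: float,
                          age: float,
                          half_life: float) -> float:
    """Certainty after `age` time units, halving every `half_life` units."""
    return initial_certainty * math.exp(-math.log(2) * age / half_life)

# A report that was 90% certain when received, 10 time units ago:
print(round(depreciated_certainty(0.9, age=10.0, half_life=5.0), 3))  # 0.225
```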
A Design Foundation for a Trust-Modeling Experimental Testbed
Abstract
Mechanisms for modeling trust and reputation to improve robustness and performance in multi-agent societies make up a growing field of research that has yet to establish unified direction or benchmarks. The trust research community will benefit significantly from the development of a competition testbed; such development is currently in progress under the direction of the Agent Reputation and Trust (art) Testbed initiative. A testbed can serve in two roles: 1) as a competition forum in which researchers can compare their technologies against objective metrics, and 2) as a suite of tools with flexible parameters, allowing researchers to perform easily repeatable experiments. As a versatile, universal experimentation site, a competition testbed challenges researchers to solve the most prominent problems in the field, fosters a cohesive scoping of trust research problems, identifies successful technologies, and provides researchers with a tool for comparing and validating their approaches. In addition, a competition testbed places trust research in the public spotlight, improving confidence in the technology and highlighting relevant applications. This paper lays the foundation for testbed development by enumerating the important problems in trust and reputation research, describing important requirements for a competition testbed, and addressing necessary parameters for testbed modularity and flexibility. Finally, the art Testbed initiative is highlighted, and future progress toward testbed development is described.
Karen K. Fullam, Jordi Sabater-Mir, K. Suzanne Barber
Decentralized Reputation-Based Trust for Assessing Agent Reliability Under Aggregate Feedback
Abstract
Reputation mechanisms allow agents to establish trust in other agents’ intentions and capabilities in the absence of direct interactions. In this paper, we are concerned with establishing trust on the basis of reputation information in open, decentralized systems of interdependent autonomous agents. We present a completely decentralized reputation mechanism to increase the accuracy of agents’ assessments of other agents’ capabilities and allow them to develop appropriate levels of trust in each other as providers of reliable information. Computer simulations show the reputation system’s ability to track an agent’s actual capabilities.
Tomas B. Klos, Han La Poutré
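A minimal sketch of the kind of update such a mechanism performs, blending an agent's own experience with witness reports weighted by how much each witness is trusted as an information provider; the blending rule and weights are assumptions, not the paper's mechanism:

```python
# Hypothetical decentralized reputation update: each agent keeps a local
# estimate of another agent's capability and revises it using its own
# observations plus witness reports, weighted by trust in each witness.

def revise_estimate(own_estimate: float,
                    witness_reports: list[tuple[float, float]],
                    own_weight: float = 1.0) -> float:
    """witness_reports is a list of (reported_value, trust_in_witness)."""
    total = own_weight * own_estimate
    weight = own_weight
    for value, trust in witness_reports:
        total += trust * value
        weight += trust
    return total / weight

# Own experience says 0.6; two witnesses report 0.9 and 0.2:
print(revise_estimate(0.6, [(0.9, 0.8), (0.2, 0.3)]))
```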
A Trust Analysis Methodology for Pervasive Computing Systems
Abstract
We present a Trust Analysis Methodology for finding trust issues within pervasive computing systems. It is based on a systematic analysis of scenarios that describe the typical use of the pervasive system, guided by a Trust Analysis Grid. The Trust Analysis Grid is composed of eleven Trust Issue Categories that cover the various aspects of the concept of trust in pervasive computing systems. The Trust Analysis Grid is then used to guide the design of the pervasive computing system.
Stéphane Lo Presti, Michael Butler, Michael Leuschel, Chris Booth
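The grid itself is essentially a table indexed by scenario step and trust-issue category. A minimal data-structure sketch follows; the category names used here are placeholders, not the paper's eleven categories:

```python
# Hypothetical sketch of a Trust Analysis Grid: rows are steps of a usage
# scenario, columns are trust-issue categories, and cells record identified
# issues. The paper defines eleven specific categories; these are placeholders.
grid: dict[tuple[str, str], list[str]] = {}

def note_issue(step: str, category: str, issue: str) -> None:
    grid.setdefault((step, category), []).append(issue)

note_issue("user enters hospital ward", "privacy",
           "location is broadcast to nearby devices")
note_issue("device requests patient record", "authorization",
           "no check that the device acts for a clinician")

for (step, category), issues in grid.items():
    print(f"{step} / {category}: {issues}")
```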
Decentralized Monitoring of Agent Communications with a Reputation Model
Abstract
Communication is essential in multi-agent systems, since it allows agents to share knowledge and to coordinate. However, in open multi-agent systems, autonomous and heterogeneous agents can dynamically enter or leave the system. It is then important to take into account that some agents may not respect – voluntarily or not – the rules that make the system function properly. In this paper, we propose a trust model for the reliability of agent communications. We define inconsistencies in communications (represented as social commitments) to enable agents to detect lies and update their trust models of other agents. Agents can also use their trust model to decide whether or not to trust a new message sent by another agent.
Guillaume Muller, Laurent Vercouter
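A toy sketch of lie detection over social commitments: if an agent has committed to two contradictory propositions, an observer can flag the inconsistency and lower its trust in the speaker. The representation and the update rule are assumptions, not the paper's model:

```python
# Hypothetical sketch: messages are recorded as social commitments
# (speaker, proposition, polarity). Committing to both p and not-p is an
# inconsistency that observers use to revise trust in the speaker.
from collections import defaultdict

commitments = defaultdict(set)    # speaker -> {(proposition, polarity)}
trust = defaultdict(lambda: 0.5)  # speaker -> trust in [0, 1]

def record(speaker: str, proposition: str, holds: bool) -> None:
    if (proposition, not holds) in commitments[speaker]:
        # Contradicts an earlier commitment: treat as a detected lie.
        trust[speaker] = max(0.0, trust[speaker] - 0.2)
    commitments[speaker].add((proposition, holds))

record("agent-b", "auction_closed", True)
record("agent-b", "auction_closed", False)  # contradiction detected
print(trust["agent-b"])  # trust dropped to 0.3
```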
A Security Infrastructure for Trust Management in Multi-agent Systems
Abstract
Multi-agent systems are based on the interaction of autonomous software components, the agents, which cooperate to achieve common goals. But such interactions, just as in human societies, can be established correctly only on the basis of trust relations. This paper presents a security model founded on delegation certificates, which allows the management of security policies on the basis of trust relations among autonomous software agents collaborating and competing in wide, open, and evolving agent societies. Conversely, the presence of a strong and flexible security infrastructure is fundamental to developing trust in counterparts and starting advantageous interactions with them. While some of these concepts are already being adopted in the development of the security layer for JADE, a standards-based and widely deployed framework for building multi-agent systems, this paper includes further ideas from which distributed multi-agent security frameworks could benefit.
Agostino Poggi, Michele Tomaiuolo, Giosuè Vitaglione
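The core check in a delegation-certificate scheme is that a chain of certificates links a trusted root principal to the requester, with each link granting the permission in question. A simplified sketch, in the spirit of SPKI-style delegation and emphatically not JADE's actual security API:

```python
# Simplified sketch of delegation-certificate chain validation. Real systems
# also verify signatures, expiry, and delegation depth; none of that is
# modeled here, and this is not JADE's security API.
from dataclasses import dataclass

@dataclass(frozen=True)
class Certificate:
    issuer: str      # principal granting the permission
    subject: str     # principal receiving it
    permission: str  # what is being delegated

def chain_authorizes(chain: list[Certificate], root: str,
                     requester: str, permission: str) -> bool:
    """True if the chain carries `permission` from `root` down to `requester`."""
    current = root
    for cert in chain:
        if cert.issuer != current or cert.permission != permission:
            return False
        current = cert.subject
    return current == requester

chain = [Certificate("platform-owner", "agent-a", "read:directory"),
         Certificate("agent-a", "agent-b", "read:directory")]
print(chain_authorizes(chain, "platform-owner", "agent-b", "read:directory"))
```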
Why Trust Is Hard – Challenges in e-Mediated Services
Abstract
Designing and maintaining trustworthy electronically mediated services is a major challenge for future information systems supporting e-commerce, as well as for safety-critical systems in our society. We propose a framework supporting a principled life cycle of e-services. Our application domain is distributed health care systems. We also include comparisons with other relevant approaches to trust in e-commerce and trust in agents.
Christer Rindebäck, Rune Gustavsson
A Protocol for a Distributed Recommender System
Abstract
We present a domain model and protocol for the exchange of recommendations by selfish agents without the aid of any centralized control. Our model captures a subset of the realities of recommendation exchange on the Internet. We provide an algorithm that selfish agents can use to decide whether to exchange recommendations and with whom. We analyze this algorithm and show that, under certain common circumstances, the agents’ rational choice is to exchange recommendations. Finally, we have implemented our model and algorithm and tested the performance of various populations. Our results show that both the social welfare and the individual utility of the agents are increased by participating in the exchange of recommendations.
José M. Vidal
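The exchange decision reduces to comparing the expected value of the recommendations received against the cost of sharing one's own. A hedged sketch, whose utility terms are assumptions rather than the paper's model:

```python
# Hypothetical sketch of a selfish agent's exchange decision: trade
# recommendations with a peer only when the expected value of what it
# learns exceeds the cost of sharing. Not the paper's exact model.

def should_exchange(expected_value_of_peer_recs: float,
                    cost_of_sharing: float,
                    p_peer_reciprocates: float) -> bool:
    return p_peer_reciprocates * expected_value_of_peer_recs > cost_of_sharing

print(should_exchange(expected_value_of_peer_recs=4.0,
                      cost_of_sharing=1.0,
                      p_peer_reciprocates=0.5))  # True
```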
Temptation and Contribution in C2C Transactions: Implications for Designing Reputation Management Systems
Abstract
A reputation management system can promote trust in transactions in an online consumer-to-consumer (C2C) market. We model a C2C market by employing an agent-based approach. To discuss the characteristics of goods traded on the market, we define temptation and contribution indexes based on the payoff matrix of a game. According to the results of a simulation conducted with the model, we find that a positive reputation management system can promote cooperative behavior in online C2C markets. Moreover, we also find that such a system is especially effective for an online C2C market where expensive physical goods are traded, whereas a negative reputation management system is effective for an online C2C market where information goods are traded.
Hitoshi Yamamoto, Kazunari Ishida, Toshizumi Ohta
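Using standard payoff-matrix notation for a cooperation game (R = mutual cooperation, T = unilateral defection, S = sucker's payoff, P = mutual defection), one plausible reading of the two indexes is "how much a trader gains by defecting" versus "how much mutual cooperation adds". The formulas below are illustrative assumptions, not the paper's definitions:

```python
# Illustrative only: temptation and contribution indexes over a 2x2 payoff
# matrix with the usual labels (R, T, S, P). The paper's exact definitions
# are not reproduced here.

def temptation_index(R: float, T: float) -> float:
    """How strongly a trader is tempted to defect on a cooperative partner."""
    return (T - R) / T if T else 0.0

def contribution_index(R: float, P: float) -> float:
    """How much mutual cooperation improves on mutual defection."""
    return (R - P) / R if R else 0.0

# Expensive physical goods: large temptation to take payment and not ship.
print(temptation_index(R=3.0, T=5.0), contribution_index(R=3.0, P=1.0))
```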
Backmatter
Metadata
Title
Trusting Agents for Trusting Electronic Societies
Edited by
Rino Falcone
Suzanne Barber
Jordi Sabater-Mir
Munindar P. Singh
Copyright year
2005
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-31859-0
Print ISBN
978-3-540-28012-5
DOI
https://doi.org/10.1007/11532095
