
2001 | Book

Intelligent Agents VII Agent Theories Architectures and Languages

7th International Workshop, ATAL 2000, Boston, MA, USA, July 7–9, 2000, Proceedings

Edited by: Cristiano Castelfranchi, Yves Lespérance

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

Intelligent agents are one of the most important developments in computer science of the past decade. Agents are of interest in many important application areas, ranging from human-computer interaction to industrial process control. The ATAL workshop series aims to bring together researchers interested in the core/micro aspects of agent technology. Specifically, ATAL addresses issues such as theories of agency, software architectures for intelligent agents, methodologies and programming languages for realizing agents, and software tools for applying and evaluating agent systems. One of the strengths of the ATAL workshop series is its emphasis on the synergies between theories, languages, architectures, infrastructures, methodologies, and formal methods. This year's workshop continued the ATAL trend of attracting a large number of high quality submissions. In more detail, 71 papers were submitted to the ATAL 2000 workshop, from 21 countries. After stringent reviewing, 22 papers were accepted for publication and appear in these proceedings. As with previous workshops in the series, we chose to emphasize what we perceive as important new themes in agent research. This year's themes were both associated with the fact that the technology of intelligent agents and multi-agent systems is beginning to migrate from research labs to software engineering centers. As agents are deployed in applications such as electronic commerce, and start to take over responsibilities for their human users, techniques for controlling their autonomy become crucial. As well, the availability of tools that facilitate the design and implementation of agent systems becomes an important factor in how rapidly the technology will achieve widespread use.

Table of contents

Frontmatter

Agent Theories I

Optimistic and Disjunctive Agent Design Problems
Abstract
The agent design problem is as follows: Given an environment, together with a specification of a task, is it possible to construct an agent that can be guaranteed to successfully accomplish the task in the environment? In previous research, it was shown that for two important classes of tasks (where an agent was required to either achieve some state of affairs or maintain some state of affairs), the agent design problem was PSPACE-complete. In this paper, we consider several important generalisations of such tasks. In an optimistic agent design problem, we simply ask whether an agent has at least some chance of bringing about a goal state. In a combined design problem, an agent is required to achieve some state of affairs while ensuring that some invariant condition is maintained. Finally, in a disjunctive design problem, we are presented with a number of goals and corresponding invariants; the aim is to design an agent that, on any given run, will achieve one of the goals while maintaining the corresponding invariant. We prove that while the optimistic achievement and maintenance design problems are NP-complete, the PSPACE-completeness results obtained for achievement and maintenance tasks generalise to combined and disjunctive agent design.
Michael Wooldridge, Paul E. Dunne
Updating Mental States from Communication
Abstract
In order to perform effective communication agents must be able to foresee the effects of their utterances on the addressee’s mental state. In this paper we investigate the update of the mental state of a hearer agent as a consequence of the utterance performed by a speaker agent. Given an agent communication language with a STRIPS-like semantics, we propose a set of criteria that allow the binding of the speaker’s mental state to its uttering of a certain sentence. On the basis of these criteria, we give an abductive procedure that the hearer can adopt to partially recognize the speaker’s mental state, on the basis of its utterances. This procedure can be adopted by the hearer to update its own mental state and its image of the speaker’s mental state.
A.F. Dragoni, P. Giorgini, L. Serafini
Sensing Actions, Time, and Concurrency in the Situation Calculus
Abstract
A formal framework for specifying and developing agents/robots must handle not only knowledge and sensing actions, but also time and concurrency. Researchers have extended the situation calculus to handle knowledge and sensing actions. Other researchers have addressed the issue of adding time and concurrent actions. We combine both of these features into a unified logical theory of knowledge, sensing, time, and concurrency. The result preserves the solution to the frame problem of previous work, maintains the distinction between indexical and objective knowledge of time, and is capable of representing the various ways in which concurrency interacts with time and knowledge.
Stephen Zimmerbaum, Richard Scherl
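For readers unfamiliar with the background this abstract builds on, the knowledge machinery it extends is, roughly, the Scherl–Levesque treatment of knowledge in the situation calculus: knowledge is modelled by an accessibility fluent K over situations, and sensing actions prune the accessible situations. The sketch below is a generic rendering of that prior account (with a binary sensed fluent SF), not the time-and-concurrency extension contributed by the paper.

```latex
% Knowledge as truth in all K-accessible situations:
\mathit{Knows}(\phi, s) \;\doteq\; \forall s'.\; K(s', s) \supset \phi[s']
% Successor-state axiom for K (roughly, Scherl-Levesque style): after doing a,
% the agent considers possible exactly the a-successors of previously accessible
% situations that agree with the actual situation on the sensed fluent.
K(s'', \mathit{do}(a, s)) \;\equiv\; \exists s'.\; s'' = \mathit{do}(a, s')
  \,\wedge\, K(s', s) \,\wedge\, \mathit{Poss}(a, s')
  \,\wedge\, \bigl(\mathit{SF}(a, s') \equiv \mathit{SF}(a, s)\bigr)
```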

Agent Development Tools and Platforms

Developing Multiagent Systems with agentTool
Abstract
The advent of multiagent systems has brought together many disciplines and given us a new way to look at intelligent, distributed systems. However, traditional ways of thinking about and designing software do not fit the multiagent paradigm. This paper describes the Multiagent Systems Engineering (MaSE) methodology and agentTool, a tool to support MaSE. MaSE leads a designer from an initial system specification to implementation through a set of inter-related, graphically based system models. The underlying formal syntax and semantics of these models ties them together clearly and unambiguously, as envisioned by MaSE.
Scott A. De Loach, Mark Wood
Layered Disclosure: Revealing Agents’ Internals
Abstract
A perennial challenge in creating and using complex autonomous agents is following their choices of actions as the world changes dynamically and understanding why they act as they do. This paper reports on our work to support human developers and observers to better follow and understand the actions of autonomous agents. We introduce the concept of layered disclosure by which autonomous agents have included in their architecture the foundations necessary to allow them to disclose upon request the specific reasons for their actions. Layered disclosure hence goes beyond standard plain code debugging tools. In its essence it also gives the agent designer the ability to define an appropriate information hierarchy, which can include agent-specific constructs such as internal state that persists over time. The user may request this information at any of the specified levels of detail, and either retroactively or while the agent is acting. We present layered disclosure as we created and implemented it in the simulated robotic soccer domain. We contribute the detailed design to support the application of layered disclosure to other agent domains. Layered disclosure played an important role in our successful development of the undefeated RoboCup champion CMUnited-99 multiagent team.
Patrick Riley, Peter Stone, Manuela Veloso
Architectures and Idioms: Making Progress in Agent Design
Abstract
This chapter addresses the problem of producing and maintaining progress in agent design. New architectures often hold important insights into the problems of designing intelligence. Unfortunately, these ideas can be difficult to harness, because on established projects switching between architectures and languages carries high cost. We propose a solution whereby the research community takes responsibility for re-expressing innovations as idioms or extensions of one or more standard architectures. We describe the process and provide an example — the concept of a Basic Reactive Plan. This idiom occurs in several influential agent architectures, yet in others is difficult to express. We also discuss our proposal's relation to the roles of architectures, methodologies and toolkits in the design of agents.
Joanna Bryson, Lynn Andrea Stein
Developing Multi-agent Systems with JADE
Abstract
JADE (Java Agent Development Framework) is a software framework that eases the development of multi-agent applications in compliance with the FIPA specifications. JADE can thus be considered a middleware that implements an efficient agent platform and supports the development of multi-agent systems. The JADE agent platform aims to keep the performance of a distributed agent system implemented in the Java language high. In particular, its communication architecture tries to offer flexible and efficient messaging, transparently choosing the best available transport and leveraging state-of-the-art distributed object technology embedded within the Java runtime environment. JADE uses an agent model and Java implementation that allow good runtime efficiency, software reuse, agent mobility and the realization of different agent architectures.
Fabio Bellifemine, Agostino Poggi, Giovanni Rimassa
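Since JADE is a publicly available Java framework, a minimal agent gives a concrete feel for the programming model described above. The sketch below uses the standard JADE API (Agent, behaviours, ACLMessage); it is an illustrative example, not code from the paper.

```java
// Minimal JADE agent sketch (illustrative; uses the standard JADE API).
// Requires jade.jar on the classpath.
import jade.core.Agent;
import jade.core.behaviours.CyclicBehaviour;
import jade.lang.acl.ACLMessage;

public class PingAgent extends Agent {
    @Override
    protected void setup() {
        System.out.println("Agent " + getLocalName() + " ready.");
        // Behaviours encapsulate an agent's tasks; this one answers every
        // incoming ACL message with an INFORM reply.
        addBehaviour(new CyclicBehaviour(this) {
            @Override
            public void action() {
                ACLMessage msg = myAgent.receive();   // non-blocking read of the message queue
                if (msg != null) {
                    ACLMessage reply = msg.createReply();
                    reply.setPerformative(ACLMessage.INFORM);
                    reply.setContent("pong");
                    myAgent.send(reply);
                } else {
                    block();                          // suspend until a message arrives
                }
            }
        });
    }
}
```

An agent of this kind would typically be started through JADE's platform launcher (for example, java jade.Boot ping:PingAgent), after which the platform's message transport delivers ACL messages to it.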

Agent Theories II

High-Level Robot Control through Logic
Abstract
This paper presents a programmable logic-based agent control system that interleaves planning, plan execution and perception. In this system, a program is a collection of logical formulae describing the agent’s relationship to its environment. Two such programs for a mobile robot are described — one for navigation and one for map building — that share much of their code. The map building program incorporates a rudimentary approach to the formalisation of epistemic fluents, knowledge goals, and knowledge producing actions.
Murray Shanahan, Mark Witkowski
Determining the Envelope of Emergent Agent Behaviour via Architectural Transformation
Abstract
In this paper we propose a methodology to help analyse tendencies in a MAS, complementing simple inspection, Monte Carlo simulation, and syntactic proof. We suggest an architecture that allows an exhaustive model-based search of possible system trajectories in significant fragments of a MAS using forward inference. The idea is to identify tendencies, especially emergent tendencies, by automating the search through possible parameterisations of the model and the choices made by the agents. Subsequently, a proof of these tendencies could be attempted over all possible conditions using syntactic proof procedures. Additionally, we propose and exemplify a computational procedure to help implement this. The strategy consists of “un-encapsulating” the MAS so as to reveal and then exploit the maximum information about logical dependencies in the system. The idea is to make possible the complete exploration of model behaviour over a range of parameterisations and agent choices.
Oswaldo Terán, Bruce Edmonds, Steve Wallis

Models of Agent Communication and Coordination

Delegation and Responsibility
Abstract
An agent may decide to delegate tasks to others. The act of delegating a task by one autonomous agent to another can be carried out by the performance of one or more imperative communication acts. In this paper, the semantics of imperatives are specified using a language of actions and states. It is further shown how the model can be used to distinguish between whole-hearted and mere extensional satisfaction of an imperative, and how this may be used to specify the semantics of imperatives in agent communication languages.
Timothy J. Norman, Chris Reed
Agent Theory for Team Formation by Dialogue
Abstract
The process of cooperative problem solving can be divided into four stages. First, potential team members are found; then a team is formed, followed by the construction of a plan for that team. Finally, the plan is executed by the team. Traditionally, protocols like the Contract Net protocol are used for performing the first two stages of the process. In an open environment, however, there can be discussion among the agents in order to form a team that can achieve the collective intention of solving the problem. For these cases fixed protocols like Contract Net do not suffice. In this paper we present a theory for agents that are able to discuss the team formation and subsequently work as team members until the collective goal has been fulfilled. We also present a solution, using structured dialogues with an emphasis on persuasion, that can be shown to lead to the required team formation. The dialogues are described formally using modal logics and speech acts.
Frank Dignum, Barbara Dunin-Keplicz, Rineke Verbrugge
Task Coordination Paradigms for Information Agents
Abstract
In agent systems, different (autonomous) agents collaborate to execute complex tasks. Each agent provides a set of useful capabilities, and the agent system combines these capabilities as needed to perform complex tasks, based on the requests input into the system. Agent communication languages (ACLs) allow agents to communicate with each other about how to partition these tasks, and to specify the responsibilities of the individual agents that are invoked. Current ACLs make certain assumptions about the agent system, such as the stability of the agents, the lifetime of the tasks, and the intelligence of the agents in the system. These assumptions are not always applicable in information-centric applications, since such agent systems contain unreliable agents, very long-running tasks, agents with widely varying levels of sophistication, and so on. Furthermore, not all agents may be able to support intelligent planning to work around these issues, and precanned interactions used in more component-based systems do not work well. Thus, it becomes important that proper support for task coordination be available to these agent systems. In this paper we explore issues related to coordinating large, complex, long-running tasks in agent systems. We divide these issues into the following categories: tasks, roles, and conversations. We then discuss how these issues impose requirements on ACLs, and propose changes to support these requirements.
Marian Nodine, Damith Chandrasekara, Amy Unruh

Autonomy and Models of Agent Coordination

Plan Analysis for Autonomous Sociological Agents
Abstract
This paper is concerned with the problem of how effective social interaction arises from individual social action and mind. The need to study the individual social mind suggests a move towards the notion of sociological agents who can model their social environment as opposed to acting socially within it. This does not constrain social behaviour; on the contrary, we argue that it provides the requisite information and understanding for such behaviour to be effective. Indeed, it is not enough for agents to model other agents in isolation; they must also model the relationships between them. A sociological agent is thus an agent that can model agents and agent relationships. Several existing models use notions of autonomy and dependence to show how this kind of interaction comes about, but the level of analysis is limited. In this paper, we show how an existing agent framework leads naturally to the enumeration of a map of inter-agent relationships that can be modelled and exploited by sociological agents to enable more effective operation, especially in the context of multi-agent plans.
Michael Luck, Mark d’Inverno
Multiagent Bidding Mechanisms for Robot Qualitative Navigation
Abstract
This paper explores the use of bidding mechanisms to coordinate the actions requested by a group of agents in charge of achieving the task of guiding a robot towards a specified target in an unknown environment. This approach is based on a qualitative (fuzzy) approach to landmark-based navigation.
Carles Sierra, Ramon López de Màntaras, Dídac Busquets
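The abstract above does not spell out the bidding scheme, so the following is only a generic sketch of bid-based action arbitration among agents, under the assumption that each agent submits a scalar bid for the action it wants the robot to take and the arbiter executes the highest bid; the class and method names are hypothetical.

```java
import java.util.*;

/** Generic bid-based action arbitration (illustrative only; the paper's
 *  actual bidding functions for fuzzy landmark-based navigation are not
 *  reproduced here). Each agent bids for the action it wants the robot
 *  to execute; the arbiter picks the highest bid. */
public class BidArbiter {
    public record Bid(String agent, String action, double value) {}

    /** Returns the action of the highest-valued bid, or empty if there are no bids. */
    public Optional<String> arbitrate(List<Bid> bids) {
        return bids.stream()
                   .max(Comparator.comparingDouble(Bid::value))
                   .map(Bid::action);
    }
}
```

In the paper's setting the bids would come from agents reasoning qualitatively about landmarks, but this arbitration skeleton is independent of how the bids are computed.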
Performance of Coordinating Concurrent Hierarchical Planning Agents Using Summary Information
Abstract
Recent research has provided methods for coordinating the individually formed concurrent hierarchical plans (CHiPs) of a group of agents in a shared environment. A reasonable criticism of this technique is that the summary information can grow exponentially as it is propagated up a plan hierarchy. This paper analyzes the complexity of the coordination problem to show that in spite of this exponential growth, coordinating CHiPs at higher levels is still exponentially cheaper than at lower levels. In addition, this paper offers heuristics, including “fewest threats first” (FTF) and “expand most threats first” (EMTF), that take advantage of summary information to smartly direct the search for a global plan. Experiments show that for a particular domain these heuristics greatly improve the search for the optimal global plan compared to a “fewest alternatives first” (FAF) heuristic that has been successful in Hierarchical Task Network (HTN) Planning.
Bradley J. Clement, Edmund H. Durfee

Agent Languages

Agent Programming with Declarative Goals
Abstract
A long-standing problem in agent research has been to close the gap between agent logics and agent programming frameworks. The main reason for this difficulty in establishing a link between agent logics and agent programming frameworks is that agent programming frameworks have not incorporated the concept of a declarative goal. Instead, such frameworks have focused mainly on plans, or goals-to-do, rather than on the end goals to be realised, also called goals-to-be. In this paper, a new programming language called GOAL is introduced which incorporates such declarative goals. The notion of a commitment strategy, one of the main theoretical insights due to agent logics, which explains the relation between beliefs and goals, is used to construct a computational semantics for GOAL. Finally, a proof theory for proving properties of GOAL agents is introduced. An example program is proven correct using this programming logic.
Koen V. Hindriks, Frank S. de Boer, Wiebe van der Hoek, John-Jules Ch. Meyer
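GOAL's concrete syntax and semantics are given in the paper; the sketch below is only a hypothetical Java rendering of the core idea, a deliberation loop over declarative goals in which, following a blind commitment strategy, a goal is dropped only once the agent believes it has been achieved. All names (DeclarativeGoalAgent, selectAction, and so on) are invented for illustration.

```java
import java.util.*;
import java.util.function.Predicate;

/**
 * Illustrative sketch of a declarative-goal agent loop (not GOAL syntax).
 * Beliefs are plain strings; a goal is a condition on the belief base.
 * Under a blind commitment strategy, a goal is removed only when the
 * agent believes it has been achieved.
 */
public class DeclarativeGoalAgent {
    private final Set<String> beliefs = new HashSet<>();
    private final List<Predicate<Set<String>>> goals = new ArrayList<>();

    public void adoptGoal(Predicate<Set<String>> goal) { goals.add(goal); }
    public void addBelief(String fact) { beliefs.add(fact); }

    /** One deliberation cycle: drop achieved goals, then act for the rest. */
    public void step() {
        goals.removeIf(g -> g.test(beliefs));   // commitment: drop only when believed achieved
        for (Predicate<Set<String>> g : goals) {
            String action = selectAction(g);    // pick an action whose effect serves the goal
            execute(action);
        }
    }

    private String selectAction(Predicate<Set<String>> goal) { return "noop"; }
    private void execute(String action) { /* update beliefs from the action's effects */ }
}
```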
Modeling Multiagent Systems with CASL - A Feature Interaction Resolution Application
Abstract
In this paper, we describe the Cognitive Agents Specification Language (CASL), and exhibit its characteristics by using it to model the multiagent feature interaction resolution system described by Griffeth and Velthuijsen [7]. We discuss the main features of CASL that make it a useful language for specifying and verifying multiagent systems. CASL has a mix of declarative and procedural elements with a formal semantics that facilitates the verification of properties of CASL specifications.
Steven Shapiro, Yves Lespérance
Generalised Object-Oriented Concepts for Inter-agent Communication
Abstract
In this paper, we describe a framework to program open societies of concurrently operating agents. The agents maintain a subjective theory about their environment and interact with each other via a communication mechanism suited for the exchange of information, which is a generalisation of the traditional rendez-vous communication mechanism from the object-oriented programming paradigm. Moreover, following object-oriented programming, agents are grouped into agent classes according to their particular characteristics, viz. the program that governs their behaviour, the language they employ to represent information and, most interestingly, the questions they can be asked to answer. We give an operational model of the programming language in terms of a transition system for the formal derivation of computations of multi-agent programs.
Rogier M. van Eijk, Frank S. de Boer, Wiebe van der Hoek, John-Jules Meyer
Specification of Heterogeneous Agent Architectures
Abstract
Agent-based software applications need to incorporate agents having heterogeneous architectures in order for each agent to optimally perform its task. HEMASL is a simple meta-language used to specify intelligent agents and multiagent systems when different and heterogeneous agent architectures must be used. HEMASL specifications are based on an agent model that abstracts several existing agent architectures. The paper describes some of the features of the language, presents examples of its use and outlines its operational semantics. We argue that adding HEMASL to CaseLP, a specification and prototyping environment for MAS, can enhance its flexibility and usability.
Simone Marini, Maurizio Martelli, Viviana Mascardi, Floriano Zini

Planning, Decision Making, and Learning

Improving Choice Mechanisms within the BVG Architecture
Abstract
The BVG agent architecture relies on the use of values (multiple dimensions against which to evaluate a situation) to perform choice among a set of candidate goals. Choice is accomplished by using a calculus to collapse the several dimensions into a function that serialises candidates. In our previous experiments, we have faced decision problems only with perfect and complete information. In this paper we propose new experiments, where the agents will have to decide in the absence of all the needed and relevant information. In the BVG model, agents adjust their scale of values by feeding back evaluative information about the consequences of their decisions. We use the exact same measures to analyse the results of the experiments, thus providing a fair trial to the agents: they are judged with the same rules they can use for decision. Our method, based on values, is a novel approach for choice and an alternative to classical utilitarian theories.
Luis Antunes, João Faria, Helder Coelho
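As a rough illustration of the kind of choice mechanism the abstract describes, the sketch below collapses scores on several value dimensions into a single number with adjustable weights, ranks candidate goals by it, and nudges the weights from feedback. This is a generic weighted-sum stand-in, not the BVG calculus itself; all names are hypothetical.

```java
import java.util.*;

/** Illustrative sketch of value-based choice (not the BVG calculus itself):
 *  each candidate goal is scored on several value dimensions, the scores are
 *  collapsed with adjustable weights, and candidates are serialised (ranked)
 *  by the collapsed value. Feedback adjusts the weights. */
public class ValueBasedChooser {
    private final double[] weights;   // one weight per value dimension

    public ValueBasedChooser(double[] initialWeights) {
        this.weights = initialWeights.clone();
    }

    /** Collapse a candidate's scores on all dimensions into one number. */
    public double collapse(double[] scores) {
        double total = 0.0;
        for (int i = 0; i < weights.length; i++) total += weights[i] * scores[i];
        return total;
    }

    /** Rank candidate goals (keys) by their collapsed value, best first. */
    public List<String> serialise(Map<String, double[]> candidates) {
        List<String> ranked = new ArrayList<>(candidates.keySet());
        ranked.sort(Comparator.comparingDouble(
                (String c) -> -collapse(candidates.get(c))));
        return ranked;
    }

    /** Adjust the scale of values from feedback on a past decision. */
    public void feedback(double[] scores, double error, double rate) {
        for (int i = 0; i < weights.length; i++) weights[i] -= rate * error * scores[i];
    }
}
```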
Planning-Task Transformations for Soft Deadlines
Abstract
Agents often have preference models that are more complicated than minimizing the expected execution cost. In this paper, we study how they should act in the presence of uncertainty and immediate soft deadlines. Delivery robots, for example, are agents that are often confronted with immediate soft deadlines. We introduce the additive and multiplicative planning-task transformations, which are fast representation changes that transform planning tasks with convex exponential utility functions into planning tasks that can be solved with variants of standard deterministic or probabilistic artificial intelligence planners. Advantages of our representation changes include that they are context-insensitive, fast, scale well, allow for optimal and near-optimal planning, and are grounded in utility theory. Thus, while representation changes are often used to make planning more efficient, we use them to extend the functionality of existing planners, resulting in agents with more realistic preference models.
Sven Koenig
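The transformations themselves are defined in the paper; the note below only records, in generic notation, the property of exponential utilities that makes such representation changes plausible: an exponential utility of a summed cost splits into per-step factors.

```latex
% Generic sketch, not the paper's notation. For a convex exponential utility
% U(c) = \gamma^{c} over total cost c = \sum_i c_i (with \gamma > 0):
U\Bigl(\sum_i c_i\Bigr) \;=\; \gamma^{\sum_i c_i} \;=\; \prod_i \gamma^{c_i}
% Each step thus contributes a multiplicative factor \gamma^{c_i} that can be
% folded into that step's weight or transition probability, so the same
% trajectory-wise multiplication a probabilistic planner already performs can
% serve expected-utility maximisation.
```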
An Architectural Framework for Integrated Multiagent Planning, Reacting, and Learning
Abstract
Dyna is a single-agent architectural framework that integrates learning, planning, and reacting. Well-known instantiations of Dyna are Dyna-AC and Dyna-Q. Here a multiagent extension of Dyna-Q is presented. This extension, called M-Dyna-Q, constitutes a novel coordination framework that bridges the gap between plan-based and reactive coordination in multiagent systems. The paper summarizes the key features of Dyna, describes M-Dyna-Q in detail, provides experimental results, and carefully discusses the benefits and limitations of this framework.
Gerhard Weiß

Panel Summary: Agent Development Tools

Panel Summary: Agent Development Tools
Abstract
This panel (and a corresponding paper track) sought to examine the state of the art (or lack thereof) in tools for developing agents and agent systems. In this context, “tools” include complete agent programming environments, testbeds, environment simulators, component libraries, and specification tools. In the past few years, the field has gone from a situation where almost all implementations were created from scratch in general purpose programming languages, through the appearance of the first generally available public libraries (for example, the venerable Lockheed “KAPI” (KQML API) of the mid-90s [10]), to full-blown GUI-supported development environments. For example, http://www.agentbuilder.com/AgentTools/ lists 25 commercial and 40 academic projects, many of which are publicly available. The sheer number of projects brings up many questions beyond those related to the tools themselves, and we put the following to our panel members:
Joanna Bryson, Keith Decker, Scott A. De Loach, Michael Huhns, Michael Wooldridge

Panel Summary: Autonomy — Theory, Dimensions, and Regulation

Again on Agents’ Autonomy: A Homage to Alan Turing — Panel Chair’s Statement
Abstract
“Autonomy”? Why waste time with such a philosophical notion, such a ridiculous pretence typical of the arrogant ambitions of “Artificial Intelligence”? Is this a real scientific (and interesting) problem, or just “blah, blah”? Is this a relevant practical, technical issue? The answers to these questions are not obvious at all. Consider for example the animated discussion on the agents-list initiated by the following stimulating and radical intervention:
Cristiano Castelfranchi
Autonomy as Decision-Making Control
Abstract
Autonomy is a very complex concept. This discussion develops a definition for one dimension of autonomy: decision-making control. The development of this definition draws salient features from previous work. Each stage in the development of this definition is highlighted by bold text.
K. Suzanne Barber, Cheryl E. Martin
Autonomy: Theory, Dimensions, and Regulation
Abstract
The notion of autonomy is becoming a central and relevant topic in agent and multi-agent theories and applications. In very simple terms, we can say that the concept of autonomy involves three main subjects (each of which should be thought of in a very general sense): the main agent (whose autonomy is considered), the resource (with respect to which the first subject is considered autonomous), and a secondary agent (from which the main agent is considered autonomous). More generally, there are two main meanings of the concept of autonomy:
Rino Falcone
Situated Autonomy
Abstract
Our interest in autonomy is grounded in the context of an agent, a situation, and a goal. We limit our view of autonomy to an agent’s moment-to-moment action selection instead of a long-lived agent characteristic. Autonomy of an agent maps an agent, a situation, and a goal to a stance towards the goal such that the stance will be used to generate the most appropriate or the most relevant action for the agent [3]. At a coarse level the agent’s stance towards the goal will be whether to abandon it or to decide its overall position toward the goal: to make it an entirely personal goal, to make a goal for another agent, to collaborate with other agents with some level of responsibility, or to have some responsibility about it that is less than total responsibility. Responsibility for a goal is the amount of effort an agent is willing to spend on seeing to its accomplishment. At a finer level the agent’s stance will go beyond an overall position to include a degree of autonomy.
Henry Hexmoor
Autonomy: A Nice Idea in Theory
Abstract
Autonomy is perplexing. It is recognisably and undeniably a critical issue in the field of intelligent agents and multi-agent systems, yet it is often ignored or simply assumed. For many, agents are autonomous by definition, and they see no need to add the tautologous prefix in explicitly considering autonomous agents, while for others autonomy in agents is an important yet problematic issue that demands attention. The difficulty when considering autonomy, however, is that there are different conceptual levels at which to reason and argue, including the philosophical and the practical.
Michael Luck, Mark d’Inverno
Adjustable Autonomy: A Response
Abstract
Gaining a fundamental understanding of adjustable autonomy (AA) is critical if we are to deploy multi-agent systems in support of critical human activities. Indeed, our recent work with intelligent agents in the “Electric Elves” (E-Elves) system has convinced us that AA is a critical part of any human collaboration software. In the following, we first briefly describe E-Elves, then discuss AA issues in E-Elves.
Milind Tambe, David Pynadath, Paul Scerri
Backmatter
Metadata
Title
Intelligent Agents VII Agent Theories Architectures and Languages
Edited by
Cristiano Castelfranchi
Yves Lespérance
Copyright year
2001
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-44631-6
Print ISBN
978-3-540-42422-2
DOI
https://doi.org/10.1007/3-540-44631-1