About this Book

This book constitutes the proceedings of the 15th International Workshop on Computational Logic in Multi-Agent Systems, CLIMA XV, held in Prague, Czech Republic, in August 2014.

The 12 regular papers were carefully reviewed and selected from 20 submissions. The purpose of the CLIMA workshops is to provide a forum for discussing techniques, based on computational logic, for representing, programming and reasoning about agents and multi-agent systems in a formal way. This edition featured two special sessions: logics for agreement technologies and logics for games, strategic reasoning, and social choice.



Main Session

On the Complexity of Two-Agent Justification Logic

Justification Logic provides a refined version of epistemic modal logic in which proofs/justifications are taken into account. As a practical tool, Justification Logic can model argumentation and track evidence in the full logical context, measure the complexity of arguments, keep logical omniscience at bay, etc. The complexity of single-agent justification logics has been well studied and shown to be generally lower than the complexity of their modal counterparts. In this paper we investigate the complexity of two-agent Justification Logic. We show that in most cases the upper complexity bounds established for the single-agent case are maintained: the derivability problem for these logics lies in the second level of the polynomial hierarchy. For certain logics, though, we discover a complexity jump to PSPACE-completeness, a new phenomenon for Justification Logic.
Antonis Achilleos

Fair Allocation of Group Tasks According to Social Norms

We consider the problem of decomposing a group norm into a set of individual obligations for the agents comprising the group, such that if the individual obligations are fulfilled, the group obligation is fulfilled. Such an assignment of tasks to agents is often subject to additional social or organisational norms that specify permissible ways in which tasks can be assigned. An important type of social norm is the ‘fairness constraint’, which seeks to distribute individual responsibility for discharging the group norm in a ‘fair’ or ‘equitable’ way. We propose a simple language for this kind of fairness constraint and analyse the problem of computing a fair decomposition of a group obligation, both for non-repeating and for repeating group obligations.
Natasha Alechina, Wiebe van der Hoek, Brian Logan
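As a toy illustration of one fairness constraint of the kind the paper studies, the sketch below decomposes a set of group tasks among agents so that loads differ by at most one. The names and the equal-load constraint are our own illustrative assumptions, not the authors' formal model:

```python
def fair_decomposition(tasks, agents):
    """Round-robin assignment: agents' loads differ by at most one task."""
    assignment = {a: [] for a in agents}
    for i, task in enumerate(tasks):
        assignment[agents[i % len(agents)]].append(task)
    return assignment

def is_equitable(assignment):
    """A simple 'fairness constraint': no agent carries 2+ more tasks than another."""
    loads = [len(ts) for ts in assignment.values()]
    return max(loads) - min(loads) <= 1

alloc = fair_decomposition(["t1", "t2", "t3", "t4", "t5"], ["a", "b"])
print(alloc)                # {'a': ['t1', 't3', 't5'], 'b': ['t2', 't4']}
print(is_equitable(alloc))  # True
```

For repeating group obligations, the same round-robin idea would be applied per round, rotating the starting agent so that the surplus task does not always fall on the same agent.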

A Conceptual Model for Situated Artificial Institutions

Artificial institutions have been proposed to regulate the actions of agents in open multi-agent systems (MAS). They are composed of abstractions such as norms, roles, goals, etc. In this paper, we say that an artificial institution is situated when all of the regulation it performs is based on facts occurring in the environment where agents act. Conceiving situated institutions is challenging, as it requires situating all the abstractions possibly involved in MAS regulation while taking into account their different natures, semantics, life cycles, etc. This work introduces a conceptual model of a situated artificial institution (SAI), structured along two axes: norms and constitutive rules. While norms are based on status functions, the constitutive rules allow a SAI model to state clearly the conditions under which an element of the environment carries a status function. Based on this conceptual model, we present a first version of a SAI specification language, discuss its features, and illustrate its dynamics through examples.
Maiquel de Brito, Jomi Fred Hübner, Olivier Boissier
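The two SAI axes can be caricatured in a few lines of code. In this hypothetical sketch (the auction scenario, predicate names, and both rules are our own assumptions, not the paper's specification language), a constitutive rule states when an environment fact counts as a status function, and a norm is expressed over status functions rather than over raw environment facts:

```python
env_facts = {("agent1", "placed_bid")}

def status_functions(facts):
    """Constitutive rule: an agent that placed a bid counts as a 'participant'."""
    sf = set()
    for (agent, event) in facts:
        if event == "placed_bid":
            sf.add((agent, "participant"))
    return sf

def obligations(sf):
    """Norm over status functions: every participant is obliged to pay."""
    return {(a, "obliged_to_pay") for (a, status) in sf
            if status == "participant"}

print(obligations(status_functions(env_facts)))
# {('agent1', 'obliged_to_pay')}
```

Separating the two layers is the point: the norm never mentions bids, so the same norm applies unchanged if the constitutive rules later ground "participant" in different environment facts.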

Evolving Bridge Rules in Evolving Multi-Context Systems

In open environments, agents need to reason with knowledge from various sources, represented in different languages. Managed Multi-Context Systems (mMCSs) allow for the integration of knowledge from different heterogeneous sources in an effective and modular way, where so-called bridge rules express how information flows between the contexts. The problem is that mMCSs are essentially static, as they were not designed to run in a dynamic scenario. Some recent approaches, among them evolving Multi-Context Systems (eMCSs), extend mMCSs with the ability not only to integrate knowledge represented in heterogeneous KR formalisms, but also to react to, and reason in the presence of, dynamic (and often temporary) observations, and to evolve by incorporating new knowledge. These approaches, however, only consider the dynamics of the knowledge bases, whereas the dynamics of the bridge rules, i.e., the dynamics of how information flows, is neglected. In this paper, we fill this gap by further extending the eMCS framework with the ability to update the bridge rules of each context, taking into account an incoming stream of observed bridge rules. We show that several desirable properties are satisfied in our framework, and that the important problem of consistency management can be dealt with.
Ricardo Gonçalves, Matthias Knorr, João Leite

Logics for Agreement Technologies

Enumerating Extensions on Random Abstract-AFs with ArgTools, Aspartix, ConArg2, and Dung-O-Matic

We compare four implementations of reasoning tools dedicated to Abstract Argumentation Frameworks: ArgTools, ASPARTIX, ConArg2, and Dung-O-Matic. They have been tested over three different models of randomly generated graphs: the Erdős-Rényi model, the Kleinberg small-world model, and the scale-free Barabási-Albert model. This first comparison is useful for studying the behaviour of these reasoners over networks with different topologies (including small-world ones): we scale the number of arguments to probe the limits of today’s systems. Such results can be used to guide further improvements, specifically of ConArg2, which we recently developed and tested for the first time in this work.
Stefano Bistarelli, Fabio Rossi, Francesco Santini
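As a minimal sketch of this kind of experimental setup (assuming nothing about the internals of the four tools being compared), the following generates an Erdős-Rényi-style random argumentation framework and computes one of the standard semantics, the grounded extension, by iterating the characteristic function F(S) = {a | every attacker of a is attacked by some member of S}:

```python
import random

def random_af(n, p, seed=0):
    """AF with n arguments; each directed attack appears with probability p."""
    rng = random.Random(seed)
    args = list(range(n))
    attacks = {(a, b) for a in args for b in args
               if a != b and rng.random() < p}
    return args, attacks

def attackers(a, attacks):
    return {b for (b, target) in attacks if target == a}

def grounded_extension(args, attacks):
    """Least fixed point of the characteristic function, from the empty set."""
    s = set()
    while True:
        # arguments defended by s: every attacker is counter-attacked from s
        new = {a for a in args
               if all(any((d, b) in attacks for d in s)
                      for b in attackers(a, attacks))}
        if new == s:
            return s
        s = new

args, attacks = random_af(8, 0.25)
print(grounded_extension(args, attacks))
```

Enumerating all extensions under the other semantics (preferred, stable, etc.) is where the compared solvers differ most, since those problems are intractable in the worst case, while the grounded extension shown here is polynomial.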

Automated Planning of Simple Persuasion Dialogues

We consider a simple form of non-adversarial persuasion dialogue in which one participant (the persuader) aims to convince the other (the responder) to accept the topic of the dialogue by asserting sets of beliefs. The responder replies honestly to indicate whether it finds the topic acceptable (we make no prescription as to which formalism and semantics must be used for this, only assuming some function for determining acceptable beliefs from a logical knowledge base). Our persuader has a model of the responder, which assigns probabilities to sets of beliefs, representing the likelihood that each set is the responder’s actual set of beliefs. The beliefs the persuader chooses to assert, and the order in which it asserts them (i.e. its strategy), can impact the success of the dialogue, and the success of a particular strategy cannot generally be guaranteed (because of the uncertainty over the responder’s beliefs). We define our persuasion dialogue as a classical planning problem, which can then be solved by an automated planner to generate a strategy that maximises the chance of success given the persuader’s model of the responder; this allows us to exploit the power of existing automated planners, which have been shown to be efficient in many complex domains. We provide preliminary results that demonstrate how the efficiency of our approach scales with the number of beliefs.
Elizabeth Black, Amanda Coles, Sara Bernardini
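The optimisation the planner performs can be illustrated with a brute-force toy (our own hypothetical setup, not the paper's planning encoding): the persuader knows a probability distribution over the responder's possible belief sets and picks the assertion set that maximises the chance the topic becomes acceptable. The acceptance rule here is a deliberately trivial stand-in for whatever semantics the responder uses:

```python
from itertools import chain, combinations

def acceptable(beliefs, topic):
    # toy acceptance rule (illustrative only): the topic is acceptable
    # exactly when both premises "a" and "b" are among the beliefs
    return {"a", "b"} <= beliefs

def best_assertion(persuader_beliefs, responder_model, topic):
    """Pick the subset of the persuader's beliefs maximising success probability."""
    subsets = chain.from_iterable(
        combinations(sorted(persuader_beliefs), r)
        for r in range(len(persuader_beliefs) + 1))
    return max(subsets,
               key=lambda s: sum(p for bs, p in responder_model
                                 if acceptable(set(bs) | set(s), topic)))

# responder holds {"b"} with probability 0.6, nothing with probability 0.4
model = [(frozenset({"b"}), 0.6), (frozenset(), 0.4)]
print(best_assertion({"a"}, model, "t"))  # ('a',)
```

Exhaustive enumeration is exponential in the number of beliefs, which is precisely why the paper compiles the problem into classical planning and hands it to an off-the-shelf planner instead.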

Empirical Evaluation of Strategies for Multiparty Argumentative Debates

Debating agents often have different areas of expertise and conflicting opinions on the subjects under discussion. They face the problem of deciding how to contribute to the current state of the debate in order to satisfy their personal goals. We focus on target sets, which specify minimal changes to the current state of the debate that allow agents to satisfy their goals, where changes are the addition and/or deletion of attacks among arguments. In this paper, we experimentally test a number of strategies based on target sets, and we evaluate them with respect to different criteria, such as the length of the debate, the happiness of the agents, and the rationality of the result.
Dionysios Kontarinis, Elise Bonzon, Nicolas Maudet, Pavlos Moraitis

How to Build Input/Output Logic

In this paper we analyze various derivation rules of input/output logic in isolation and define the corresponding semantics. We develop fixed-point characterizations for input/output logics involving rules of cumulative transitivity and present new completeness proofs. A toolbox for building input/output logics is thereby created. We use this toolbox to correct a mistake that appeared in earlier work applying input/output logic to constitutive norms.
Xin Sun
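The effect of adding cumulative transitivity can be seen in a toy rendering of input/output logic where norms are (body, head) pairs over atoms and "entailment" is plain membership (a drastic simplification; the paper treats the rules and their semantics in full propositional generality):

```python
norms = {("a", "x"), ("x", "y")}  # norm (body, head): given body, head is output

def out_simple(norms, inputs):
    """Simple-minded output: heads of norms whose body is among the inputs."""
    return {head for (body, head) in norms if body in inputs}

def out_cumulative(norms, inputs):
    """Close under cumulative transitivity: outputs are fed back as inputs."""
    current = set(inputs)
    while True:
        new = out_simple(norms, current)
        if new <= current:
            return new
        current |= new

print(sorted(out_simple(norms, {"a"})))      # ['x']
print(sorted(out_cumulative(norms, {"a"})))  # ['x', 'y']
```

With input {"a"}, the simple output delivers only "x"; cumulative transitivity additionally chains through ("x", "y") to deliver "y", which is exactly the kind of rule-by-rule difference the fixed-point characterizations pin down.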

Logics for Games, Strategic Reasoning, and Social Choice

The Problem of Judgment Aggregation in the Framework of Boolean-Valued Models

A framework for Boolean-valued judgment aggregation is described. The simple (im)possibility results in this paper highlight the role of the set of truth values and its algebraic structure. In particular, it is shown that central properties of aggregation rules can be formulated as homomorphy or order-preservation conditions on the mapping between the power-set algebra over the set of individuals and the algebra of truth values. This is further evidence that the problems in aggregation theory are driven by information loss, which in our framework is given by a coarsening of the algebra of truth values.
Daniel Eckert, Frederik Herzberg
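The information loss at issue is visible already in the classic two-valued discursive dilemma, which the Boolean-valued framework generalises. In this sketch, three judges vote on premises p and q; premise-wise majority accepts both (and hence their conjunction), yet a majority of the individual judges reject the conjunction:

```python
judgments = {
    "judge1": {"p": True,  "q": True},
    "judge2": {"p": True,  "q": False},
    "judge3": {"p": False, "q": True},
}

def majority(prop):
    """Proposition-wise majority over all judges."""
    votes = [j[prop] for j in judgments.values()]
    return sum(votes) > len(votes) / 2

# premise-wise aggregation accepts p and q, hence entails p & q ...
print(majority("p"), majority("q"))           # True True
# ... yet only one judge individually accepts p & q
conj_votes = [j["p"] and j["q"] for j in judgments.values()]
print(sum(conj_votes) > len(conj_votes) / 2)  # False
```

Passing from the full profile of individual judgments to majority verdicts coarsens the information available, and the paper's algebraic reading locates such paradoxes precisely in that coarsening.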

A Behavioral Hierarchy of Strategy Logic

Starting from the seminal work introducing Alternating-Time Temporal Logic, formalisms for strategic reasoning have assumed a prominent role in multi-agent systems verification. Among them, Strategy Logic (SL) makes it possible to represent sophisticated solution concepts by treating agent strategies as first-order objects.
A drawback of SL’s high expressive power is that it admits non-behavioral strategies: a choice of an agent, at a given point of a play, may depend on choices other agents can make in the future or in counterfactual plays. As such moves are unpredictable, these strategies cannot easily be implemented, making the use of the logic problematic in practice.
In this paper, we describe a hierarchy of SL fragments defined as syntactic restrictions of the recently introduced Boolean-Goal Strategy Logic (SL[bg]). Specifically, we introduce Alternating-Goal Strategy Logic (SL[ag]) that, by imposing a suitable alternation over the nesting of the Boolean connectives in SL[bg], induces two dual chains of sets of formulas, the conjunctive and the disjunctive one. A formula belongs to level i of the conjunctive chain if it contains only conjunctions of atomic goals together with a unique formula belonging to level i − 1 of the disjunctive chain. The disjunctive chain is defined dually. We formally prove that the classic and behavioral semantics for SL[ag] coincide. Additionally, we study the related model-checking problem, showing that it is 2ExpTime-complete.
Fabio Mogavero, Aniello Murano, Luigi Sauro

Synthesis and Verification of Uniform Strategies for Multi-agent Systems

We present a model checking algorithm for alternating-time temporal logic (ATL) with imperfect information and imperfect recall. This variant of ATL is arguably the most appropriate when it comes to the modeling and specification of multi-agent systems. The related variant of model checking is known to be theoretically hard (\(\Delta^{\rm P}_{2}\)- to PSPACE-complete, depending on the assumptions), but virtually no practical attempts at it have been proposed so far. Our algorithm searches through the set of possible uniform strategies, utilizing a simple reduction technique. In consequence, it not only verifies the existence of a suitable strategy but also produces one (if it exists). We validate the algorithm experimentally on a simple scalable class of models, with promising results.
Jerzy Pilecki, Marek A. Bednarczyk, Wojciech Jamroga
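The search space the algorithm explores can be illustrated on a tiny hand-made model (our own, not the paper's benchmark or reduction technique). A uniform memoryless strategy must pick the same action in indistinguishable states; below we enumerate all such strategies for one agent and check which ones enforce reaching a goal state from every possible initial state:

```python
from itertools import product

states = ["s0", "s1", "s2", "goal"]
obs = {"s0": "o1", "s1": "o1", "s2": "o2", "goal": "og"}  # s0 and s1 look alike
actions = ["a", "b"]
# deterministic transition function: (state, action) -> state
trans = {("s0", "a"): "s2", ("s0", "b"): "s1",
         ("s1", "a"): "s2", ("s1", "b"): "s0",
         ("s2", "a"): "goal", ("s2", "b"): "s0",
         ("goal", "a"): "goal", ("goal", "b"): "goal"}

def reaches_goal(strategy, start, steps=5):
    """Follow the observation-based strategy for a bounded number of steps."""
    s = start
    for _ in range(steps):
        s = trans[(s, strategy[obs[s]])]
    return s == "goal"

# a uniform strategy maps each observation (not each state) to one action
observations = sorted(set(obs.values()))
uniform_strategies = [dict(zip(observations, acts))
                      for acts in product(actions, repeat=len(observations))]
winning = [st for st in uniform_strategies
           if all(reaches_goal(st, s0) for s0 in ("s0", "s1"))]
print(winning[0])  # {'o1': 'a', 'o2': 'a', 'og': 'a'}
```

Even here the space has |actions|^|observations| candidates, and it grows exponentially with the model, which is why the paper's pruning/reduction technique matters in practice.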

Partial Information and Uniform Strategies

We present an alternating-time temporal epistemic logic with uniform strategies, interpreted in a novel way on transition systems, for modelling situations in which agents with partial information interact to determine how the system updates. This logic, uATEL, allows us to model which properties agents can enforce when they act according to strategies based on their knowledge. Apart from the usual memoryless strategies, we distinguish state-based memory, where agents recall the history of previous states, from perfect recall, where agents also recall their actions. We show that this makes a difference. Our logic includes three strategic operators for groups, representing the case where all the agents in the group cooperate actively but do not share their knowledge, the case where some agents in the group may be passive, and the case where all the agents in the group share their knowledge. We include a detailed comparison to the literature on the subject.
Hans van Ditmarsch, Sophia Knight

