
About this Book

This book constitutes the revised post-conference proceedings of the 16th European Conference on Multi-Agent Systems, EUMAS 2018, held in Bergen, Norway, in December 2018.
The 18 full papers presented in this volume were carefully reviewed and selected from a total of 34 submissions. The papers report on both early and mature research and cover a wide range of topics in the field of multi-agent systems.

Table of Contents

Frontmatter

Temporal Epistemic Gossip Problems

Abstract
Gossip problems are planning problems where several agents have to share information (‘secrets’) by means of phone calls between two agents. In epistemic gossip problems the goal can be to achieve higher-order knowledge, i.e., knowledge about other agents’ knowledge; to that end, in a call agents communicate not only secrets, but also agents’ knowledge of secrets, agents’ knowledge about other agents’ knowledge about secrets, etc. Temporal epistemic gossip problems moreover impose constraints on the times of calls. These constraints are of two kinds: either they stipulate that a call between two agents must necessarily be made at some time point, or they stipulate that a call can be made within some possible (set of) interval(s). In the non-temporal version, calls between two agents are either always possible or always impossible. We investigate the complexity of the plan existence problem in this general setting. Concerning the upper bound, we prove that it is in NP in the general case, and that it is in P when the problem is non-temporal and the goal is a positive epistemic formula. As for the lower bound, we prove NP-completeness for two fragments: problems with possibly negative goals even in the non-temporal case, and problems with temporal constraints even if the goal is a set of positive atoms.
Martin C. Cooper, Andreas Herzig, Frédéric Maris, Julien Vianey
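The classic (non-epistemic, non-temporal) gossip setting underlying the abstract above is easy to simulate: on each call, both participants pool everything they know. The following minimal Python sketch (an illustration only, not the paper's epistemic formalism or its complexity results) checks whether a given call sequence is a valid plan for the goal "every agent knows every secret":

```python
def run_call_sequence(n_agents, calls):
    """Simulate two-agent phone calls in the classic gossip setting.

    Each agent starts knowing only its own secret; on a call both
    participants pool everything they know. Returns each agent's
    knowledge set after the sequence.
    """
    knows = {a: {a} for a in range(n_agents)}
    for i, j in calls:
        merged = knows[i] | knows[j]
        knows[i], knows[j] = set(merged), set(merged)
    return knows

def everyone_knows_everything(knows):
    """Goal test: all agents know all secrets."""
    everything = set(range(len(knows)))
    return all(k == everything for k in knows.values())
```

For example, the well-known 4-call plan for 4 agents, `[(0, 1), (2, 3), (0, 2), (1, 3)]`, satisfies the goal.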

Partial and Full Goal Satisfaction in the MUSA Middleware

Abstract
Classical goal-based reasoning frameworks for agents suppose goals are either achieved fully or not achieved at all: unless achieved completely, the agents have failed to address them. This behavior differs from how people act and is therefore far from real-world scenarios: at any moment, a goal has reached a certain level of satisfaction.
This work proposes to extend the classical boolean definition of goal achievement by adopting a novel approach, the Distance to Goal Satisfaction, a metric to measure the distance to the full satisfaction of a logic formula.
In this paper we define and implement this metric; subsequently, we extend MUSA, a self-adaptive middleware used to engineer a heterogeneous range of applications. This extension makes it possible to handle real situations in which requiring full goal achievement was a limitation.
Massimo Cossentino, Luca Sabatucci, Salvatore Lopes

Generalising the Dining Philosophers Problem: Competitive Dynamic Resource Allocation in Multi-agent Systems

Abstract
We consider a new generalisation of the Dining Philosophers problem with a set of agents and a set of resource units which can be accessed by them according to a fixed graph of accessibility between agents and resources. Each agent needs to accumulate a certain (fixed for the agent) number of accessible resource units to accomplish its task, and once it is accomplished the agent releases all resources and starts accumulating them again. All this happens in a succession of discrete ‘rounds’ and yields a concurrent game model of ‘dynamic resource allocation’. We use Alternating-time Temporal Logic (ATL) to specify important properties, such as goal achievability, fairness, deadlock, starvation, etc. These can be formally verified using the efficient model checking algorithm for ATL. However, the sizes of the resulting explicit concurrent game models are generally exponential both in the number of resources and the number of agents, which makes the ATL model checking procedure generally intractable on such models, especially when the number of resources is large. That is why we also develop an abstract representation of the dynamic resource allocation models and a symbolic version of the model checking procedure for ATL. That symbolic procedure reduces the time complexity of model checking to polynomial in the number of resources, though it can take a worst-case double exponential time in the number of agents.
Riccardo De Masellis, Valentin Goranko, Stefan Gruner, Nils Timm
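The round dynamics sketched above can be illustrated with a toy protocol: agents (in a fixed priority order, an assumption made here for simplicity) grab free accessible resources, and an agent whose demand is met releases everything and starts over. This is only a simulation sketch, not the paper's concurrent game model or its ATL model checking procedure:

```python
def allocation_round(demand, access, holding):
    """One round of a naive greedy resource allocation protocol (toy sketch).

    demand[a]  : number of units agent a needs
    access[a]  : set of resource ids agent a can reach
    holding[a] : set of resources agent a currently holds

    Agents act in fixed id order; an agent whose demand is met
    'accomplishes its task' and releases all its resources.
    Returns (new_holding, set_of_accomplished_agents).
    """
    taken = set().union(*holding.values()) if holding else set()
    accomplished = set()
    for a in sorted(demand):
        for r in sorted(access[a] - taken):
            if len(holding[a]) >= demand[a]:
                break
            holding[a].add(r)
            taken.add(r)
        if len(holding[a]) >= demand[a]:
            accomplished.add(a)
            taken -= holding[a]
            holding[a] = set()
    return holding, accomplished
```

With two agents, two shared resources, and unit demands, both agents can accomplish their tasks within one round, since the first releases its unit before the second acts.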

Interpreting Information in Smart Environments with Social Patterns

Abstract
Smart Environments (SEs) work in close interaction with their users. To perform properly, these systems need to process their information (both from sensing and to act) considering its meaning for people. For instance, when they manage workflows that represent users’ activities or consider the tradeoffs between alternative actions. Introducing social knowledge about the human context helps SEs to better interpret information, and thus people’s needs and actions. However, working with this knowledge faces several difficulties. Its level of abstraction is different from that directly related to system components. Moreover, it belongs to a background that engineers are often unfamiliar with. In order to address these issues, this paper proposes the use of Social Context-Aware Assistants (SCAAs). These Multi-Agent Systems (MASs) manage social information explicitly, using specifications conforming to a domain-specific Modelling Language (ML). The ML aims at describing human aspects and their changes in a given context related to a SE. Social properties describe reusable knowledge using a template with these specifications and textual explanations. Working with the ML facilitates the semi-automated transformation of specifications to integrate social and other system information, derive new information, and check properties. Specific agents are responsible for managing information, and translating data from sensors to the ML, and from this to data for actuators. A case study on an alert system to monitor group activities, extended with social knowledge to interpret people’s behaviour, illustrates the approach.
Rubén Fuentes-Fernández, Jorge J. Gómez-Sanz

Learning Hedonic Games via Probabilistic Topic Modeling

Abstract
A usual assumption in the hedonic games literature is that of complete information; however, in the real world this is almost never the case. As such, in this work we assume that the players’ preference relations are hidden: players interact within an unknown hedonic game, of which they can observe a small number of game instances. We adopt probabilistic topic modeling as a learning tool to extract valuable information from the sampled game instances. Specifically, we employ the online Latent Dirichlet Allocation (LDA) algorithm in order to learn the latent preference relations in Hedonic Games with Dichotomous preferences. Our simulation results confirm the effectiveness of our approach.
Athina Georgara, Thalia Ntiniakou, Georgios Chalkiadakis
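The learning setting above observes repeated game instances of an unknown hedonic game. As a much simpler illustration (a frequency-based sketch, not the paper's online LDA method; the representation of samples as coalition partitions is an assumption here), one can rank the coalitions a player is observed in by how often they occur:

```python
from collections import Counter

def estimate_approval(samples, player):
    """Rank coalitions containing `player` by observed frequency.

    samples: list of observed partitions, each a list of frozensets
             (coalitions). Returns (coalition, count) pairs, most
             frequent first -- a crude stand-in for learned preferences.
    """
    counts = Counter()
    for partition in samples:
        for coalition in partition:
            if player in coalition:
                counts[coalition] += 1
    return counts.most_common()
```

The paper instead treats the samples as documents and applies online Latent Dirichlet Allocation to recover latent dichotomous preference relations, which this frequency count cannot capture.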

Learning Best Response Strategies for Agents in Ad Exchanges

Abstract
Ad exchanges are widely used in platforms for online display advertising. Autonomous agents operating in these exchanges must learn policies for interacting profitably with a diverse, continually changing, but unknown market. We consider this problem from the perspective of a publisher, strategically interacting with an advertiser through a posted price mechanism. The learning problem for this agent is made difficult by the fact that information is censored, i.e., the publisher knows if an impression is sold but no other quantitative information. We address this problem using the Harsanyi-Bellman Ad Hoc Coordination (HBA) algorithm [1, 3], which conceptualises this interaction in terms of a Stochastic Bayesian Game and arrives at optimal actions by best responding with respect to probabilistic beliefs maintained over a candidate set of opponent behaviour profiles. We adapt and apply HBA to the censored information setting of ad exchanges. Also, addressing the case of stochastic opponents, we devise a strategy based on a Kaplan-Meier estimator for opponent modelling. We evaluate the proposed method using simulations wherein we show that HBA-KM achieves a substantially better competitive ratio and lower variance of return than baselines, including a Q-learning agent and a UCB-based online learning agent, and performs comparably to the offline optimal algorithm.
Stavros Gerakaris, Subramanian Ramamoorthy
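The Kaplan-Meier estimator mentioned above handles exactly this kind of right-censored data. Below is a generic pure-Python sketch of the estimator; how the paper maps sold/unsold impressions onto events and censoring is an assumption not shown here:

```python
def kaplan_meier(observations):
    """Kaplan-Meier survival estimate from right-censored data.

    observations: list of (time, event) pairs, where event=True means
    the event was observed at that time and event=False means the
    observation was censored there. Returns [(t, S(t))] at each time
    where at least one event occurred.
    """
    at_risk = len(observations)
    survival, s = [], 1.0
    for t in sorted({u for u, _ in observations}):
        deaths = sum(1 for u, e in observations if u == t and e)
        censored = sum(1 for u, e in observations if u == t and not e)
        if deaths and at_risk:
            s *= 1.0 - deaths / at_risk  # multiply in this step's survival factor
            survival.append((t, s))
        at_risk -= deaths + censored     # drop both events and censored cases
    return survival
```

For instance, with one observed event at t=1, a censored observation at t=2, and an event at t=3, the estimate is S(1)=2/3 and S(3)=0.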

Towards Online Electric Vehicle Scheduling for Mobility-On-Demand Schemes

Abstract
We study a setting where electric vehicles (EVs) can be hired to drive from pick-up to drop-off stations in a mobility-on-demand (MoD) scheme. Each point in the MoD scheme is equipped with a battery charging facility to cope with the EVs’ limited range. Customer-agents announce their trip requests over time, and the goal for the system is to maximize the number of them that are serviced. In this vein, we propose two scheduling algorithms for assigning EVs to agents. The first one is efficient for short-term reservations, while the second is efficient for both short- and long-term ones. While evaluating our algorithms in a setting using real data on MoD locations, we observe that the long-term algorithm achieves on average 2.08% higher customer satisfaction and 2.87% higher vehicle utilization compared to the short-term one for 120 trip requests, but with 17.8% higher execution time. Moreover, we propose a software package that allows for efficient management of an MoD scheme on the company side, and easy trip requests for customers.
Ioannis Gkourtzounis, Emmanouil S. Rigas, Nick Bassiliades

Two-Sided Markets: Mapping Social Welfare to Gain from Trade

Abstract
Though the definition of gain from trade extends the definition of social welfare from auctions to markets, from a mathematical point of view the additional dimension added by gain from trade makes it much more difficult to design a gain from trade maximizing mechanism. This paper provides a means of understanding when a market designer can choose the easier path of maximizing social welfare rather than maximizing gain from trade.
We provide and prove the first formula to convert a social welfare approximation bound to a gain from trade approximation bound that maintains the original order of approximation. This makes it possible to compare algorithms that approximate gain from trade with those that approximate social welfare. We evaluate the performance of our formula by using it to convert known social welfare approximation solutions to gain from trade approximation solutions. The performance of all known two-sided market solutions (that implement truthfulness, IR, BB, and approximate efficiency) is benchmarked by both their theoretical approximation bound and their performance in practice. Surprisingly, we found that some social welfare solutions achieve a better gain from trade than other solutions designed to approximate gain from trade.
Rica Gonen, Ozi Egri

Affective Decision-Making in Ultimatum Game: Responder Case

Abstract
The paper focuses on prescriptive affective decision making in Ultimatum Game (UG). It describes preliminary results on incorporating emotional aspects into normative decision making. One of the players (responder) is modelled via Markov decision process. The responder’s reward function is the weighted combination of two components: economic and emotional. The first component represents pure monetary profit while the second one reflects overall emotional state of the responder. The proposed model is tested on simulated data.
Jitka Homolová, Anastasija Černecka, Tatiana V. Guy, Miroslav Kárný

Implementing Argumentation-Enabled Empathic Agents

Abstract
In a previous publication, we introduced the core concepts of empathic agents as agents that use a combination of utility-based and rule-based approaches to resolve conflicts when interacting with other agents in their environment. In this work, we implement proof-of-concept prototypes of empathic agents with the multi-agent systems development framework Jason and apply argumentation theory to extend the previously introduced concepts to account for inconsistencies between the beliefs of different agents. We then analyze the feasibility of different admissible set-based argumentation semantics to resolve these inconsistencies. As a result of the analysis, we identify the maximal ideal extension as the most feasible argumentation semantics for the problem in focus.
Timotheus Kampik, Juan Carlos Nieves, Helena Lindgren

Towards Fully Probabilistic Cooperative Decision Making

Abstract
Modern prescriptive decision theories try to support dynamic decision making (DM) in incompletely-known, stochastic, and complex environments. Distributed solutions stand out as the only universal and scalable way to cope with DM complexity and with limited DM resources. They require a solid cooperation scheme, which harmonises the disparate aims and abilities of the involved agents (human decision makers, DM-realising devices, and their mixed groups). The paper outlines a distributed fully probabilistic DM. Its flat structuring enables a fully-scalable cooperative DM of adaptive and wise selfish agents. The paper elaborates a cooperation scheme based on sharing and processing agents’ aims in a way that negligibly increases agents’ deliberation effort, while preserving the advantages of distributed DM. Simulation results indicate the strength of the approach and confirm the possibility of using an agent-specific feedback for controlling its cooperation.
Miroslav Kárný, Zohreh Alizadeh

Endorsement in Referral Networks

Abstract
Referral networks are an emerging research area at the intersection of Active Learning and Multi-Agent Systems where experts—humans or automated agents—can redirect difficult instances (tasks or queries) to appropriate colleagues. Learning-to-refer involves estimating topic-conditioned skills of colleagues connected through a referral network for effective referrals. Proactive skill posting is a learning setting where experts are allowed a one-time local network advertisement of a subset of their top skills. The learning challenge is exploiting partially available (potentially noisy) self-skill estimates, including adversarial strategic lying to attract unwarranted referrals. In this paper, we introduce the notion of endorsement typically found in professional networks, where one colleague endorses another on particular topic(s). We first augment proactive skill posting with endorsements and propose modifications to existing algorithms to take advantage of such endorsements, penalizing subsequent referrals to agents with bogus skill reporting. Our results indicate that truthful endorsements improve performance, as they act as an additional cushion to early failures of strong experts. Extensive empirical evaluations indicate that, when combined with truthful endorsements, proactive-DIEL and \(\epsilon \)-Greedy improve performance in both market-aware and market-agnostic skill posting settings, while retaining desirable properties such as tolerance to noisy self-skill estimates and strategic lying.
Ashiqur R. KhudaBukhsh, Jaime G. Carbonell

Markov Chain Monte Carlo for Effective Personalized Recommendations

Abstract
This paper adopts a Bayesian approach for finding top recommendations. The approach is entirely personalized, and consists of learning a utility function over user preferences via employing a sampling-based, non-intrusive preference elicitation framework. We explicitly model the uncertainty over the utility function and learn it through passive user feedback, provided in the form of clicks on previously recommended items. The utility function is a linear combination of weighted features, and beliefs are maintained using a Markov Chain Monte Carlo algorithm. Our approach overcomes the problem of having conflicting user constraints by identifying a convex region within a user’s preferences model. Additionally, it handles situations where not enough data about the user is available, by exploiting the information from clusters of (feature) weight vectors created by observing other users’ behavior. We evaluate our system’s performance by applying it in the online hotel booking recommendations domain using a real-world dataset, with very encouraging results.
Michail-Angelos Papilaris, Georgios Chalkiadakis
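A minimal sketch of the kind of sampling described above is a Metropolis-Hastings chain over linear utility weights, conditioned on click feedback. The logistic click likelihood and the standard-normal prior below are assumptions made for illustration; the paper's exact feedback model and belief representation may differ:

```python
import math
import random

def mh_sample_weights(features, clicks, n_steps=2000, step=0.3, seed=0):
    """Metropolis-Hastings over linear utility weights (illustrative sketch).

    features: feature vectors of previously recommended items
    clicks:   parallel 0/1 click feedback
    Returns the posterior-mean weight vector under a N(0, I) prior and a
    logistic likelihood of a click given utility w . x (both assumptions).
    """
    rng = random.Random(seed)
    dim = len(features[0])

    def log_post(w):
        lp = -0.5 * sum(wi * wi for wi in w)          # log N(0, I) prior
        for x, c in zip(features, clicks):
            u = sum(wi * xi for wi, xi in zip(w, x))  # utility w . x
            p = 1.0 / (1.0 + math.exp(-u))            # click probability
            p = min(max(p, 1e-12), 1.0 - 1e-12)
            lp += math.log(p if c else 1.0 - p)
        return lp

    w = [0.0] * dim
    lp = log_post(w)
    total = [0.0] * dim
    for _ in range(n_steps):
        prop = [wi + rng.gauss(0.0, step) for wi in w]  # random-walk proposal
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:       # accept/reject
            w, lp = prop, lp_prop
        total = [t + wi for t, wi in zip(total, w)]
    return [t / n_steps for t in total]
```

Given a click on an item with the first feature active and no click on an item with the second, the sampled posterior mean weighs the first feature above the second.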

Computing Consensus: A Logic for Reasoning About Deliberative Processes Based on Argumentation

Abstract
Argumentation theory can encode an agent’s assessment of the state of an exchange of points of view. We present a conservative model of multiple agents potentially disagreeing on the views presented during a process of deliberation. We model this process as iteratively adding points of view (arguments), or aspects of points of view. This gives rise to a modal logic, deliberative dynamic logic, which permits us to reason about the possible developments of the deliberative state. The logic we propose applies to all natural semantics of argumentation theory. Furthermore, under a very weak assumption that the consensus considered by a group of agents is faithful to their individual views, we show that model checking these models is feasible, as long as the argumentation frameworks, which may be infinite, do not have infinite branching.
Sjur Dyrkolbotn, Truls Pedersen

Decentralized Multiagent Approach for Hedonic Games

Abstract
We propose a novel, multi-agent, decentralized approach for hedonic coalition formation games useful for settings with a large number of agents. We also propose three heuristics which can be coupled with our approach to find sub-coalitions that prefer to “bud off” from an existing coalition. We found that our approach gives better results than random partition formation, and that these results further improve when it is coupled with the proposed heuristics. As matching problems are a common type of hedonic games, we have adapted our approach for two matching problems: roommate matching and bipartite matching. Our method does well for additively separable hedonic games, where finding the optimal partition is NP-hard, and gives near optimal results for matching problems.
Kshitija Taywade, Judy Goldsmith, Brent Harrison

Deep Reinforcement Learning in Strategic Board Game Environments

Abstract
In this paper we propose a novel Deep Reinforcement Learning (DRL) algorithm that uses the concept of “action-dependent state features”, and exploits it to approximate the Q-values locally, employing a deep neural network with parallel Long Short-Term Memory (LSTM) components, each one responsible for computing an action-related Q-value. As such, all computations occur simultaneously, and there is no need to employ “target” networks and experience replay, which are techniques regularly used in the DRL literature. Moreover, our algorithm does not require previous training experiences, but trains itself online during game play. We tested our approach in the Settlers of Catan multi-player strategic board game. Our results confirm the effectiveness of our approach, since it outperforms several competitors, including the state-of-the-art jSettler heuristic algorithm devised for this particular domain.
Konstantia Xenou, Georgios Chalkiadakis, Stergos Afantenos

Counterfactually Fair Prediction Using Multiple Causal Models

Abstract
In this paper we study the problem of making predictions using multiple structural causal models defined by different agents, under the constraint that the prediction satisfies the criterion of counterfactual fairness. Relying on the frameworks of causality, fairness and opinion pooling, we build upon and extend previous work focusing on the qualitative aggregation of causal Bayesian networks and causal models. In order to complement previous qualitative results, we devise a method based on Monte Carlo simulations. This method enables a decision-maker to aggregate the outputs of the causal models provided by different agents while guaranteeing the counterfactual fairness of the result. We demonstrate our approach on a simple, yet illustrative, toy case study.
Fabio Massimo Zennaro, Magdalena Ivanovska
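A stripped-down version of the Monte Carlo aggregation idea above is a linear opinion pool: sample repeatedly, each time querying one of the agents' models at random, and average the outputs. This sketch ignores the counterfactual-fairness constraint that is the paper's actual contribution, and the model interface (a callable taking an input and a random source) is an assumption:

```python
import random

def pooled_prediction(models, sample_input, n_samples=1000, seed=0):
    """Monte Carlo linear opinion pool over several stochastic models.

    models: list of callables model(x, rng) -> float, one per agent.
    Draws n_samples predictions, each from a uniformly chosen model,
    and returns their average.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        model = rng.choice(models)   # pick one agent's model at random
        total += model(sample_input, rng)
    return total / n_samples
```

With two deterministic models returning 1.0 and 0.0, the pooled prediction concentrates around 0.5 as the number of samples grows.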

Backmatter
