About this Book

This book constitutes the refereed proceedings of the 9th International RuleML Symposium, RuleML 2015, held in Berlin, Germany, in August 2015.

The 25 full papers, 4 short papers, 2 full keynote papers, 2 invited research track overview papers, 1 invited paper, and 1 invited abstract presented were carefully reviewed and selected from 63 submissions. The papers cover the following topics: general RuleML track; complex event processing track; existential rules and Datalog+/- track; legal rules and reasoning track; rule learning track; and industry track.



Invited Papers


The Herbrand Manifesto

Thinking Inside the Box

The traditional semantics for relational logic (sometimes called Tarskian semantics) is based on the notion of interpretations of constants in terms of objects external to the logic. Herbrand semantics is an alternative that is based on truth assignments for ground sentences without reference to external objects. Herbrand semantics is simpler and more intuitive than Tarskian semantics; consequently, it is easier to teach and learn. Moreover, it is stronger than Tarskian semantics. For example, while it is not possible to finitely axiomatize integer arithmetic with Tarskian semantics, this can be done easily with Herbrand semantics. The downside is a loss of some common logical properties, such as compactness and inferential completeness. However, there is no loss of inferential power: anything that can be deduced according to Tarskian semantics can also be deduced according to Herbrand semantics. Based on these results, we argue that there is value in using Herbrand semantics for relational logic in place of Tarskian semantics. It alleviates many of the current problems with relational logic and ultimately may foster a wider use of relational logic in human reasoning and computer applications.
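To make the contrast concrete (a sketch in our own notation, not drawn from the paper): a Tarskian interpretation supplies an external domain and denotations for constants, while a Herbrand model is nothing more than a set of ground atoms.

```latex
\begin{align*}
  \text{Tarskian:} &\quad I = (D, \cdot^{I}),
    & I \models p(a) &\iff a^{I} \in p^{I}\\
  \text{Herbrand:} &\quad M \subseteq \text{ground atoms},
    & M \models p(a) &\iff p(a) \in M
\end{align*}
```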

Michael Genesereth, Eric Kao

Constraint Handling Rules - What Else?

Constraint Handling Rules (CHR) is both an effective concurrent declarative constraint-based programming language and a versatile computational formalism. While conceptually simple, CHR is distinguished by a remarkable combination of desirable features:

a semantic foundation in classical and linear logic,

an effective and efficient sequential and parallel execution model,

guaranteed properties like the anytime online algorithm properties, and

powerful analysis methods for deciding essential program properties.

This overview of some CHR-related research and applications is by no means meant to be complete. Essential introductory reading for CHR is provided by the survey article and the books cited in the accompanying bibliography. Up-to-date information on CHR can be found online on the CHR website, which also offers the slides of the keynote talk associated with this article, online demo versions, and free downloads of the language.

Thom Frühwirth

Consistency Checking of Re-engineered UML Class Diagrams via Datalog+/-

UML class diagrams (UCDs) are a widely adopted formalism for modeling the intensional structure of a software system. Although UCDs typically guide the implementation of a system, it is common in practice that developers need to recover the class diagram from an implemented system. This process is known as reverse engineering. A fundamental property of reverse-engineered (or simply re-engineered) UCDs is consistency, showing that the system is realizable in practice. In this work, we investigate the consistency of re-engineered UCDs and pinpoint the exact computational complexity of the problem. The upper bound is obtained by exploiting algorithmic techniques developed for conjunctive query answering under guarded Datalog+/-, a key member of the Datalog+/- family of KR languages, while the lower bound is obtained by simulating the behavior of a polynomial-space Turing machine.

Georg Gottlob, Giorgio Orsi, Andreas Pieris

A Brief Overview of Rule Learning

In this paper, we provide a brief summary of elementary research in rule learning. The two main research directions are descriptive rule learning, with the goal of discovering regularities that hold in parts of the given dataset, and predictive rule learning, which aims at generalizing the given dataset so that predictions on new data can be made. We briefly review key learning tasks such as association rule learning, subgroup discovery, and the covering learning algorithm, along with their most important prototypes. The paper also highlights recent work in rule learning on the Semantic Web and Linked Data as an important application area.
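The covering strategy mentioned above can be sketched in a few lines of Python. The dataset shape and the purity heuristic below are invented for illustration; real rule learners use far richer refinement operators and stopping criteria.

```python
def learn_rule(pos, neg, attributes):
    """Greedily pick the single attribute=value test with the best purity."""
    best, best_score = None, -1.0
    for a in attributes:
        for v in sorted({x[a] for x in pos}):
            p = sum(1 for x in pos if x[a] == v)
            n = sum(1 for x in neg if x[a] == v)
            score = p / (p + n + 1)  # Laplace-style purity estimate
            if score > best_score:
                best, best_score = (a, v), score
    return best

def covering(pos, neg, attributes):
    """Separate-and-conquer: learn a rule, remove covered positives, repeat."""
    rules = []
    pos = list(pos)
    while pos:
        rule = learn_rule(pos, neg, attributes)
        covered = [x for x in pos if x[rule[0]] == rule[1]]
        if not covered:          # no progress: stop to avoid looping
            break
        rules.append(rule)
        pos = [x for x in pos if x not in covered]
    return rules
```

Each learned rule accounts for part of the concept extension, which is exactly why disjunctive concepts need several rules.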

Johannes Fürnkranz, Tomáš Kliegr

Distribution and Uncertainty in Complex Event Recognition

Complex event recognition has proved to be a valuable tool for a wide range of applications, ranging from logistics and finance to healthcare. In this paper, we reflect on some of these application areas to outline open research problems in event recognition. In particular, we focus on the questions of (1) how to distribute event recognition and (2) how to deal with the inherent uncertainty observed in many event recognition scenarios. For both questions, we provide a brief overview of the state of the art and point out research gaps.

Alexander Artikis, Matthias Weidlich

General RuleML Track


Compact Representation of Conditional Probability for Rule-Based Mobile Context-Aware Systems

Context-aware systems have gained huge popularity in recent years due to the rapid evolution of personal mobile devices. Equipped with a variety of sensors, such devices are sources of a lot of valuable information that allows a system to act in an intelligent way. However, the certainty and presence of this information may depend on many factors, such as measurement accuracy or sensor availability. This dynamic nature of information may cause the system to work improperly or not at all. To make a context-aware system robust, it should be provided with an uncertainty handling mechanism. Several approaches have been developed to handle uncertainty in context knowledge bases, including probabilistic reasoning, fuzzy logic, and certainty factors. In this paper, we present a representation method that combines the strengths of rules based on attributive logic and Bayesian networks. Such a combination allows conditional probability distributions of random variables to be efficiently encoded into a reasoning structure called XTT2. This provides a method for building hybrid context-aware systems that allow for robust inference in uncertain knowledge bases.
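In the spirit of the combination described above, a conditional probability table can be flattened into probability-annotated rules. The sketch below is ours (names and data are invented), and XTT2's actual table structure is considerably richer.

```python
def cpt_to_rules(child, parents, cpt):
    """Flatten a CPT into (condition, conclusion, probability) rules.

    cpt maps a tuple of parent values to {child_value: probability}.
    """
    rules = []
    for parent_values, dist in cpt.items():
        condition = list(zip(parents, parent_values))   # attribute tests
        for value, prob in dist.items():
            rules.append((condition, (child, value), prob))
    return rules
```

Each rule reads as "if the parent attributes take these values, conclude the child value with this probability", which is the shape a rule engine can evaluate directly.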

Szymon Bobek, Grzegorz J. Nalepa

FOWLA, A Federated Architecture for Ontologies

The progress of information and communication technologies has greatly increased the quantity of data to process; managing data heterogeneity is therefore a pressing problem. In the 1980s, the concept of a Federated Database Architecture (FDBA) was introduced as a collection of components united in a loosely coupled federation. Semantic Web technologies mitigate the data heterogeneity problem; however, due to heterogeneity in data structure, the integration of several ontologies is still a complex task. To tackle this problem, we propose a loosely coupled federated ontology architecture (FOWLA). Our approach allows the coexistence of various ontologies sharing common data dynamically at query execution through logical rules. We illustrate the advantages of adopting our approach through several examples and benchmarks, and compare it with other existing initiatives.

Tarcisio M. Farias, Ana Roxin, Christophe Nicolle

User Extensible System to Identify Problems in OWL Ontologies and SWRL Rules

The Semantic Web uses ontologies to associate meaning with Web content so machines can process it. One problem inherent to this approach is that, as its popularity increases, there is an ever-growing number of ontologies available, leading to difficulties in choosing appropriate ones. With that in mind, we created a system that allows users to evaluate ontologies and rules. It is composed of the Metadata description For Ontologies/Rules (MetaFOR), an ontology in OWL, and a tool to convert any OWL ontology to MetaFOR. With the MetaFOR version of an ontology, it is possible to use SWRL rules to identify anomalies in it, whether problems already documented in the literature or user-defined ones. SWRL is familiar to users, so it is easier to define new project-specific anomalies. We present a case study where the system detects nine problems from the literature and two user-defined ones.

João Paulo Orlando, Mark A. Musen, Dilvan A. Moreira

Semantics of Notation3 Logic: A Solution for Implicit Quantification

Since the development of Notation3 Logic, several years have passed in which the theory has been refined and used in practice by different reasoning engines such as cwm, FuXi or EYE. Nevertheless, a clear model-theoretic definition of its semantics is still missing. This leaves room for individual interpretations and renders it difficult to make clear statements about its relation to other logics such as DL or FOL or even about such basic concepts as correctness. In this paper we address one of the main open challenges: the formalization of implicit quantification. We point out how the interpretation of implicit quantifiers differs in two of the above mentioned reasoning engines and how the specification, proposed in the W3C team submission, could be formalized. Our formalization is then put into context by integrating it into a model-theoretic definition of the whole language. We finish our contribution by arguing why universal quantification should be handled differently than currently prescribed.

Dörthe Arndt, Ruben Verborgh, Jos De Roo, Hong Sun, Erik Mannens, Rik Van De Walle

API4KP Metamodel: A Meta-API for Heterogeneous Knowledge Platforms

API4KP (API for Knowledge Platforms) is a standard development effort that targets the basic administration services as well as the retrieval, modification, and processing of expressions in machine-readable languages, including but not limited to knowledge representation and reasoning (KRR) languages, within heterogeneous (multi-language, multi-nature) knowledge platforms. KRR languages of concern in this paper include but are not limited to RDF(S), OWL, RuleML, and Common Logic, and the knowledge platforms may support one or several of these. Additional languages are integrated using mappings into KRR languages. A general notion of structure for knowledge sources is developed using monads. The presented API4KP metamodel, in the form of an OWL ontology, provides the foundation of an abstract syntax for communications about knowledge sources and environments, including a classification of knowledge sources by mutability, structure, and an abstraction hierarchy, as well as the use of performatives (inform, query, ...), languages, logics, dialects, formats, and lineage. Finally, the metamodel provides a classification of operations on knowledge sources and environments which may be used for requests (message-passing).

Tara Athan, Roy Bell, Elisa Kendall, Adrian Paschke, Davide Sottara

Rule-Based Exploration of Structured Data in the Browser

We present Dexter, a browser-based, domain-independent structured-data explorer. Dexter enables users to explore data from multiple local and Web-accessible heterogeneous data sources such as files, Web pages, APIs, and databases in the form of tables. Dexter's users can also compute tables from existing ones and validate tables (base or computed) through declarative rules. Dexter enables users to perform ad hoc queries over their tables with higher expressivity than that supported by the underlying data sources, evaluating a user's query on the client side while delegating sub-queries to remote sources whenever possible. Dexter also allows users to visualize and share tables, and to export tables along with their computation rules (e.g., in JSON, plain XML, and RuleML). Dexter has been tested on a variety of data sets from domains such as government and apparel manufacturing, and is available online.
Sudhir Agarwal, Abhijeet Mohapatra, Michael Genesereth, Harold Boley

PSOA2Prolog: Object-Relational Rule Interoperation and Implementation by Translation from PSOA RuleML to ISO Prolog

PSOA2Prolog consists of a multi-step source-to-source normalizer followed by a mapper to a pure (Horn) subset of ISO Prolog, and we show that its steps preserve semantics. By composing PSOA2Prolog with XSB Prolog, a fast Prolog engine, we achieved a novel instantiation of our PSOATransRun framework. We evaluated this interoperation and implementation technique with a suite of 30 test cases using 90 queries, and found a considerable speed-up compared to our earlier instantiation.

Gen Zou, Harold Boley

Similarity-Based Strict Equality in a Fully Integrated Fuzzy Logic Language

The extension of a given similarity relation

$$\mathcal R$$

between pairs of symbols of a particular alphabet to terms built with such symbols can be implemented at a very high abstraction level by a set of fuzzy program rules defining a dedicated predicate. This predicate incorporates "Similarity-based Strict Equality" into the new fuzzy logic language FASILL (an acronym of "Fuzzy Aggregators and Similarity Into a Logic Language") that we have recently developed in our research group. FASILL aims to cope with implicit/explicit truth degree annotations, a great variety of connectives, and unification by similarity. In this paper we show the benefits of using this sophisticated notion of equality, which is somehow inspired by the so-called "Strict Equality" of functional and functional-logic languages with lazy semantics and the "Similarity-based Equality" of fuzzy logic languages using weak unification (

$$\sim $$

), a notion beyond classic syntactic unification.

Pascual Julián-Iranzo, Ginés Moreno, Carlos Vázquez

Building a Hybrid Reactive Rule Engine for Relational and Graph Reasoning

The relational syntax used by Rete rule engines is cumbersome for traversing paths compared to hierarchical or object-oriented languages like XPath or Java, and searching the join space for references has performance implications. This paper proposes reactive rule engine enhancements to support both relational and graph reasoning, with improvements at both the language and the engine level.

The language will contain both relational and graph constructs that can be used together within the same rule. The implementation targets Drools, an open-source, Java-based Rete rule engine, but could be applied to any Rete engine and language of a similar class.

Examples are used to describe the language extensions and discuss their behaviour. Benchmarking is used to compare the performance of the two reasoning approaches in different scenarios and provide recommendations on how to optimize a rule base.

Mario Fusco, Davide Sottara, István Ráth, Mark Proctor

Complex Event Processing Track


Using PSL to Extend and Evaluate Event Ontologies

The representation of events plays a key role in a wide range of Semantic Web applications, and several ontologies have been proposed to support this task. However, a review of existing event ontologies on the web reveals limited reasoning being done in their applications. To investigate this, we designed a set of reasoning problems (competency questions) aimed at providing a pragmatic assessment of the reasoning capabilities of three well-known Semantic Web event ontologies – SEM, The Event Ontology, and LODE. Using OWL and SWRL axiomatizations of the Process Specification Language (PSL) Ontology, we specify maximal extensions of the existing event ontologies. We then evaluate the resulting set of OWL and SWRL ontologies against our reasoning problems, using the results to both assess the abilities of existing Semantic Web event ontologies, and to explore the potential gains that may be achieved through additional axioms.

Megan Katsumi, Michael Grüninger

Probabilistic Event Pattern Discovery

Detecting occurrences of complex events in an event stream requires designing queries that describe real-world situations. However, specifying complex event patterns is a challenging task that requires domain and system specific knowledge. Novel approaches are required that automatically identify patterns of potential interest in a heavy flow of events.

We present and evaluate a probability-based approach for discovering frequent and infrequent sequences of events in an event stream. The approach was tested on a real-world dataset as well as on synthetically generated data, with the task being the identification of the most frequent event patterns of a given length. The results were evaluated by measuring recall and precision. Our experiments show that the approach can be applied to efficiently retrieve patterns based on their estimated frequencies.
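The flavour of the task, finding the most frequent event patterns of a given length, can be illustrated with a toy Python sketch that counts contiguous subsequences of a stream. This is only an illustration of the problem setting; the paper's approach is probability-based and considerably more sophisticated.

```python
from collections import Counter

def pattern_frequencies(stream, length):
    """Estimate the relative frequency of every contiguous pattern."""
    counts = Counter(tuple(stream[i:i + length])
                     for i in range(len(stream) - length + 1))
    total = sum(counts.values())
    return {p: c / total for p, c in counts.items()}

def most_frequent(stream, length):
    """Return one pattern with the highest estimated frequency."""
    freqs = pattern_frequencies(stream, length)
    return max(freqs, key=freqs.get)
```

On real event streams the counting must be done incrementally over a heavy flow of events, which is where estimation rather than exact counting becomes necessary.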

Ahmad Hasan, Kia Teymourian, Adrian Paschke

How to Combine Event Stream Reasoning with Transactions for the Semantic Web

Semantic Sensor Web is a new trend of research integrating Semantic Web technologies with sensor networks. It uses Semantic Web standards to describe not only the data produced by the sensors, but also the sensors and their networks, which enables interoperability of sensor networks and provides a way to formally analyze and reason about them. Since sensors produce data at a very high rate, they require solutions to reason efficiently about what complex events occur based on the data captured. In this paper we propose

$$\mathcal {TR}^{ev}$$

as a solution to combine the detection of complex events with the execution of transactions for these domains. It is an abstract logic to model and execute reactive transactions, parametric on a pair of oracles defining the basic primitives of the domain, which makes it suitable for a wide range of applications. We provide oracle instantiations combining RDF/OWL and relational database semantics and, based on these oracles, illustrate how the logic can be useful for these domains.

Ana Sofia Gomes, José Júlio Alferes

Existential Rules and Datalog+/- Track


Ontology-Based Multidimensional Contexts with Applications to Quality Data Specification and Extraction

Data quality assessment and data cleaning are context-dependent activities. Starting from this observation, in previous work a context model for the assessment of the quality of a database was proposed. A context takes the form of a possibly virtual database or a data integration system into which the database under assessment is mapped, for additional analysis, processing, and quality data extraction. In this work, we extend contexts with dimensions, making multidimensional data quality assessment possible. At the core of multidimensional contexts we find ontologies written as Datalog

$$^\pm $$

programs with provably good properties in terms of query answering. We use this language to represent dimension hierarchies, dimensional constraints, and dimensional rules, and to specify quality data. Query answering relies on and triggers dimensional navigation, and becomes an important tool for the extraction of quality data.

Mostafa Milani, Leopoldo Bertossi

Existential Rules and Bayesian Networks for Probabilistic Ontological Data Exchange

We investigate the problem of exchanging probabilistic data between ontology-based probabilistic databases. The probabilities of the probabilistic source databases are compactly and flexibly encoded via Bayesian networks, which are closely related to the management of provenance. For the ontologies and the ontology mappings, we consider existential rules from the Datalog+/– family. We analyze the computational complexity of the problem of deciding whether there exists a probabilistic (universal) solution for a given probabilistic source database relative to a (probabilistic) ontological data exchange problem. We provide a host of complexity results for this problem for different classes of existential rules. We also analyze the complexity of answering UCQs (unions of conjunctive queries) in this framework.

Thomas Lukasiewicz, Maria Vanina Martinez, Livia Predoiu, Gerardo I. Simari

Binary Frontier-Guarded ASP with Function Symbols

It has been acknowledged that emerging Web applications require features that are not available in standard rule languages like Datalog or Answer Set Programming (ASP); e.g., they are not powerful enough to deal with anonymous values (objects that are not explicitly mentioned in the data but whose existence is implied by the background knowledge). In this paper, we introduce a new rule language based on ASP extended with function symbols, which can be used to reason about anonymous values. In particular, we define binary frontier-guarded programs (BFG programs) that allow for disjunction, function symbols, and negation under the stable model semantics. In order to ensure decidability, BFG programs are syntactically restricted by allowing at most binary predicates and by requiring rules to be frontier-guarded. BFG programs are expressive enough to simulate ontologies expressed in popular Description Logics (DLs), capture their recent non-monotonic extensions, and simulate conjunctive query answering over many standard DLs. We provide an elegant automata-based algorithm to reason in BFG programs, which yields a 3ExpTime upper bound for reasoning tasks like deciding consistency or cautious entailment; due to existing results, these problems are known to be 2ExpTime-hard.
Mantas Šimkus

Graal: A Toolkit for Query Answering with Existential Rules

This paper presents Graal, a Java toolkit dedicated to ontological query answering in the framework of existential rules. We consider knowledge bases composed of data and an ontology expressed by existential rules. The main features of Graal are the following: a basic layer providing generic interfaces to store and query various kinds of data; forward chaining and query rewriting algorithms; structural analysis of decidability properties of a rule set; a textual format and its parser; and import of OWL 2 files. We describe in more detail the query rewriting algorithms, which rely on original techniques, and report some experiments.
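Forward chaining over rules, one of the features listed above, amounts to saturating a fact base: repeatedly match rule bodies against known facts and add the derived heads until nothing new is produced. The Python sketch below is our own toy version for variable-only-or-constant Datalog-style rules; it is not Graal's implementation, and it omits existential variables and the decidability analysis Graal provides.

```python
def match(atom, fact, subst):
    """Extend subst so that atom (with '?'-prefixed variables) equals fact."""
    pred, args = atom
    fpred, fargs = fact
    if pred != fpred or len(args) != len(fargs):
        return None
    s = dict(subst)
    for a, f in zip(args, fargs):
        if a.startswith('?'):
            if s.get(a, f) != f:
                return None
            s[a] = f
        elif a != f:
            return None
    return s

def apply_rule(body, head, facts):
    """Derive all head instances whose body atoms all match known facts."""
    results = set()
    def search(i, subst):
        if i == len(body):
            pred, args = head
            results.add((pred, tuple(subst.get(a, a) for a in args)))
            return
        for fact in facts:
            s = match(body[i], fact, subst)
            if s is not None:
                search(i + 1, s)
    search(0, {})
    return results

def saturate(facts, rules):
    """Forward chaining: apply rules until a fixpoint is reached."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            new = apply_rule(body, head, facts) - facts
            if new:
                facts |= new
                changed = True
    return facts
```

For example, transitive reachability can be computed by saturating edge facts with two rules (edge implies path; path plus edge extends path).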

Jean-François Baget, Michel Leclère, Marie-Laure Mugnier, Swan Rocher, Clément Sipieter

Legal Rules and Reasoning Track


Input/Output STIT Logic for Normative Systems

In this paper we study input/output STIT logic. We introduce the semantics, proof theory and prove the completeness theorem. Input/output STIT logic has more expressive power than Makinson and van der Torre’s input/output logic. We show that input/output STIT logic is decidable and free from Ross’ paradox.

Xin Sun

Towards Formal Semantics for ODRL Policies

Most policy-based access control frameworks explicitly model whether execution of certain actions (read, write, etc.) on certain assets should be permitted or denied, and usually assume that such actions are disjoint from each other, i.e., there does not exist any explicit or implicit dependency between actions of the domain. This in turn means that conflicts among rules or policies can only occur if those contradictory rules or policies constrain the same action. In the present paper, motivated by the example of ODRL 2.1 as a policy expression language, we follow a different approach and shed light on possible dependencies among actions of access control policies. We propose an interpretation of the formal semantics of general ODRL policy expressions and motivate rule-based reasoning over such policy expressions, taking both explicit and implicit dependencies among actions into account. Our main contributions are (i) an exploration of different kinds of ambiguities that might emerge based on explicit or implicit dependencies among actions, and (ii) a formal interpretation of the semantics of general ODRL policies based on a defined abstract syntax for ODRL, which shall eventually enable rule-based reasoning over a set of such policies.

Simon Steyskal, Axel Polleres

Representing Flexible Role-Based Access Control Policies Using Objects and Defeasible Reasoning

Access control systems often use rule based frameworks to express access policies. These frameworks not only simplify the representation of policies, but also provide reasoning capabilities that can be used to verify the policies. In this work, we propose to use defeasible reasoning to simplify the specification of role-based access control policies and make them modular and more robust. We use the Flora-2 rule-based reasoner for representing a role-based access control policy. Our early experiments show that the wide range of features provided by Flora-2 greatly simplifies the task of building the requisite ontologies and the reasoning components for such access control systems.

Reza Basseda, Tiantian Gao, Michael Kifer, Steven Greenspan, Charley Chell

Explanation of Proofs of Regulatory (Non-)Compliance Using Semantic Vocabularies

With recent regulatory advances, modern enterprises not only have to comply with regulations but also have to be prepared to provide explanations of proofs of (non-)compliance. On top of compliance checking, this necessitates modeling concepts from regulations and enterprise operations so that stakeholder-specific, close-to-natural-language explanations can be generated. We take a step in this direction by using the Semantics of Business Vocabulary and Rules to model and map vocabularies of regulations and enterprise operations. Using these vocabularies and leveraging the proof generation abilities of an existing compliance engine, we show how such explanations can be created. The basic natural language explanations that we generate can be easily enriched by adding the requisite domain knowledge to the vocabularies.

Sagar Sunkle, Deepali Kholkar, Vinay Kulkarni

Rule Learning Track


Rule Generalization Strategies in Incremental Learning of Disjunctive Concepts

Symbolic Machine Learning systems and applications, especially when applied to real-world domains, must face the problem of concepts that cannot be captured by a single definition but require several alternate definitions, each of which covers part of the full concept extension. This problem is particularly relevant for incremental systems, where progressive covering approaches are not applicable and the learning and refinement of the various definitions are interleaved during the learning phase. In these systems, the learned model depends not only on the order in which the examples are provided, but also on the choice of the specific definition to be refined. This paper proposes different strategies for determining the order in which the alternate definitions of a concept should be considered in a generalization step, and evaluates their performance on a real-world dataset.

Stefano Ferilli, Andrea Pazienza, Floriana Esposito

Using Substitutive Itemset Mining Framework for Finding Synonymous Properties in Linked Data

Over the last two decades, frequent itemset and association rule mining has attracted huge attention from the scientific community, resulting in numerous publications, models, algorithms, and optimizations of basic frameworks. In this paper we introduce an extension of the frequent itemset framework called substitutive itemsets. Substitutive itemsets allow the discovery of equivalences between items, i.e., they represent pairs of items that can be used interchangeably in many contexts. We present the basic notions pertaining to substitutive itemsets, describe the implementation of the proposed method, available as a RapidMiner plugin, and illustrate the use of the framework for mining substitutive object properties in Linked Data.
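The intuition behind substitutive items, pairs that rarely co-occur yet appear in overlapping contexts, can be sketched with a toy measure of our own devising (the paper's actual framework defines substitutive itemsets differently and more carefully):

```python
def contexts(transactions, item):
    """The set of itemsets that accompany an item across transactions."""
    return {frozenset(t - {item}) for t in transactions if item in t}

def substitutability(transactions, a, b):
    """Jaccard overlap of the contexts of a and b, zero if they co-occur."""
    ca, cb = contexts(transactions, a), contexts(transactions, b)
    if not ca or not cb:
        return 0.0
    co = sum(1 for t in transactions if a in t and b in t)
    if co:                      # items bought together are complements, not substitutes
        return 0.0
    return len(ca & cb) / len(ca | cb)
```

Intuitively, tea and coffee score high when each appears with milk and sugar but never with the other.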

Mikołaj Morzy, Agnieszka Ławrynowicz, Mateusz Zozuliński

Learning Characteristic Rules in Geographic Information Systems

We provide a general framework for learning characterization rules for a set of objects in Geographic Information Systems (GIS), relying on the definition of distance quantified paths. Such expressions specify how to navigate between the different layers of the GIS, starting from the target set of objects to characterize. We define a generality relation between quantified paths and prove that it is monotonous with respect to the notion of coverage, allowing us to develop an interactive and effective algorithm to explore the search space of possible rules. We describe GISMiner, an interactive system that we have developed based on our framework. Finally, we present our experimental results from a real GIS about mineral exploration.

Ansaf Salleb-Aouissi, Christel Vrain, Daniel Cassard

Industry Track


Rule-Based Data Transformations in Electricity Smart Grids

The systems that will control future electricity networks (also referred to as Smart Grids) will be based on heterogeneous data models. Expressing transformation rules between different Smart Grid data models in well-known rule languages, such as the Semantic Web Rule Language (SWRL) and the Jena Rule Language (JRL), will improve interoperability in this domain. Rules expressed in these languages can be easily reused in different applications, since they can be processed by freely available Semantic Web reasoners. In this way, it is possible to integrate heterogeneous Smart Grid systems without using costly ad hoc converters. This paper presents a solution that leverages SWRL and JRL transformation rules to resolve existing mismatches between two of the most widely accepted standard data models in Smart Grids.

Rafael Santodomingo, Mathias Uslar, Jose Antonio Rodríguez-Mondéjar, Miguel Angel Sanz-Bobi

Norwegian State of Estate: A Reporting Service for the State-Owned Properties in Norway

Statsbygg is the public sector administration company responsible for reporting state-owned property data in Norway. Traditionally, the reporting process has been resource-demanding and error-prone. The State of Estate (SoE) business case presented in this paper creates a new reporting service by sharing, integrating, and utilizing cross-sectorial property data, aiming to increase the transparency and accessibility of property data from public sectors and enable downstream innovation. This paper explains the ambitions of the SoE business case, highlights the technical challenges related to data integration, data quality, data sharing, and analysis, and discusses the current solution and the potential use of rule technologies.

Ling Shi, Bjørg E. Pettersen, Ivar Østhassel, Nikolay Nikolov, Arash Khorramhonarnama, Arne J. Berre, Dumitru Roman

Ontology Reasoning Using Rules in an eHealth Context

Traditionally, nurse call systems in hospitals are rather simple: patients have a button next to their bed to call a nurse. Which specific nurse is called cannot be controlled, as there is no extra information available. This is different for solutions based on semantic knowledge: if the state of caregivers (busy or free), their current position, and, for example, their skills are known, a system can always choose the most suitable nurse for a call. In this paper we describe such a semantic nurse call system implemented using the EYE reasoner and Notation3 rules. The system is able to perform OWL-RL reasoning. Additionally, we use rules to implement complex decision trees. We compare our solution to an implementation using OWL-DL, the Pellet reasoner, and SPARQL queries. We show that our purely rule-based approach gives promising results. Further improvements will lead to a mature product which will significantly change the organization of modern hospitals.

Dörthe Arndt, Ben De Meester, Pieter Bonte, Jeroen Schaballie, Jabran Bhatti, Wim Dereuddre, Ruben Verborgh, Femke Ongenae, Filip De Turck, Rik Van de Walle, Erik Mannens

