
About this Book

This volume constitutes the refereed proceedings of ten international workshops, OTM Academy, Industry Case Studies Program, EI2N, INBAST, Meta4eS, OnToContent, ORM, SeDeS, SINCOM and SOMOCO 2012, held as part of OTM 2012 in Rome, Italy, in September 2012. The 66 revised full papers presented were carefully reviewed and selected from a total of 127 submissions. The volume also includes 7 papers from the On the Move Academy (OTMA) 2012 as well as 4 CoopIS 2012 poster papers and 5 ODBASE 2012 poster papers. The papers cover various aspects of computer supported cooperative work (CSCW), middleware, Internet/Web data management, electronic commerce, enterprise modelling, workflow management, knowledge flow, agent technologies, information retrieval, software architectures, service-oriented computing, and cloud computing.

Table of Contents

Frontmatter

On The Move Academy (OTMA) 2012

The 9th OnTheMove Academy Chairs’ Message

The term ‘academy’, originating from Greek antiquity, implies a strong mark of quality and excellence in higher education and research that is upheld by its members. This label is equally valid in our context. OTMA Ph.D. students get the opportunity of publishing in a highly reputed publication channel, namely the Springer LNCS OTM workshops proceedings. The OTMA faculty members, who are well-respected researchers and practitioners, critically reflect on the students’ work in a highly positive and inspiring atmosphere, so that the students can improve not only their research capacities but also their presentation and writing skills. OTMA participants also learn how to review scientific papers, and they enjoy ample possibilities to build and expand their professional network. This includes personal feedback and exclusive time with prominent experts on-site. In addition, thanks to an OTMA LinkedIn group the students can stay in touch with all OTMA participants and interested researchers. And last but not least, an ECTS credit certificate rewards their hard work.

Peter Spyns, Anja Metzner

Improving Efficiency of Data Intensive Applications on GPU Using Lightweight Compression

In many scientific and industrial applications, GPGPU (General-Purpose Computing on Graphics Processing Units) programming has reported excellent speed-ups when compared to traditional CPU (central processing unit) based libraries. However, for data intensive applications this benefit may be much smaller or may completely disappear due to time-consuming memory transfers. Up to now, the gain from processing on the GPU was noticeable only for problems where data transfer could be compensated by calculations, which usually means large data sets and complex computations. This paper evaluates a new method of data decompression directly in GPU shared memory which minimizes data transfers on the path from disk, through main memory and global GPU device memory, to the GPU processor. The method is successfully applied to pattern matching problems. Results of experiments show considerable speed improvement for both large and small data volumes, which is a significant step forward in GPGPU computing.
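For readers unfamiliar with the term, the sketch below illustrates one common "lightweight" compression scheme, frame-of-reference coding; it is an illustrative assumption rather than the paper's method, and the round trip is shown on the CPU, whereas the paper performs the decompression step inside GPU shared memory.

```python
# Illustrative sketch only: frame-of-reference (FOR) coding, a typical
# lightweight compression scheme. The round trip is shown on the CPU
# purely to convey the idea; the paper decompresses in GPU shared memory.

def for_encode(values):
    """Encode integers as a reference value plus small offsets."""
    reference = min(values)
    offsets = [v - reference for v in values]
    return reference, offsets

def for_decode(reference, offsets):
    """Decode by adding the reference back to every offset."""
    return [reference + o for o in offsets]

if __name__ == "__main__":
    data = [10023, 10027, 10031, 10024]
    ref, offs = for_encode(data)          # offsets fit in far fewer bits
    assert for_decode(ref, offs) == data
    print(ref, offs)                      # 10023 [0, 4, 8, 1]
```

The point of such schemes is that decoding is a single addition per value, which is why it can be done cheaply on the fly close to the processor.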

Piotr Przymus, Krzysztof Kaczmarski

Towards a Trust and Reputation Framework for Social Web Platforms

Trust and Reputation Systems (TRSs) represent a significant trend in decision support for Internet-based interactions. They help users to decide whom to trust and how much to trust a transaction. They are also an effective mechanism to encourage honesty and cooperation among users, resulting in healthy online markets or communities. The basic idea is to let parties rate each other so that new public knowledge can be created from personal experiences. The major difficulty in designing a reputation system is making it robust against malicious attacks. Our contribution in this paper is twofold. Firstly, we combine multiple research agendas into a holistic approach to building a robust TRS. Secondly, we focus on one TRS component which is the reputation computing engine and provide a novel investigation into an implementation of the engine proposed in [7].
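As a rough illustration of what a reputation computing engine does, the following toy sketch aggregates peer ratings into a single score with older ratings discounted; it is a hypothetical model for orientation only, not the engine of [7] that the paper investigates.

```python
# Hypothetical sketch of a reputation computation: aggregate peer ratings,
# discounting older ratings so that recent behaviour dominates.
from dataclasses import dataclass

@dataclass
class Rating:
    rater: str
    score: float   # in [0, 1]
    age: int       # transactions elapsed since the rating was given

def reputation(ratings, decay=0.9):
    """Weighted average of ratings, decayed geometrically with age."""
    if not ratings:
        return 0.5                      # neutral prior for newcomers
    weights = [decay ** r.age for r in ratings]
    return sum(w * r.score for w, r in zip(weights, ratings)) / sum(weights)

print(reputation([Rating("alice", 1.0, 0), Rating("bob", 0.2, 5)]))
```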

Thao Nguyen, Luigi Liquori, Bruno Martin, Karl Hanks

Social Emergent Semantics for Personal Data Management

In order to use our personal data within our day-to-day activities, we need to manage it in a way that is easy to consume, which currently is not an easy task. People have found their own ways to organize their personal data, such as categorizing files in folders, labeling emails, etc. This is acceptable to a certain degree, since we have to deal with some (human) difficulties such as our limited capacity for categorization and our incapacity to maintain highly structured artifacts for long periods of time. We believe that to organize this great amount of personal data, we need the help of our communities. In this work, we apply the emergent semantics field to personal data management, aiming to decrease the cognitive effort we spend on simple tasks, handling semantic evolution in conjunction with our close peers.

Cristian Vasquez

Extracting a Causal Network of News Topics

Because of the abundance of online news, it is impossible for users to process all the available information, and tools are needed to help process it. To mitigate this challenge we propose generating a network of causally related news topics to help the user understand and navigate the news. We assume that by providing the causes or effects of a news topic, the user will be able to relate current news to past news topics that the user knows about, or will discover past news topics as currently relevant. Also, the additional context will facilitate the understanding of the current news topic.

To generate the causal network, information is extracted from several distributed news sources while maintaining important journalistic features such as source referencing and author attribution. We propose ranking different causes of an event, to provide a more intuitive summary of multiple causal relations.

To make the network easily understandable, news topics must be represented in a format that can be causally related; therefore, a news topic model is proposed. The model is based on the phrases used by online news sources to describe an event or activities during a limited time-frame. To maintain usability, the results must be provided in a timely manner from streaming sources and in an easy to understand format.
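The data structure behind such a network can be illustrated with a small sketch; the topic names and scores below are invented for illustration and do not come from the paper.

```python
# Illustrative data structure only (names and scores are invented):
# a directed graph of news topics in which an edge A -> B means
# "A is reported as a cause of B", with weights ranking competing causes.
from collections import defaultdict

causal_net = defaultdict(list)   # effect -> list of (cause, score)

def add_causal_link(cause, effect, score):
    causal_net[effect].append((cause, score))

def ranked_causes(effect):
    """Return the causes of a topic, strongest-ranked first."""
    return sorted(causal_net[effect], key=lambda c: c[1], reverse=True)

add_causal_link("drought in region X", "grain price spike", 0.8)
add_causal_link("export ban", "grain price spike", 0.6)
print(ranked_causes("grain price spike"))
```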

Eduardo Jacobo Miranda Ackerman

Towards Model-Driven Requirements Analysis for Context-Aware Well-Being Systems

Over the years, the interest in the field of pervasive computing has increased. A specific class of applications in this domain is that of context-aware applications. These programs utilize context information to adapt to their current environment. This quality can be used, among others, when dealing with health care and well-being situations. However, as the user requirements for these specific applications are almost never well-specified, there is a real risk that the resulting application does not offer the right set of features to the user. In order to mitigate this risk, we propose a model-driven method of requirements engineering for systems in the domain of context-aware well-being applications. This method will result in an explicit specification of requirements, and an improved alignment of user requirements and system features. Furthermore, due to the model-driven character of the method, the artifacts created during the requirements engineering phase of the development process can directly be incorporated in the subsequent development steps.

Steven Bosems

OWL 2: Towards Support for Context Uncertainty, Fuzziness and Temporal Reasoning

Context-awareness plays a vital role in pervasive computing. Context-aware applications rely on context information in order to provide appropriate and consistent adaptive services, integrating sensor data from a diverse range of sources with varying degrees of accuracy, precision and dynamism, and which are failure prone. Context information is therefore inherently imperfect and exhibits various types of uncertainty: incompleteness, imprecision, vagueness, inconsistency, and/or temporality. Context-aware applications must consequently be supported by an adequate context information modeling and reasoning formalism.

Wilbard Nyamwihula, Burchard Bagile

Semantic Web Technologies’ Role in Smart Environments

Today, semantic web technologies and Linked Data principles are providing formalism, standards, shared data semantics and data integration for unstructured data over the web. The result is a transformation from the Web of Interaction to the Web of Data and actionable information. At the crossroads lie our daily lives, containing a plethora of unstructured data originating from sources ranging from low-cost sensors and appliances to every computational element used in our modern lives, including computers, interactive watches, mobile phones, GPS devices, etc. These facts accentuate an opportunity for system designers to combine these islands of data into a large actionable information space which can be utilized by automated and intelligent agents. As a result, this phenomenon is likely to institute a space that is smart enough to provide humans with comfort of living and to build an efficient society. Thus, in this context, the focus of my research has been to propose solutions to problems in the domains of smart environment and energy management, under the umbrella of ambient intelligence. The potential role of semantic web technologies in these proposed solutions has been analyzed, and architectures for these solutions were designed, implemented and tested.

Faisal Razzak

OTM Industry Case Studies Program 2012

OTM Industry Program 2012 PC Co-chairs Message

Cloud computing, service-oriented architecture, business process modelling, enterprise architecture, enterprise integration, semantic interoperability: what is an enterprise systems administrator to do with the constant stream of industry hype surrounding him, constantly bathing him with (apparently) new ideas and new technologies? Keeping up is nearly impossible, and the academic literature does not help solve the problem, with hyped technologies catching on in the academic world just as easily as in the industrial world. The most unfortunate thing is that these technologies are actually useful, and the press hype only hides that value.

Hervé Panetto

A Toolkit for Choreographies of Services: Modeling, Enactment and Monitoring

The support of the business community has considerably spurred the advancement of SOA by bringing useful supporting standards, for instance for Web Services description and collaboration design. Nevertheless, as platforms spread over geographically distant locations, there is a real need to keep the link between design time and runtime. Model-to-model approaches ensure this kind of link, and in this context choreography models help answer such issues. We bring our know-how as ESB and BPM experts and propose an open source toolkit for services choreography. This toolkit provides a way to design a choreography, execute it on an Enterprise Service Bus and finally monitor it. A Model to Model (M2M) top-down approach is implemented. We illustrate our purpose with a business use case inspired by the CHOReOS European Project.

Amira Ben Hamida, Julien Lesbegueries, Nicolas Salatgé, Jean-Pierre Lorré

Requirements towards Effective Process Mining

Process mining is a prominent contemporary research topic. This paper describes requirements to be fulfilled for its effective practical adoption, on the basis of application scenarios and a sample project.

Matthias Lohrmann, Alexander Riedel

Industrial Session

Specialization of a Fundamental Ontology for Manufacturing Product Lifecycle Applications: A Case Study for Lifecycle Cost Assessment

This paper aims to study the specialization of a fundamental ontology describing manufacturing product lifecycle applications. This specialization is conducted through a case study of an Italian company. On the one hand this specialization is meant to define a specific ontology for LCC applications and on the other hand it aims to validate the mapping with the fundamental ontology implemented for manufacturing product lifecycle applications.

Ana Milicic, Apostolos Perdikakis, Soumaya El Kadiri, Dimitris Kiritsis, Sergio Terzi, Paolo Fiordi, Silvia Sadocco

Information Infrastructures for Utilities Management in the Brewing Industry

There is an increasing focus on sustainability in manufacturing industries. Operations management and plant/process control have a significant impact on production efficiency and hence environmental footprint. Information systems are an increasingly important tool for monitoring, managing and optimising production efficiency and resource consumption.

An advanced Utilities Management System (UMS), that operates on the G2® real-time intelligent systems platform, has been developed at the Yatala brewery, Australia. An important characteristic of the UMS is its strong integration with the existing information and automation systems at the plant. The tight integration was required to maximise effectiveness and ease of use as well as to minimise development effort and cost.

Michael Lees, Robert Ellen, Marc Steffens, Paul Brodie, Iven Mareels, Rob Evans

Internal Logistics Integration by Automated Storage and Retrieval Systems: A Reengineering Case Study

Nowadays, factors like globalization, productivity, and reduction of time-to-market make the impact of logistics on production by far wider than in the past. Such a complex scenario has generated considerable interest in the design, planning and control of warehousing systems as new research topics (De Koster et al. 2007). However, in spite of the importance of warehouse design and management, authors agree on the lack of systematic approaches (Baker and Canessa, 2009). Moreover, the existing contributions do not typically consider the problem of warehouse design in a continuous improvement context. On the contrary, with enhanced customer demand, for most manufacturing industries it has become increasingly important to continuously monitor and improve internal logistics. This paper presents a preliminary study for the reengineering of the logistics of a Southern Italian firm producing shoes and accessories, based on formal modeling. We address a widely used solution for warehouse material handling, i.e., Automated Storage and Retrieval Systems (AS/RSs) (Dotoli and Fanti 2007). These systems are a combination of automatic material handling and storage/retrieval equipment characterized by high accuracy and speed. In order to reengineer the logistic system, a Unified Modelling Language (UML) (Miles and Hamilton, 2006) model is adopted (Dassisti, 2003).

Michele Dassisti, Mariagrazia Dotoli, Nicola Epicoco, Marco Falagario

Workshop on Enterprise Integration, Interoperability and Networking (EI2N) 2012

EI2N 2012 PC Co-chairs Message

After the successful Sixth edition in 2011, the seventh edition of the Enterprise Integration, Interoperability and Networking workshop (EI2N 2012) has been organised as part of the OTM 2012 Federated Conferences and is co-sponsored by the IFAC Technical Committee 5.3 Enterprise Integration and Networking, the IFIP TC 8 WG 8.1 Design and Evaluation of Information Systems, the SIG INTEROP Grande-Région on Enterprise Systems Interoperability, the SIG INTEROP-VLab.IT on Enterprise Interoperability and the French CNRS National Research Group GDR MACS.

Hervé Panetto, Lawrence Whitman, Michele Dassisti, J. Cecil, Jinwoo Park

Enterprise Services and Sustainability

An Ontology-Based Model for SME Network Contracts

Even if collaboration is considered an effective solution to improve business strategies, SMEs often lack common principles and common forms of contractual coordination. Several policies implemented by the E.U. have addressed the setup of a comprehensive SME policy framework. However, European institutions seem to have focused more on organizational devices to conduct business activities rather than on contractual forms of coordination. In April 2009, Italy adopted a law on network contracts to promote the development of inter-firm cooperation strategies and to foster enterprises’ innovation and growth. Even if this law represents a novelty in Europe and may offer new challenges and hints, it still presents some shortcomings in its formulation. The current research aims at presenting the Italian law on network contracts, highlighting both its potentialities and its defects. A formal model to support the design of an SME network was proposed, providing both an ontology-based model to help define the contract in a structured way, and a basic workflow to identify the important phases of the network design, i.e., the feasibility study and the negotiation. In this way, the network rules and criteria for controlling the network members’ contributions are defined. Mathematical tools derived from performance optimization were exploited.

Giulia Bruno, Agostino Villa

A Framework for Negotiation-Based Sustainable Interoperability for Space Mission Design

The need to improve the time spent performing space mission feasibility design studies has led the aerospace industry to the adoption of Concurrent Engineering methods. These high-performance concepts parallelise the design tasks, effectively reducing design time, but at the cost of increasing risk and rework. The fragile interoperability in this design environment depends greatly on the seniority of the space domain engineers and their expertise in the space design engineering area. As design studies get more complex, with an increasing number of new domains, systems and applications, terminologies and data dependability, together with growing pressure and need for adaptation, the design interoperability arena becomes extremely hard to manage and control. This paper presents the concept of developing and maintaining strong interoperability nodes between the design domains by providing a framework of cloud-based services dedicated to negotiating and enforcing a sustainable interoperability between high-performance businesses.

Carlos Coutinho, Ricardo Jardim-Goncalves, Adina Cretan

Service-Oriented Approach for Agile Support of Product Design Processes

The need to respond quickly to new market opportunities and the high variability of consumer demands lead industrial companies to review their adopted organisation, so as to improve their reactivity and to facilitate the coupling with the business enactment. Therefore, these companies require agility in their information systems to allow scalability of business needs and flexibility of the design process. In this paper, we propose to treat business activities as services, based on the service paradigm, whereby a design process is made of agile service orchestrations. We discuss the interest of using a service-oriented approach and propose a layered architecture for design process enactment.

Safa Hachani, Hervé Verjus, Lilia Gzara

Enterprise Integration and Economical Crisis for Mass Craftsmanship: A Case Study of an Italian Furniture Company

The paper presents a real industrial case of an Italian furniture company facing the problem of a strong evolution imposed by the market scenario during the economic crisis. The new challenges stem from reduced earnings margins, an increased level of customization, and swift demand fluctuations. The aim of the paper is to provide a clear analysis of the existing problems and constraints, in the light of the project to design the transition toward a newly defined “mass craftsmanship” configuration of the company.

Michele Dassisti, Michele De Nicolò

Semantic Issues in Enterprise Engineering

Towards a Benchmark for Ontology Merging

Benchmarking approaches for ontology merging is challenging and has received little attention so far. A key problem is that there is in general no single best solution for a merge task and that merging may either be performed symmetrically or asymmetrically. As a first step to evaluate the quality of ontology merging solutions we propose the use of general metrics such as the relative coverage of the input ontologies, the compactness of the merge result as well as the degree of introduced redundancy. We use these metrics to evaluate three merge approaches for different merge scenarios.
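As an illustration of the kind of metrics mentioned above, the sketch below computes relative coverage and compactness over ontologies reduced to plain sets of concept labels; these simplified definitions are assumptions for illustration, not the paper's exact formulations.

```python
# Minimal sketch, treating ontologies as plain sets of concept labels.
# Real metrics are defined over ontology structure; this only conveys
# the intuition of coverage and compactness for a merge result.

def relative_coverage(merged, source):
    """Fraction of a source ontology's concepts preserved in the merge."""
    return len(merged & source) / len(source)

def compactness(merged, source_a, source_b):
    """Merged size relative to the combined size of the inputs."""
    return len(merged) / (len(source_a) + len(source_b))

o1 = {"Person", "Student", "Course"}
o2 = {"Person", "Lecture", "Course", "Room"}
merged = {"Person", "Student", "Course", "Lecture", "Room"}

print(relative_coverage(merged, o1))   # 1.0
print(relative_coverage(merged, o2))   # 1.0
print(compactness(merged, o1, o2))     # 5/7, roughly 0.71
```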

Salvatore Raunich, Erhard Rahm

System Definition of the Business/Enterprise Model

Business and enterprise modeling has gained momentum. Today, there are various approaches that allow describing an enterprise from different points of view. However, it is not possible to cope with the growing variety of heterogeneous models without a clear and well-defined approach. This paper proposes a system definition of business/enterprise models based on systems analysis and general system theory. This definition is verified against four examples of existing business/enterprise models given by different authors and widely used both in academia and industry.

Nataliya Pankratova, Oleksandr Maistrenko, Pavlo Maslianko

Semantic and Structural Annotations for Comprehensive Model Analysis

Nowadays enterprise models exist in a variety of types and are based mostly on graphical modeling languages, prominent examples being UML and BPMN. Model relations oftentimes are not made explicit and are hard to analyze. As a consequence, the information integration and interoperability potential of existing enterprise models cannot be exploited efficiently. This paper presents an approach, where based on annotations the model-contained information is made accessible for further processing. Multiple dimensions of the model (i.e. semantic and structural) are considered, allowing a comprehensive view on the model-contained information. Based on that, inter-model relations can be discovered and analyzed.

Axel Hahn, Sabina El Haoum

On the Data Interoperability Issues in SCOR-Based Supply Chains

Supply Chain Operations Reference (SCOR) is a reference model which can be used to design and implement inter-organizational processes of a supply chain. Its implementation assumes a high level of integration between the supply chain partners which reduces their flexibility. The problem of integration requirements may be addressed by enabling the supply chain partners to use their enterprise information systems (instead of specialized software tools) in the implementation and facilitation of SCOR processes. The performance of these processes can be significantly improved if the enterprise information systems of the supply chain actors are interoperable. In this paper, we are using semantic SCOR models to highlight data interoperability requirements for cross-enterprise SCOR processes and to make this data explicit, by relating it to the corresponding domain ontology concepts.

Milan Zdravković, Miroslav Trajanović

Workshop on Industrial and Business Applications of Semantic Web Technologies (INBAST) 2012

INBAST 2012 PC Co-chairs Message

The Semantic Web was planned as a web of data that enables machines to understand the meaning of information on the WWW. Many of the Semantic Web technologies proposed by the W3C already exist and are used in various contexts where sharing data is a common necessity, such as scientific research or data exchange among businesses. However, the Semantic Web as originally envisioned, a system that enables machines to understand and respond to complex human requests based on their meaning, has remained largely unrealized and its critics have questioned its feasibility. Semantic Web technologies have found a greater degree of practical adoption among specialized communities and organizations for intra-company projects. The practical constraints toward adoption have appeared less challenging where domain and scope is more limited than that of the general public and the WWW.

Rafael Valencia García, Thomas Moser, Ricardo Colomo Palacios

An Ontology Evolution-Based Framework for Semantic Information Retrieval

Ontologies evolve continuously during their life cycle to adapt to new requirements and necessities. Ontology-based information retrieval systems use semantic annotations that are also regularly updated to reflect new points of view. In order to provide a general solution and to minimize the users’ effort in the ontology enriching process, a methodology for extracting terms and evolving the domain ontology from Wikipedia is proposed in this work. The framework presented here combines an ontology-based information retrieval system with an ontology evolution approach in such a way that it simplifies the tasks of updating concepts and relations in domain ontologies. This framework has been validated in a scenario where ICT-related cloud services matching the user needs are to be found.

Miguel Ángel Rodríguez-García, Rafael Valencia-García, Francisco García-Sánchez

Applications for Business Process Repositories Based on Semantic Standardization

In recent years, researchers have become increasingly interested in developing frameworks and tools for the generation, customization and utilization of business process model content. One of the enabling central techniques for automated repository standardization is Natural Language Processing (NLP). This work reviews previous works on NLP standardization, and presents a set of derived Business Process Management (BPM) applications. We then discuss how these applications can be extended and improved for better utilization of the process repositories by (1) deploying a larger set of semantic models; and (2) integrating complementing applications.

Maya Lincoln, Avi Wasser

Post-via: After Visit Tourist Services Enabled by Semantics

The Internet has disrupted traditional tourism services. Thus, knowing tourists’ travel experience becomes a privileged tool to enable new business strategies based on the feedback provided by the tourists themselves. Post-Via captures and effectively manages tourists’ feedback and, based on semantic technologies, integrates opinions and services to enhance tourists’ loyalty. In a nutshell, Post-Via tries to unite on one platform the necessary components to perform traditional Customer Relationship Management functions and opinion mining techniques to provide services of direct marketing using web semantic components and recommender systems.

Ricardo Colomo-Palacios, Alejandro Rodríguez-González, Antonio Cabanas-Abascal, Joaquín Fernández-González

Ontology-Based Support for Security Requirements Specification Process

The security requirements specification (SRS) is an integral aspect of the development of secured information systems and entails the formal documentation of the security needs of a system in a correct and consistent way. However, in many cases there is a lack of sufficiently experienced security experts or security requirements (SR) engineers within an organization, which limits the quality of the SR that are specified. This paper presents an approach that leverages ontologies and requirements boilerplates in order to alleviate the effect of a lack of highly experienced personnel for SRS. It also offers a credible starting point for the SRS process. A preliminary evaluation of the tool prototype, the ReqSec tool, was used to demonstrate the approach and to confirm its usability in supporting the SRS process. The tool helps to reduce the amount of effort required, stimulates discovery of latent security threats, and enables the specification of good-quality SR.

Olawande Daramola, Guttorm Sindre, Thomas Moser

Formalization of Semantic Annotation for Systems Interoperability in a PLM Environment

Nowadays, the need for systems collaboration across enterprises and through different domains has become more and more ubiquitous. Due to the lack of standardized models or architectures, as well as semantic mismatches and inconsistencies, research on information and model exchange, transformation, discovery and reuse has been carried out in recent years. One of the main challenges in this research is to overcome the semantic gap between enterprise applications along the product lifecycle, which involves many distributed and heterogeneous enterprise applications. We propose, in this paper, an approach for semantically annotating different knowledge views (business process models, business rules, conceptual models, etc.) in the Product Lifecycle Management (PLM) environment. These formal semantic annotations make explicit the tacit knowledge generally engraved in application models and act as bridges to support all actors along the product lifecycle. A case study based on a specific manufacturing process is presented to demonstrate how our semantic annotations can be applied in a Business to Manufacturing (B2M) interoperability context.

Yongxin Liao, Mario Lezoche, Eduardo Loures, Hervé Panetto, Nacer Boudjlida

Workshop on Methods, Evaluation, Tools and Applications for the Creation and Consumption of Structured Data for the e-Society (META4eS) 2012

Meta4eS 2012 PC Co-chairs Message

The future eSociety - renamed OnTheMoveSociety in the context of OTM 2012 - is a society created by extensive use of digital technologies at all levels of interaction between its members. It is a society that evolves based on knowledge and that empowers individuals to be active participants in the world’s economy by creating virtual communities that benefit from social inclusion, access to information, enhanced interaction and freedom of expression, among others.

Ioana Ciuciu, Anna Fensel

A Methodological Framework for Ontology and Multilingual Termontological Database Co-evolution

Ontologies and Multilingual Termontology Bases (MTB) are two knowledge artifacts with different characteristics and different purposes. Ontologies are used to formally capture a shared view of the world to solve particular interoperability and reasoning tasks. MTBs are general, contain fewer types of relations, and their purpose is to relate several term labels within and across different languages to categories. For regions in which the multilingual aspect is vital, not only does one need an ontology for interoperability, but the concepts in that ontology need to be comprehensible for everyone whose native tongue is one of the principal languages of that region. Multilinguality also provides a powerful mechanism to perform ontology mapping, content annotation, multilingual querying, etc. We intend to meet these challenges by linking the methods for constructing ontologies and MTBs, creating a virtuous cycle. In this paper, we present our method and tool for ontology and MTB co-evolution.

Christophe Debruyne, Cristian Vasquez, Koen Kerremans, Andrés Domínguez Burgos

F-SAMS: Reliably Identifying Attributes and Their Identity Providers in a Federation

We describe the Federation Semantic Attribute Mapping System (F-SAMS), a web services based system that automatically collects, in a trustworthy manner, the semantic mappings of Identity Provider (IdP) assigned attributes into a federation agreed set of standard attributes. The collected knowledge may be used by federation service providers (SPs) to support the dynamic management of IdPs and their assigned attributes.

David W. Chadwick, Mark Hibbert

Assessing the User Satisfaction with an Ontology Engineering Tool Based on Social Processes

This study discusses one of the three measures defined within usability testing, namely user satisfaction, when evaluating an ontology engineering tool based on social processes. The motivation for our focus lies in the fact that, being driven by communities through social interactions, the ontology engineering process depends on what the user does, sees and feels when using the system. The evaluation criteria proposed here are therefore developed by looking at the people involved, the processes and their outcomes, mostly taking into account the user experience, in an approach that goes beyond usability. The paper identifies the problems users encounter when using the system, both at a technical and a psychometric level. A set of recommendations is proposed in order to overcome these problems and to improve the user experience with the system.

Ioana Ciuciu, Christophe Debruyne

Ontology Supported Policy Modeling in Opinion Mining Process

In the e-Society, the spread of services offered by the Social Web has changed the way citizens, policy-makers, governance bodies and civil society actors communicate and cooperate. One of the main goals of policy-makers is to motivate citizens to participate in policy-making processes. UbiPOL (Ubiquitous Participation Platform for Policy-making, ICT-2009.7.3 ICT for Governance and Policy Modelling, 2009-2011) aimed to develop a ubiquitous solution that emphasizes citizens’ participation in policy-making processes (PMPs) regardless of their current location and time. The ontology-based opinion mining component of the UbiPOL system has a crucial role in citizens’ commitment, because it empowers them to contribute to policy making. This paper presents the ontology-based semi-automatic approach and tool for sentiment analysis in the UbiPOL system, which includes lexicon extraction from a large corpus of documents. Aspect-based opinion summarization of user reviews and its combination with domain ontology development are discussed as well.

Mus’ab Husaini, Andrea Ko, Dilek Tapucu, Yücel Saygın

Asking the Right Question: How to Transform Multilingual Unstructured Data to Query Semantic Databases

Ontology engineers have long tried to develop mechanisms to automatically transform natural language statements into queries that knowledge systems can deal with. This has been an enormous challenge, as natural languages are highly ambiguous and contexts for disambiguating are seldom identifiable through simple linguistic patterns. To circumvent these difficulties, developers of knowledge bases have often opted for the use of a restricted vocabulary and syntax. Normal users, nevertheless, prefer to express themselves in their own language. Special languages or schemas tend to reflect one language – the developer’s – and make extensibility more difficult. Multilingual access can also be more difficult to handle in that way. In this article we present strategies for transforming queries in natural languages into language-neutral representations that can be more easily transformed into semantic queries. We describe a tool that combines a multilingual database and natural language processing modules with a semantic database in order to transform queries in Dutch, French and English into queries from which ambiguity at the syntactic and semantic levels has been reduced. We focus on certain aspects of natural language such as negation and collocation preferences to deal with semantics.

Andrés Domínguez Burgos, Koen Kerremans, Rita Temmerman

Semantic Policy-Based Data Management for Energy Efficient Smart Buildings

We describe how semantics can be applied to smart buildings, with the goal of making them more energy efficient. Having designed and implemented a semantically enabled smart building system, we discuss and evaluate the typical data management challenges connected with the implementation, extension and (re-)use of such systems when employing them in real buildings. The results demonstrate a clear benefit from semantic technologies for integration, efficient rule application, data processing and reuse purposes, as well as for alignment with external data such as tariffs, weather data, statistical data, and data from other similar smart home systems. We outline the typical data management operations needed in real-life smart building system deployments, and discuss their implementation aspects.
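A minimal sketch of the rule-application idea is given below, assuming the rdflib library and an invented building namespace; it is illustrative only and does not reflect the actual system described in the paper.

```python
# Illustrative sketch (invented namespace and property names): RDF sensor
# data queried with SPARQL to find rooms that are lit but unoccupied,
# i.e. candidates for an energy-saving action.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/building#")   # hypothetical vocabulary
g = Graph()
g.add((EX.room101, RDF.type, EX.Room))
g.add((EX.room101, EX.lightsOn, Literal(True)))
g.add((EX.room101, EX.occupied, Literal(False)))

query = """
PREFIX ex: <http://example.org/building#>
SELECT ?room WHERE {
  ?room a ex:Room ;
        ex:lightsOn true ;
        ex:occupied false .
}
"""
for row in g.query(query):
    print("switch off lights in", row.room)
```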

Vikash Kumar, Anna Fensel, Goran Lazendic, Ulrich Lehner

Towards Using OWL Integrity Constraints in Ontology Engineering

In the GOSPL ontology engineering methodology, integrity constraints are used to guide communities in constraining their domain knowledge. This paper presents our investigation of OWL integrity constraints and their usage in ontology engineering.

Trung-Kien Tran, Christophe Debruyne

Table4OLD: A Tool of Managing Rules of Open Linked Data of Culture Event and Public Transport in Brussels

This paper provides a brief description of an online tool and web service called Table4OLD (decision table for Open Linked Data, culturebrussels.appspot.com), with which we manage decision rules defined on top of domain ontologies. These decision rules are presented in the form of (semantic) decision tables. In the demonstration, we use a use case in the field of culture events and public transport in Brussels. We intend to show how easily a semantic decision table can be used as a user interface for non-technical people. At the same time, it also gives enough technical transparency and modification possibilities to technicians and amateurs.
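The following toy decision table, with invented rule content, illustrates why such tables are readable for non-technical users while remaining machine-processable; it is not the semantic decision table formalism used by Table4OLD itself.

```python
# Illustrative only: a decision table as plain data mapping conditions to
# an action. The rule content is invented for the Brussels event/transport
# setting and does not come from the tool.
decision_table = [
    # (event_ends_after_midnight, venue_near_metro) -> advice
    ((True,  True),  "take the last metro"),
    ((True,  False), "book a night bus or taxi"),
    ((False, True),  "take the metro"),
    ((False, False), "take a regular bus"),
]

def advise(ends_after_midnight, near_metro):
    for condition, action in decision_table:
        if condition == (ends_after_midnight, near_metro):
            return action

print(advise(True, False))   # book a night bus or taxi
```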

Yan Tang Demey

Fact-Based Web Service Ontologies

The service-oriented architecture paradigm has gained attention in the past years, because it promised to lay the foundation for agility, in the sense that it would enable companies to deliver new and more flexible business processes to improve customer satisfaction [1, 2, 3]. In the service-oriented architecture (SOA) paradigm, a service requesting organization (SRO) basically outsources one or more organizational activities or even complete business processes to one or more service delivering organizations (SDOs). Traditionally, the SRO outsources a given business service to a ‘third-party’ SDO for a relatively long period of time (3 months, a year). In an agile environment, the reconfigurable resources might face a life-span of a few days or even a few hours; in principle, reconfiguration of business services can take place on a run-time time-scale, in the sense that for each new transaction a possibly different SDO must be configured into the value chain. The application of the service-oriented paradigm, therefore, allows the dynamic composition of business functionality by using the world-wide web [3, 4].

Peter Bollen

Combining Image Similarity Metrics for Semantic Image Annotation

This paper describes automated image annotation as an image retrieval problem, in which the distance metric used to express similarity among images is learnt from available distance metrics on several image descriptors. Rather than describing the problem as an optimization problem, we study it as a regression problem. On a limited dataset of images of buildings taken in the city center of Brussels, we illustrate the superior performance of the combined distance metrics over any of the considered individual distance metrics in automated image annotation.
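The regression idea can be illustrated with a small least-squares sketch; the descriptor distances and target values below are synthetic numbers for illustration, not the paper's data.

```python
# Sketch of combining several per-descriptor distances between image pairs
# into one learnt metric via least-squares regression (synthetic data).
import numpy as np

# rows: image pairs; columns: distances from three descriptors
D = np.array([[0.2, 0.9, 0.4],
              [0.8, 0.1, 0.5],
              [0.3, 0.7, 0.6],
              [0.9, 0.2, 0.1]])
target = np.array([0.5, 0.6, 0.5, 0.4])   # desired combined distance

weights, *_ = np.linalg.lstsq(D, target, rcond=None)
combined = D @ weights                     # learnt combined metric
print(weights, combined)
```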

Bart Jansen, Tran Duc Toan, Frederik Temmermans

Ranking and Clustering Techniques to Support an Efficient E-Democracy

We focus on ranking and data mining techniques to empower e-Democracy and allow the opinion of ordinary people to be considered in the design of electoral campaigns. We illustrate the quality of our approach on Venezuelan historical electoral data; ranking results are compared to ground truths produced by an independent study. Our evaluation suggests that the proposed techniques are able to identify up to 85% of the golden results by just analyzing 35% of the whole data.

Marlene Goncalves, Maria-Esther Vidal, Francisco Castro, Luis Vidal, Maribel Acosta

Demo Paper: A Tool for Hybrid Ontology Engineering

We demonstrate a collaborative knowledge management platform in which communities representing autonomously developed information systems build ontologies to achieve semantic interoperability between those systems. The tool is called GOSPL, which stands for Grounding Ontologies with Social Processes and natural Language, and supports the method bearing the same name. Ontologies in GOSPL are hybrid, meaning that concepts are described both informally in natural language and formally. Agreements on these two levels are made simultaneously, and the social interaction between and across communities drives the ontology evolution process.

Christophe Debruyne

Get Knowledge from the Knowledge

The Information Society is on its way, and not only by or through the Broadcast Industry! Publishing rich media means moving from a data management system controlled by humans to a collaborative production process where machines manage migration, exchange and archiving. It is therefore necessary to manage the data, the applications for data representation and especially the knowledge base that provides the link between the data and its meaning. Worldwide, there are standardized languages (W3C) which provide a scientific basis for this important technological move!

Semantic technologies are made to ensure the enhancement of data, to manage the sustainability of the link between data and information (metadata), and to allow the exchange of rich models between systems!

Steny Solitude

remix: A Semantic Mashup Application

With today’s public data sets containing billions of data items, more and more companies are looking to integrate external data with their traditional enterprise data to improve business intelligence analysis. These distributed data sources, however, exhibit heterogeneous data formats and terminologies and may require helping the user merge data coming from heterogeneous sources.

remix is a Business Intelligence (BI) solution that offers business users a productive environment to easily create the highly formatted reports they would ultimately like to see. Via rich visual context-aware interactions, users can quickly combine shared data sets and reuse report parts. This enhanced collaboration, combined with the provision of a multi-source semantic layer, gives users the power to make more effective and informed decisions on virtually any relevant data source or BI resource, wherever they are.

Magali Seguran, Aline Senart, David Trastour

Workshop on Fact-Oriented Modeling (ORM) 2012

ORM 2012 PC Co-chairs Message

Following successful workshops in Cyprus (2005), France (2006), Portugal (2007), Mexico (2008), Portugal (2009), and Greece (2010 and 2011), this is the eighth fact-oriented modeling workshop run in conjunction with the OTM conferences. Fact-oriented modeling is a conceptual approach for modeling and querying the semantics of business domains in terms of the underlying facts of interest, where all facts and rules may be verbalized in language readily understandable by users in those domains.

Terry Halpin, Herman Balsters

Applying Some Fact Oriented Modelling Principles to Business Process Modelling

In the context of a business process modelling task within a government department, an adapted version of the first two steps of fact oriented modelling has been proposed as an alternative strategy in the initial stage of business process knowledge elicitation activities. As expertise and knowledge on organisational processes and procedures are in many cases implicit and embodied by support staff – rather than by highly skilled knowledge workers – it is extremely important to adopt a more accessible method to facilitate the elicitation and validation steps. This paper presents how a small-scale experiment has been set up, its results and lessons learnt. Even if a thorough evaluation was out of scope, the experiment sufficiently demonstrated the strength of the analysis by natural language as included in the fact oriented modelling methodology.

Peter Spyns

The Interplay of Mandatory Role and Set-Comparison Constraints

In this paper we will focus on the interplay of mandatory role and set-comparison (equality-, subset- and exclusion-) constraints in fact based modeling. We will present an algorithm that can be used to derive mandatory role constraints in combination with non-implied set-comparison constraints as a result of the acceptance or rejection of real-life user examples by the domain expert.

Peter Bollen

CCL: A Lightweight ORM Embedding in Clean

Agile software development advocates a rapid iterative process where working systems are delivered at each iteration. For information systems, this drive to produce something working soon, makes it tempting to skip conceptual domain modeling. The long term benefits of developing an explicit conceptual model are traded for the short term benefit of reduced overhead. A possible way to reconcile conceptual modeling with a code-centric agile process is by embedding it in a programming language. We investigate this approach with CCL, a compact textual notation for embedding Object-Role Models in the functional language Clean. CCL enables specification of Clean types as derivatives of conceptual types. Together with its compact notation, this means that defining data types with CCL as intermediary requires no more programming effort than defining data types directly. Moreover, because embedded ORM is still ORM, mappings to other ORM representations remain possible at any time.

Bas Lijnse, Patrick van Bommel, Rinus Plasmeijer

Formalization of ORM Revisited

Fact-oriented modeling approaches such as Object-Role Modeling (ORM) and Natural Language Information Analysis Method (NIAM) enable conceptual information models to be expressed using graphical diagrams that may be assigned formal semantics by mapping them onto sets of logical formulae. Various formalizations for such mappings exist. This paper extends such previous work by providing a new approach to formalizing second generation ORM (ORM 2). We show that the metalevel association between semantic value type and data type must be a mapping relationship rather than a subtyping relationship, and we axiomatize a special representation relationship to support this mapping at the instance level. Our new formalization includes coverage of preferred reference schemes and additional constraints introduced in ORM 2. Other issues examined briefly include the use of finite model theory, sorted logic, and practical choices for implementing certain kinds of logical formulae as constraints or derivation rules.
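As a hedged illustration of what such a mapping onto logical formulae can look like (an assumed example, not the paper's axiomatization), a binary fact type "Person has Name" with a mandatory role and an internal uniqueness constraint on the first role might be rendered as:

```latex
% Illustrative formulae only (assumed example, not the paper's axiomatization)
\forall x\,\bigl(\mathrm{Person}(x) \rightarrow \exists y\,\mathrm{hasName}(x,y)\bigr)
\quad\text{(mandatory role)}
\forall x\,\forall y\,\forall z\,\bigl(\mathrm{hasName}(x,y)\wedge\mathrm{hasName}(x,z)\rightarrow y=z\bigr)
\quad\text{(internal uniqueness)}
```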

Terry Halpin

ORM Logic-Based English (OLE) and the ORM ReDesigner Tool: Fact-Based Reengineering and Migration of Relational Databases

The problem of database reengineering stems from (legacy) databases that are hard to understand (incorrect-, incomplete- or missing semantics) or that perform inefficiently. Reengineering is often cumbersome due to lack of semantics of the original database, and often the data migration target is also unclear. This paper addresses those two problems. We shall show how fact-based modeling, in particular ORM and its representation in (sugared) Sorted Logic, can help in reengineering (relational) databases. We reconstruct the semantics of the source database by offering a set of natural-language sentences capturing the conceptual structure and constraints of the source. These sentences are written in a structured natural language format, coined as OLE: ORM Logic-based English. OLE is then used to define the mappings from the original source to a reengineered and restructured target database. We shall also discuss the ORM ReDesigner: a semi-automatic tool, based on OLE and NORMA, available as a research prototype, used for reengineering and migrating relational databases.

Herman Balsters

ORM2: Formalisation and Encoding in OWL2

This paper introduces ORM2plus, a new linear syntax and complete semantics expressed in first-order logic for ORM2, which can be shown to correctly embed the original proposal. A provably correct encoding of the core fragment ORM2zero in the $\mathcal{ALCQI}$ description logic (a fragment of OWL2 with qualified cardinality restrictions and inverse roles) is presented. The complexity of reasoning on ORM2 conceptual schemas, and the ExpTime-membership of reasoning on ORM2zero, are also shown. On the basis of these results, a systematic critique of alternative approaches to the formalisation of ORM2 in (description) logics published so far is provided. A prototype has been implemented providing a backend for the automated support of implicit constraint deduction, schema consistency checks, and user-defined constraint entailment for ORM2zero conceptual schemas, along with their translation into $\mathcal{ALCQI}$ knowledge bases.
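For orientation, the two OWL2 features singled out above, qualified cardinality restrictions and inverse roles, are the ones typically used to encode uniqueness constraints; a hedged example of such axioms (an illustrative assumption, not taken from the paper) is:

```latex
% Assumed example, not from the paper: ALCQI axioms using a qualified
% number restriction and an inverse role to encode uniqueness in both directions
\mathrm{Person} \sqsubseteq {\le}1\,\mathit{hasSSN}.\mathrm{SSN}
\qquad
\mathrm{SSN} \sqsubseteq {\le}1\,\mathit{hasSSN}^{-}.\mathrm{Person}
```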

Enrico Franconi, Alessandro Mosca, Dmitry Solomakhin

Fact-Based Specification of a Data Modeling Kernel of the UML Superstructure

Data schemas are an important part of the software design process. The Unified Modeling Language (UML) is the lingua franca in current software engineering practice, and UML class diagrams are used for data modeling within software-engineering projects. Fact-based modeling (FBM) has many advantages over UML for data modeling. Database engineers who have specified their data schemas in FBM are often faced with difficulties in communicating these schemas to software engineers using UML. We wish to tackle this communication problem by eventually offering a translation from FBM specifications to UML class diagrams. Such a translation requires a formal meta-model description of both FBM and a data-modeling kernel of UML. This paper describes an FBM-based specification of a data-modeling kernel of the UML Superstructure. This kernel will be fact-based, with the added advantage of enabling validation of the FBM specification.

Joost Doesburg, Herman Balsters

Exploring the Benefits of Collaboration between Business Process Models and Fact Based Modeling - ORM

Companies are continually striving to reduce issues leading to failed projects and the costs associated with them. The use and collaboration of business process mapping and ORM data modeling can help improve the success rate of a project. This project incorporated lean business practice deliverables such as business process maps along with ORM data models. The output from the two sessions revealed a gap in knowledge around business rules which led to the creation of decision tables. The knowledge gained from the three components (process maps, ORM models and business rules) had numerous benefits including cross-validation of models, increased user involvement, definition of user acceptance tests, and ease in eliciting requirements. The collaboration of business process and data modeling drives accuracy and efficiency in requirements and can reduce or eliminate many of the issues identified as causes of failed projects.

Necito dela Cruz, Connie Holker, Miguel Tello

Enhanced Verbalization of ORM Models

Fact-oriented modeling approaches such as Object-Role Modeling (ORM) validate their models with domain experts by verbalizing the models in natural language, and by populating the relevant fact types with concrete examples. This paper extends previous work on verbalization of ORM models in a number of ways. Firstly, it considers some ways to better ensure that generated verbalizations are unambiguous, including occasional use of lengthier verbalizations that are tied more closely to the underlying logical form. Secondly, it provides improved verbalization patterns for common types of ORM constraints, such as uniqueness and mandatory role constraints. Thirdly, it provides an algorithm for verbalizing external uniqueness and frequency constraints over roles projected from join paths of arbitrary complexity. The paper also includes some discussion of how such verbalization enhancements were recently implemented in the Natural ORM Architect (NORMA) tool.
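The sketch below shows, with invented templates rather than NORMA's actual verbalization patterns, the general shape of such verbalization logic.

```python
# Hypothetical verbalization templates (not NORMA's actual patterns):
# a tiny illustration of turning constraint metadata into readable text.

def verbalize_mandatory(subject, predicate, obj):
    return f"Each {subject} {predicate} some {obj}."

def verbalize_uniqueness(subject, predicate, obj):
    return f"Each {subject} {predicate} at most one {obj}."

print(verbalize_mandatory("Person", "was born in", "Country"))
print(verbalize_uniqueness("Person", "was born in", "Country"))
```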

Matthew Curland, Terry Halpin

A Validatable Legacy Database Migration Using ORM

This paper describes a method used in a real-life case of a legacy database migration. The difficulty of the case lies in the fact that the legacy application to be replaced has to remain fully available during the migration process while at the same time data from the old system is to be integrated within the new system. The target database schema was fixed beforehand, hence complicating and limiting our choices in constructing a possible target schema. The conceptual approach of the Object-Role Modeling (ORM) method helped us to better understand the semantics of the source and target system and enabled us to abstract from implementation choices in both the source and the target schemas. We discuss how our method could help in executing other legacy data migration projects.

Tjeerd H. Moes, Jan Pieter Wijbenga, Herman Balsters, George B. Huitema

Workshop on Ontology Content (OnToContent) 2012

OnToContent 2012 PC Co-chairs Message

Semantics play an increasingly crucial role in large and complex networked information systems. Ontologies and semantic data all represent information sources valuable to end users and are fundamental resources that support a variety of applications in several domains, e.g., data integration, document management, information retrieval, web engineering, and so on. For this reason OnToContent 2012 focuses on issues related to the creation and evaluation of content for ontologies and semantic data.

Mustafa Jarrar, Amanda Hicks, Matteo Palmonari

Towards an Ontology of Document Acts: Introducing a Document Act Template for Healthcare

Background: In current information systems the pervasive role of documents and their ability to create new entities are often overlooked. Regularly, documents are stored as mere files without analysis of their deontic powers. In order to make intelligent management of documents a real possibility, we propose an ontological representation of document acts. Objectives: This article summarizes first steps towards a sound ontological representation of documents in healthcare organizations by providing a template structure for document acts. Methods: We rely on the theory of document acts to develop such a template for defining pragmatic aspects of documents and to provide examples of its application in healthcare procedures. Furthermore, we show how this research contributes to the development of an OWL representation of document acts. Results: We provide a template for document acts and show its usage in clinical guidelines. Conclusion: While the definition of pragmatic aspects contributes to a clearer representation of document acts in the healthcare domain, further work needs to be carried out regarding the representation of document acts in ontologies.

Mauricio B. Almeida, Laura Slaughter, Mathias Brochhausen

An Architecture for Data and Knowledge Acquisition for the Semantic Web: The AGROVOC Use Case

We are surrounded by ever growing volumes of unstructured and weakly structured information, and for a human being, domain expert or not, it is nearly impossible to read, understand and categorize such information in a fair amount of time. Moreover, different user categories have different expectations: final users need easy-to-use tools and services for specific tasks, knowledge engineers require robust tools for knowledge acquisition, knowledge categorization and semantic resource development, while semantic application developers demand flexible frameworks for fast, easy and standardized development of complex applications. This work represents an experience report on the use of the CODA framework for rapid prototyping and deployment of knowledge acquisition systems for RDF. The system integrates independent NLP tools and custom libraries complying with UIMA standards. For our experiment, a document set has been processed to populate the AGROVOC thesaurus with two new relationships.

Maria Teresa Pazienza, Armando Stellato, Alexandra Gabriela Tudorache, Andrea Turbati, Flaminia Vagnoni

Ontology Learning from Open Linked Data and Web Snippets

The Web of Open Linked Data (OLD) is a recommended best practice for exposing, sharing, and connecting pieces of data, information, and knowledge on the Semantic Web using URIs and RDF. Such data can be used as a training source for ontology learning from web textual content in order to bridge the gap between structured data and the Web. In this paper, we propose a new method of ontology learning that consists in learning linguistic patterns related to OLD entity attributes from web snippets. Our insight is to use the Linked Data as a skeleton for ontology construction and for pattern learning from texts. The contribution resides in learning patterns, from Web content, for relations existing in the Web of Linked Data. These patterns are used to populate the ontology core schema with new entities and attribute values. Experiments have shown promising results in terms of precision.

Ilaria Tiddi, Nesrine Ben Mustapha, Yves Vanrompay, Marie-Aude Aufaure

An Ontology for the Identification of the Most Appropriate Risk Management Methodology

Methods and technologies for risk management have been developed and consolidated over time in different sectors and to meet various needs. Recently, the ISO published a set of documents for risk assessment. These guidelines are not specific to a particular sector, but can be adopted by any public or private organization and applied to any type of risk. This paper presents research that aims to build a knowledge base for the development of a tool supporting the identification of the most appropriate risk management methodology according to the specific characteristics of an organization.

Silvia Ansaldi, Marina Monti, Patrizia Agnello, Franca Giannini

SAM: A Tool for the Semi-Automatic Mapping and Enrichment of Ontologies

Ontologies are fundamental tools used for different purposes and in different ways across areas and communities. To guarantee the right level of quality, the most widely used ontologies are built manually. However, developing and maintaining them turns out to be extremely time-consuming. For this reason, there are approaches aiming at their automatic construction, where ontologies are incrementally extended by extracting and integrating knowledge from existing sources. However, these approaches tend to reach an accuracy that, depending on the application they need to serve, cannot always be considered satisfactory. Therefore, when higher accuracy is necessary, manual or semi-automatic approaches are still preferable. In this paper we present a technique and a corresponding tool, called SAM (semi-automatic mapper), for the semi-automatic enrichment of an ontology through the mapping of an external source to the target ontology. As shown by our evaluation, the tool saves around 50% of the time required by purely manual approaches.

Vincenzo Maltese, Bayzid Ashik Hossain
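
The enrichment workflow sketched in the SAM abstract, automatically suggesting candidate mappings from an external source to a target ontology and leaving the final decision to a human, can be pictured with a toy label-similarity matcher. The Python sketch below is an assumption-laden illustration (the function names, similarity heuristic and threshold are invented here), not SAM's actual interface.

    from difflib import SequenceMatcher

    def suggest_mappings(source_terms, target_terms, threshold=0.6):
        """Rank candidate (source, target) pairs by label similarity."""
        candidates = []
        for s in source_terms:
            for t in target_terms:
                score = SequenceMatcher(None, s.lower(), t.lower()).ratio()
                if score >= threshold:
                    candidates.append((s, t, round(score, 2)))
        return sorted(candidates, key=lambda c: c[2], reverse=True)

    # The "semi-automatic" step: a human curator confirms or rejects each suggestion.
    source = ["water body", "stream", "lake shore"]
    target = ["Lake", "River", "Shoreline", "WaterBody"]
    for s, t, score in suggest_mappings(source, target):
        print(f"suggest {s!r} -> {t!r}  (similarity {score})")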

Mining Digital Library Evaluation Patterns Using a Domain Ontology

Scientific literature is vast, and researchers therefore need knowledge organization systems to index, semantically annotate and correlate their bibliographic sources. Additionally, they need methods and tools to discover scientific trends, commonly accepted practices, and areas for further investigation. This paper proposes a clustering-based data mining process to identify research patterns in the digital library evaluation domain. The papers published in the proceedings of a well-known international conference in the decade 2001-2010 were semantically annotated using the Digital Library Evaluation Ontology (DiLEO). The generated annotations were clustered to portray common evaluation practices. The findings highlight the expressive nature of DiLEO and underline the potential of clustering in profiling research activities.

Angelos Mitrelis, Leonidas Papachristopoulos, Michalis Sfakakis, Giannis Tsakonas, Christos Papatheodorou

A Negotiation Approach to Support Conceptual Agreements for Ontology Content

Conceptualisation processes are pervasive in most technical and professional activities, but are seldom addressed explicitly due to the lack of theoretical and practical methods and tools. They are also not a popular research topic in knowledge representation or its sub-areas such as ontology engineering. The approach described in this paper is a contribution to the development of methods and tools for collaborative conceptualisation processes. The particularly challenging problem of conceptual negotiation is tackled here through a combination of the ColBlend method and an argumentation-based strategy, creating an innovative method for conceptual negotiation, the argile method. This method was implemented in the ConceptME platform as an advanced negotiation mechanism.

Carla Pereira, Cristóvão Sousa, António Lucas Soares

Workshop on Semantics and Decision Making (SeDeS) 2012

SeDeS 2012 PC Co-chairs Message

Decision support has gradually evolved since the 1960s, both in theoretical decision support studies and in practical tools that assist decision makers. Ontology Engineering (OE) brings new synergy to decision support. On the one hand, it is changing the decision support landscape, as it enables new breeds of decision models and decision support systems (DSS) to be developed. On the other hand, DSS can bring theories and applications that support OE, such as ontology integration and ontology matching.

Yan Tang Demey, Jan Vanthienen

Nondeterministic Decision Rules in Classification Process

In this paper, we discuss nondeterministic rules in decision tables, called truncated nondeterministic rules. These rules have several decisions on the right-hand side. We show that truncated nondeterministic rules can be used to improve classification quality.

We propose a greedy algorithm of polynomial time complexity to construct these rules and use this type of rule to build rule-based classifiers. These classifiers use not only nondeterministic rules but also minimal rules in the sense of rough sets. The rule-based classifiers were tested on a group of decision tables from the UCI Machine Learning Repository. The reported experimental results show that the proposed classifiers based on nondeterministic rules improve classification quality, but require tuning some of their parameters to the analyzed data.

Piotr Paszek, Barbara Marszał-Paszek
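
To make the idea of nondeterministic rules concrete: a rule's right-hand side may list several admissible decisions instead of one, and a classifier can let matching rules vote. The Python fragment below is a minimal, hypothetical illustration of such a voting scheme; it is not the greedy construction algorithm or the rough-set classifiers evaluated in the paper.

    from collections import Counter

    # Each rule: (conditions, decisions); conditions map attribute -> required value.
    RULES = [
        ({"outlook": "sunny"},                {"no", "maybe"}),  # nondeterministic right-hand side
        ({"outlook": "rain", "wind": "weak"}, {"yes"}),          # deterministic right-hand side
        ({"humidity": "high"},                {"no"}),
    ]

    def classify(obj, rules=RULES):
        """Vote over the decision sets of all rules matching the object."""
        votes = Counter()
        for conditions, decisions in rules:
            if all(obj.get(a) == v for a, v in conditions.items()):
                for d in decisions:
                    votes[d] += 1.0 / len(decisions)  # split the vote across the right-hand side
        return votes.most_common(1)[0][0] if votes else None

    print(classify({"outlook": "sunny", "humidity": "high", "wind": "weak"}))  # prints 'no'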

A Novel Context Ontology to Facilitate Interoperation of Semantic Services in Environments with Wearable Devices

The LifeWear - Mobilized Lifestyle with Wearables (LifeWear) project attempts to create Ambient Intelligence (AmI) ecosystems by composing personalized services based on user information, environmental conditions and reasoning outputs. Two of the most important benefits over traditional environments are 1) taking advantage of wearable devices to obtain user information in a non-intrusive way and 2) integrating this information with other intelligent services and environmental sensors. This paper proposes a new ontology, built by integrating user and service information, for representing this information semantically. Using an Enterprise Service Bus, this ontology is integrated into a semantic middleware to provide context-aware, personalized and semantically annotated services, with discovery, composition and orchestration tasks. We show how these services support a real scenario proposed in the LifeWear project.

Gregorio Rubio, Estefanía Serral, Pedro Castillejo, José Fernán Martínez

Ontology Based Method for Supporting Business Process Modeling Decisions

This work suggests a method for machine-assisted support of business process modeling decisions, based on business logic extracted from process repositories using a linguistic analysis of the relationships between constructs of process descriptors. The analysis enables the setup of a descriptor space in which it is possible to analyze business rules and logic. The suggested method aims to assist process designers in decision making during modeling, based on knowledge encapsulated within existing business process repositories. The method is demonstrated using a real-life process repository from the higher-education industry.

Avi Wasser, Maya Lincoln

Sensor Information Representation for the Internet of Things

The Internet of Things integrates wireless sensor networks into the Internet and paves the way for people to live seamlessly in both the physical and cyber worlds. In an Internet of Things application, various types of sensors communicate and exchange information to achieve user tasks across heterogeneous wireless sensor networks. Such an application needs an information representation for uniformly developing and managing sensor data. The information representation thus accelerates data integration and increases the interoperability of Internet of Things applications. In this paper, we present an overview of existing technical solutions for representing sensor information.

Jiehan Zhou, Teemu Leppänen, Meirong Liu, Erkki Harjula, Timo Ojala, Mika Ylianttila, Chen Yu

Modeling Decision Structures and Dependencies

Decisions are often not adequately modeled. They are hardcoded in process models or other representations, but not always modeled in a systematic way. Because of this hardcoding or inclusion in other models, organizations often lack the necessary flexibility, maintainability and traceability in their business operations.

The aim of this paper is to propose a decision structuring methodology, named Decision Dependency Design (D3). This methodology addresses the above problems by structuring decisions and indicating underlying dependencies. From the decision structure, different execution mechanisms could be designed.

Feng Wu, Laura Priscilla, Mingji Gao, Filip Caron, Willem De Roover, Jan Vanthienen

Knowledge Mining Approach for Optimization of Inference Processes in Rule Knowledge Bases

The main aim of the article is to present modifications of inference algorithms based on information extracted from large sets of rules. The concepts of cluster analysis and decision units are used for discovering knowledge in such data.

Agnieszka Nowak-Brzezińska, Roman Simiński

On Semantics in Onto-DIY

This paper illustrates how semantics is modeled and used in the flexible and idea-inspiring ontology-based Do-It-Yourself architecture (Onto-DIY). The semantics consists of three parts: 1) semantics from the domain ontologies, which are used as semantically rich knowledge for describing Internet of Things (IoT) components, 2) semantics stored in semantic decision tables, which contain decision rules on top of the domain ontologies, and 3) semantics from user-centric services, which define the software composition for deployment.

Yan Tang Demey, Zhenzhen Zhao

Workshop on Socially Intelligent Computing (SINCOM) 2012

SINCOM 2012 PC Co-chairs Message

Welcome to the 1st International Workshop on Socially Intelligent Computing (SINCOM 2012). SINCOM focuses on the study, design, development and evaluation of emergent socially intelligent computing systems. The workshop addresses all aspects of socially intelligent computing, which span a variety of issues ranging from the modeling of social behavior in social computational systems, the architectures and design issues of such systems, social media analytics and monitoring, stream processing of social data and social activities, semantic web and Linked Data approaches, and the discovery, collection, and extraction of social networks, to information retrieval and machine learning methods for socially intelligent computing.

Gregoris Mentzas, Wolfgang Prinz

A Simulation Framework for Socially Enhanced Applications

The emergence of complex computational problems which cannot be solved solely by software and require human contribution has brought the need to design systems that efficiently manage a mix of software-based and human-based computational resources. Simulations are very appropriate for research on these systems, because they enable testing different scenarios while avoiding the cost and time limitations of research in real systems. In this paper, we present a framework that enables the simulation of socially enhanced applications utilizing both software-based and human-based resources. Our framework addresses challenges in human-software environments, such as identifying relevant comparable metrics for mixed resources; mixed-resource selection, composition and scheduling algorithms; and performance monitoring and analysis. We show the framework's usefulness and usability through a use case.

Mirela Riveni, Hong-Linh Truong, Schahram Dustdar

Analyzing Tie-Strength across Different Media

Human interactions are becoming more and more important in computer systems. These interactions can be classified according to predefined rules, based on the assumption that relations between humans differ greatly. The notion of tie-strength is used throughout the social sciences literature to denote this classification. The present paper investigates how the tie-strength between two persons can be computed automatically from structural data from different sources, e.g. email and shared workspaces. This data can provide a virtual copy of a user's ego-centric network and can therefore be utilized for socially intelligent computing.

Nils Jeners, Petru Nicolaescu, Wolfgang Prinz
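
As a rough illustration of computing tie-strength from structural data across media, one could weight interaction counts per medium and squash the result into a bounded score. The weights, the squashing function and the media names below are assumptions made for this sketch, not the measures proposed in the paper.

    def tie_strength(interactions, weights=None):
        """interactions: dict medium -> number of interactions between two persons."""
        weights = weights or {"email": 1.0, "shared_workspace": 0.5, "chat": 0.8}
        raw = sum(weights.get(medium, 0.0) * count for medium, count in interactions.items())
        return raw / (1.0 + raw)  # squash the weighted count into [0, 1)

    print(tie_strength({"email": 24, "shared_workspace": 10, "chat": 3}))  # strong tie, close to 1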

Internet of Future Enabling Social Network of Things

Advances in the areas of embedded systems, computing, and networking are leading to an infrastructure composed of millions of heterogeneous devices and services. These devices will not simply convey information but process it in transit, connect peer to peer, and form advanced collaborations. In this "Internet of Things" infrastructure, interoperability of solutions must be ensured across various platforms, at the communication level as well as at the service level [1]. In this paper we investigate the potential of combining social and technical networks to collaboratively provide services to both human users and technical systems, supported by the Internet of Future (IoF). In the Internet of Things (IoT), things talk and exchange information to realize the vision of future pervasive computing environments. The common physical and social space emerges from the objects' ability to interconnect, not only amongst themselves, but also with the human beings living and working in them. This paper presents an architecture and its application.

Moacyr Martucci Junior, Erica de Paula Dutra

SocIoS: A Social Media Application Ontology

The value that the social web is adding is undeniable. However, the social web comes at a cost: the so-called "closed" web. Slowly but steadily, a growing portion of internet activity is confined within the spaces of social networking sites (SNSs) or platforms that encompass social networking capabilities by default. Furthermore, due to their competitive stance towards one another, conceptually common content and functionality are accessed through different mechanisms in SNSs. This work deals with the issue of semantic equivalence between the prevalent notions used in SNSs in order to devise a common object model that enables the integration of social platform APIs. The result is a proposed ontology for the implementation of cross-platform social applications. Two particular applications are considered as a guide for this reference model.

Konstantinos Tserpes, George Papadakis, Magdalini Kardara, Athanasios Papaoikonomou, Fotis Aisopos, Emmanuel Sardis, Theodora Varvarigou

A Social P2P Approach for Personal Knowledge Management in the Cloud

The massive access to the Cloud poses new challenges for cloud citizens to use services while overcoming the risks of a new form of digital divide. What we envision is a socially orchestrated cloud, i.e. an ecosystem where users put their experience and know-how (their personal knowledge) at the service of the social cloud, and where social networks articulate the assistance to users. With this scenario in mind, this paper introduces a peer-to-peer solution to the problem of sharing personal knowledge for social search. We propose a routing mechanism in a peer network which uses both information about the social relations among peers and their personal knowledge. Finally, we deploy the proposal over Facebook as a support network.

Rebeca P. Díaz Redondo, Ana Fernández Vilas, Jose J. Pazos Arias, Sandra Servia Rodríguez

Workshop on SOcial and MObile COmputing for Collaborative Environments (SOMOCO) 2012

SOMOCO 2012 PC Co-chairs Message

The rapid progress of the Internet as a new platform for social and collaborative interactions and the widespread usage of social online applications in mobile contexts have led the research areas of social and mobile computing to receive increasing interest from academic and research institutions as well as from private and public companies.

Fernando Ferri, Patrizia Grifoni, Arianna D’Ulizia, Maria Chiara Caschera, Irina Kondratova

An SNA-Based Evaluation Framework for Virtual Teams

Organizations are increasingly aware of the underlying forces of social networks and their impact on information and knowledge dissemination within virtual teams. However, assessing these networks is a challenge for team managers who need more complete toolkits in order to master team metrics. Social Network Analysis (SNA) is a descriptive, empirical research method for mapping and measuring relationships and flows between people, groups, organizations and other connected information/knowledge entities. In this article we establish a framework based on SNA to evaluate virtual teams.

Lamia Ben Hiba, Mohammed Abdou Janati Idrissi
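
For readers unfamiliar with SNA metrics, the kind of team measures such a framework can report (who is well connected, who brokers information flows) can be computed with standard tooling. The snippet below uses the networkx library on a made-up virtual team; it illustrates generic SNA metrics, not the specific evaluation framework of the paper.

    import networkx as nx

    team = nx.Graph()
    team.add_edges_from([
        ("ana", "bob"), ("ana", "carl"), ("bob", "carl"),
        ("carl", "dora"), ("dora", "eve"),
    ])

    degree = nx.degree_centrality(team)        # how connected each member is
    between = nx.betweenness_centrality(team)  # who brokers information flows

    for member in team.nodes:
        print(f"{member:5s} degree={degree[member]:.2f} betweenness={between[member]:.2f}")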

Towards Evolutionary Multimodal Interaction

One of the main challenges of Human-Computer Interaction research is to improve the naturalness of the user's interaction process. Two widely investigated directions are currently the adaptivity and the multimodality of interaction. Starting from the adaptivity concept, the paper provides an analysis of methods that make multimodal interaction adaptive with respect to the final users and evolutionary over time. A comparative analysis of the concepts of adaptivity and evolution, as given in the literature, highlights their similarities and differences, and an original definition of evolutionary multimodal interaction is provided. Moreover, artificial intelligence techniques, quantum computing concepts and evolutionary computation applied to multimodal interaction are discussed.

Maria Chiara Caschera, Arianna D’Ulizia, Fernando Ferri, Patrizia Grifoni

Interactive Public Digital Displays: Investigating Its Use in a High School Context

This paper presents a longitudinal user study that investigated the adoption of Bluetooth-based functionalities for a public digital display in a high school. More specifically, the use of Bluetooth device naming was extended beyond social identity representation to serve as a simple interaction mechanism. The mechanism involves recognizing parts of the Bluetooth device name as explicit instructions that trigger the generation of content on an interactive public display. Together with representatives of the teachers' community, the design team defined some social rules concerning usage in order to account for the specificities of the place. In the user study, three fully functional prototypes were deployed in the hall of the high school. The functionalities introduced with the different prototypes were: the visualization of the Bluetooth device names on the display, the possibility to contribute to tag clouds, and the possibility to choose icons from a given set for self-expression. The results suggest that people appropriated some but not all of the functionalities. Implications of our findings for the design of interactive digital displays are pointed out.

Nuno Otero, Rui José, Bruno Silva
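
The device-naming mechanism can be pictured as parsing agreed-upon tokens out of a phone's Bluetooth name and turning them into display instructions. The "tag:"/"icon:" prefixes in the sketch below are a hypothetical convention chosen for illustration; the study's own naming syntax is not reproduced here.

    def parse_device_name(name):
        """Extract display instructions embedded in a Bluetooth device name."""
        instructions = {"tags": [], "icon": None}
        for token in name.split():
            if token.startswith("tag:"):
                instructions["tags"].append(token[len("tag:"):])
            elif token.startswith("icon:"):
                instructions["icon"] = token[len("icon:"):]
        return instructions

    print(parse_device_name("joana tag:music tag:basketball icon:star"))
    # {'tags': ['music', 'basketball'], 'icon': 'star'}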

Evolution of Marketing Strategies: From Internet Marketing to M-Marketing

The paper describes the evolution of marketing strategies from the advent of the Web (Internet Marketing), through the advent of Social Networks (Marketing 2.0), to the rise of Mobile Social Networks (M-marketing). Moreover, the paper analyses the use that Italian people make of mobile devices, and the user perception and acceptance of M-marketing, considering the characteristics that influence them. Finally, a short discussion of the viral marketing trend is given.

Tiziana Guzzo, Alessia D’Andrea, Fernando Ferri, Patrizia Grifoni

Toward a Social Graph Recommendation Algorithm: Do We Trust Our Friends in Movie Recommendations?

Social networks provide users with information about their friends, their activities, and their preferences. In this paper we study the effectiveness of movie recommendations computed from such communicated preferences. We present a set of social movie recommendation algorithms, which we implemented on top of the Facebook social network, and we compare their effectiveness in influencing user decisions. We also study the effect of showing users a justification for the recommendations, in the form of the profile pictures of the friends that caused the recommendation.

We show that social movie recommendations are generally accurate. Furthermore, 80% of the users that are undecided on whether to accept a recommendation are able to reach a decision upon learning of the identities of the users behind the recommendation. However, in 27% of the cases, they decide against watching the recommended movies, showing that revealing identities can have a negative effect on recommendation acceptance.

Ali Adabi, Luca de Alfaro
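
A minimal picture of friend-based recommendation with justifications: score each unseen movie by friends' ratings and keep track of which friends are behind each suggestion. The data, threshold and scoring below are illustrative assumptions, not the algorithms compared in the paper.

    from collections import defaultdict

    friend_ratings = {                    # friend -> {movie: rating on a 1-5 scale}
        "alice": {"Inception": 5, "Up": 3},
        "bob":   {"Inception": 4, "Alien": 5},
        "carol": {"Up": 5},
    }

    def recommend(seen, min_rating=4):
        scores, because = defaultdict(list), defaultdict(list)
        for friend, ratings in friend_ratings.items():
            for movie, rating in ratings.items():
                if movie not in seen and rating >= min_rating:
                    scores[movie].append(rating)
                    because[movie].append(friend)
        ranked = sorted(scores, key=lambda m: sum(scores[m]) / len(scores[m]), reverse=True)
        return [(movie, because[movie]) for movie in ranked]

    # Highest-scored unseen movies first, each with the friends that caused the suggestion.
    print(recommend(seen={"Alien"}))  # [('Up', ['carol']), ('Inception', ['alice', 'bob'])]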

Cooperative Information Systems (CoopIS) 2012 Posters

CoopIS 2012 PC Co-chairs Message

Welcome to the proceedings of CoopIS 2012. It was the 20th conference in the CoopIS series and took place in Rome, Italy, in September 2012. This conference series has established itself as a major international forum for exchanging ideas and results on scientific research for practitioners in fields such as computer supported cooperative work (CSCW), middleware, Internet and Web data management, electronic commerce, business process management, agent technologies, and software architectures, to name a few. In addition, the 2012 edition of CoopIS aimed to highlight the increasing need for data- and knowledge-intensive processes. As in previous years, CoopIS 2012 was again part of a joint event with other conferences, in the context of the OTM ("OnTheMove") federated conferences, covering different aspects of distributed information systems.

Stefanie Rinderle-Ma, Xiaofang Zhou, Peter Dadam

Operational Semantics of Aspects in Business Process Management

Aspect orientation is an important approach to addressing the complexity of cross-cutting concerns in information systems. This approach encapsulates such concerns separately and composes them with the main module when needed. Although different works show how this separation should be performed in process models, their composition is still an open area. In this paper, we demonstrate the semantics of a service which enables this composition. The result can also be used as a blueprint for implementing the service to support aspect orientation in the Business Process Management area.

Amin Jalali, Petia Wohed, Chun Ouyang

Multi-objective Resources Allocation Approaches for Workflow Applications in Cloud Environments

Resource allocation and scheduling has been recognised as an important topic for business process execution. However, despite the proven benefits of using the Cloud to run business processes, users lack guidance for choosing between multiple offerings while taking into account several, often conflicting, objectives. Moreover, when running business processes it is difficult to automate all tasks. In this paper, we propose three complementary approaches for Cloud computing platforms that take these characteristics into account.

Kahina Bessai, Samir Youcef, Ammar Oulamara, Claude Godart, Selmin Nurcan

SeGA: A Mediator for Artifact-Centric Business Processes

Business processes (BPs) can be designed using a variety of modeling languages and executed in different systems. In most BPM applications, the semantics of BPs needed for runtime management is often scattered across BP models, execution engines, and auxiliary stores of workflow systems. The inability to capture such semantics in BP models is the root cause of many BPM challenges. In this paper, an automated tool, SeGA, for wrapping BPs is developed. We demonstrate that SeGA provides a simple yet general framework for runtime querying and monitoring of BP executions across different BP management systems.

Yutian Sun, Wei Xu, Jianwen Su, Jian Yang

An Approach to Recommend Resources for Business Processes

Workflow management is an important business process management technology that links tasks and qualified resources. Research has been carried out to improve workflow resource allocation, which is often performed manually and empirically, either by mining resource allocation rules or by optimizing the resource allocation for tasks to achieve certain goals such as minimal cost or duration. None of these approaches can guarantee a suitable solution for resource allocators because of the dynamic nature of business process executions. In this paper we propose an approach, BNRR (Bayesian Network-based Resource Recommendation), to recommend the most proficient sets of resources for a business process based on event logs, giving allocators the chance to find the most suitable solution. Our approach considers both information about resource dependency and information about resource capability. The approach can be applied to recommend resources either for a whole workflow or for an individual task. It is validated by experiments on real-life data.

Hedong Yang, Lijie Wen, Yingbo Liu, Jianmin Wang
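
To give a flavour of log-based resource recommendation, the toy sketch below counts, in an event log, how often a resource handled a task after a given predecessor resource and turns the counts into ranked suggestions. This conditional-frequency table is only a simplistic stand-in for the Bayesian network described in the abstract; the log format and names are invented.

    from collections import Counter, defaultdict

    # Event log: list of cases, each case an ordered list of (task, resource) pairs.
    log = [
        [("review", "anna"), ("approve", "boris")],
        [("review", "anna"), ("approve", "clara")],
        [("review", "dave"), ("approve", "boris")],
        [("review", "anna"), ("approve", "boris")],
    ]

    counts = defaultdict(Counter)  # (previous resource, task) -> resource frequencies
    for case in log:
        for (_, prev_resource), (task, resource) in zip(case, case[1:]):
            counts[(prev_resource, task)][resource] += 1

    def recommend(prev_resource, task):
        ranked = counts[(prev_resource, task)].most_common()
        total = sum(n for _, n in ranked)
        return [(resource, n / total) for resource, n in ranked]

    print(recommend("anna", "approve"))  # approximately [('boris', 0.67), ('clara', 0.33)]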

Ontologies, DataBases, and Applications of Semantics (ODBASE) 2012 Posters

ODBASE 2012 PC Co-chairs Message

We are happy to present the papers of the 11th International Conference on Ontologies, DataBases, and Applications of Semantics (ODBASE), held in Rome, Italy, on September 11th and 12th, 2012. The ODBASE conference series provides a forum for research on the use of ontologies and data semantics in novel applications, and continues to draw a highly diverse body of researchers and practitioners. ODBASE is part of On the Move to Meaningful Internet Systems (OnTheMove), which co-locates three conferences: ODBASE, DOA-SVI (International Symposium on Secure Virtual Infrastructures), and CoopIS (International Conference on Cooperative Information Systems). Of particular interest in the 2012 edition of ODBASE were the research and practical experience papers that bridge traditional boundaries between disciplines such as databases, networking, mobile systems, artificial intelligence, information retrieval, and computational linguistics. In this edition, we received 52 paper submissions and had a program committee of 82 people, which included researchers and practitioners from diverse research areas. Special arrangements were made during the review process to ensure that each paper was reviewed by members of different research areas. The result of this effort is a selection of high-quality papers: fifteen regular papers (29%), six short papers (12%), and three posters (6%). Their themes included studies and solutions to a number of modern challenges such as search and management of linked data and RDF documents; modeling, management, alignment and storage of ontologies; application of mining techniques; semantics discovery; and data uncertainty management.

Sonia Bergamaschi, Isabel Cruz

Mastro: Ontology-Based Data Access at Work (Extended Abstract)

In this paper we present the current version of Mastro, a system for ontology-based data access (OBDA) developed at Sapienza Università di Roma. Mastro allows users to access external data sources by querying an ontology expressed in a fragment of the W3C Web Ontology Language (OWL). As in data integration [5], mappings are used in OBDA to specify the correspondence between a unified view of the domain (called the global schema in data integration terminology) and the data stored at the sources. The distinguishing feature of the OBDA approach, however, is that the global schema is specified using an ontology language, which typically provides a rich conceptualization of the domain of interest, independently of the source representation.

Giuseppe De Giacomo, Domenico Lembo, Maurizio Lenzerini, Antonella Poggi, Riccardo Rosati, Marco Ruzzi, Domenico Fabio Savo
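
As a generic illustration of the OBDA mapping idea summarized above, a mapping can be thought of as a source query paired with triple templates over the ontology vocabulary. The toy representation below is not Mastro's mapping language; it only shows, under invented names, how rows returned by a source query could be turned into ontology-level assertions.

    mapping = {
        "source_query": "SELECT id, name FROM employees WHERE role = 'researcher'",
        "head": [("emp:{id}", "rdf:type", "onto:Researcher"),
                 ("emp:{id}", "onto:name", '"{name}"')],
    }

    def unfold(mapping, rows):
        """Apply the mapping's triple templates to rows returned by its source query."""
        for row in rows:
            for s, p, o in mapping["head"]:
                yield (s.format(**row), p, o.format(**row))

    rows = [{"id": 7, "name": "Ada"}]  # pretend result of executing the source query
    for triple in unfold(mapping, rows):
        print(triple)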

An XPath Debugger Based on Fuzzy Chance Degrees

We describe how an XPath expression can be manipulated in order to obtain a set of alternative XPath expressions that match a given XML document. For each alternative XPath expression we give a chance degree that represents the degree to which the expression deviates from the initial one. Thus, our work focuses on providing the programmer with a repertoire of paths that (s)he can use to retrieve answers. The approach has been implemented and tested.

Jesús M. Almendros-Jiménez, Alejandro Luna, Ginés Moreno
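
The general idea, deriving alternative XPath expressions from a given one and attaching a chance degree that decreases with the deviation, can be sketched with two simple rewrite rules (relax a step to a wildcard, or drop a step). The rules and the degrees below are assumptions made for illustration; they are not the fuzzy semantics defined in the paper.

    def alternatives(xpath, penalty_per_change=0.2):
        """Generate alternative XPath expressions with a rough chance degree."""
        steps = [s for s in xpath.split("/") if s]
        variants = set()
        for i in range(len(steps)):
            # Relax one step to a wildcard (small deviation).
            relaxed = steps[:i] + ["*"] + steps[i + 1:]
            variants.add(("/" + "/".join(relaxed), 1.0 - penalty_per_change))
            # Drop one step entirely (larger deviation).
            dropped = steps[:i] + steps[i + 1:]
            if dropped:
                variants.add(("/" + "/".join(dropped), 1.0 - 2 * penalty_per_change))
        return sorted(variants, key=lambda v: v[1], reverse=True)

    for expr, degree in alternatives("/library/book/title"):
        print(f"{degree:.1f}  {expr}")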

Efficiently Producing the K Nearest Neighbors in the Skyline for Multidimensional Datasets

We propose a hybrid approach that combines Skyline and Top-k solutions, and develop an algorithm named k-NNSkyline. The proposed algorithm exploits properties of monotonic distance metrics and identifies, among the skyline tuples, the k ones with the lowest values of the distance metric, i.e., the k nearest incomparable neighbors. Empirically, we study the behavior of k-NNSkyline in both synthetic and real-world datasets; our results suggest that k-NNSkyline outperforms existing solutions by up to three orders of magnitude.

Marlene Goncalves, Maria-Esther Vidal
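
The abstract's two ingredients, the skyline of incomparable tuples and the k of them nearest to a reference point, can be stated directly in a few lines. The naive sketch below (quadratic dominance test, Euclidean distance, minimisation on all dimensions) only illustrates the definition; the paper's algorithm exploits monotonic distance metrics to avoid this brute-force computation.

    from math import dist

    def dominates(a, b):
        """a dominates b if a is <= b in every dimension and < b in at least one (minimisation)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def knn_skyline(points, query, k):
        skyline = [p for p in points if not any(dominates(q, p) for q in points)]
        return sorted(skyline, key=lambda p: dist(p, query))[:k]

    data = [(1, 9), (2, 4), (3, 3), (5, 2), (6, 6), (9, 1)]
    print(knn_skyline(data, query=(0, 0), k=2))  # the two skyline tuples closest to the origin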

Evaluating Semantic Technology: Towards a Decisional Framework

The time required to access data and query a knowledge base is one of the most important parameters in designing an information system. As size increases, the complexity of an ontology makes reasoning and querying less efficient and very time-consuming. Although performance is crucial, other features, such as the type of license, the availability of community support, or the ease of adoption of a particular technology, are often key elements in the decision process of an industrial designer. This paper proposes an evaluation of the main semantic technologies in terms of different metrics at varying ontology sizes. The evaluation aims at building a concrete framework to support industrial designers of ontology-based software systems in making proper decisions, taking into account the scale of their knowledge base and the main metrics.

Roberto Armenise, Daniele Caso, Cosimo Birtolo

About the Performance of Fuzzy Querying Systems

Traditional database systems suffer from rigidity, and the use of fuzzy sets has been proposed to overcome it. Nevertheless, there is a certain resistance to adopting this approach, due to the presumption that it adds undesired costs that worsen the performance and scalability of software systems. RDBMSs are rather complex by themselves; extensions providing higher-level facilities with acceptable performance and good scalability would therefore be appreciated. In this paper, we carry out a formal statistical study of fuzzy querying performance. We considered two querying systems: SQLfi (loose coupling) and PostgreSQLf (tight coupling). Observed times for the latter are very reasonable, showing that it is possible to build high-performance fuzzy querying systems.

Ana Aguilera, José Tomás Cadenas, Leonid Tineo
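
Conceptually, a fuzzy condition replaces a strict true/false filter with a satisfaction degree in [0, 1] and returns the rows above a threshold together with their degree. The membership function and data below are illustrative only and do not reproduce SQLfi or PostgreSQLf query syntax.

    def young(age, full=25, zero=40):
        """Degree to which an age counts as 'young' (descending trapezoidal shoulder)."""
        if age <= full:
            return 1.0
        if age >= zero:
            return 0.0
        return (zero - age) / (zero - full)

    employees = [("ines", 23), ("jorge", 31), ("karla", 45)]
    threshold = 0.3
    results = [(name, round(young(age), 2)) for name, age in employees if young(age) >= threshold]
    print(results)  # [('ines', 1.0), ('jorge', 0.6)]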

Backmatter
