
2015 | Book

On the Move to Meaningful Internet Systems: OTM 2015 Workshops

Confederated International Workshops: OTM Academy, OTM Industry Case Studies Program, EI2N, FBM, INBAST, ISDE, META4eS, and MSC 2015, Rhodes, Greece, October 26-30, 2015. Proceedings

Edited by: Ioana Ciuciu, Hervé Panetto, Christophe Debruyne, Alexis Aubry, Peter Bollen, Rafael Valencia-García, Alok Mishra, Anna Fensel, Fernando Ferri

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

This volume constitutes the refereed proceedings of the following 8 International Workshops: OTM Academy; OTM Industry Case Studies Program; Enterprise Integration, Interoperability, and Networking, EI2N; International Workshop on Fact Based Modeling 2015, FBM; Industrial and Business Applications of Semantic Web Technologies, INBAST; Information Systems in Distributed Environment, ISDE; Methods, Evaluation, Tools and Applications for the Creation and Consumption of Structured Data for the e-Society, META4eS; and Mobile and Social Computing for collaborative interactions, MSC 2015. These workshops were held as associated events at OTM 2015, the federated conferences "On The Move Towards Meaningful Internet Systems and Ubiquitous Computing", in Rhodes, Greece, in October 2015.

The 55 full papers presented together with 3 short papers and 2 posters were carefully reviewed and selected from a total of 100 submissions. The workshops share the distributed aspects of modern computing systems; they experience the application pull created by the Internet and by the so-called Semantic Web, in particular developments in Big Data, the increased importance of security issues, and the globalization of mobile-based technologies.

Table of Contents

Frontmatter

On The Move Academy (OTMA) 2015: The 12th OnTheMove Academy PC Chairs’ Message

Frontmatter
Adaptation Mechanisms for Role-Based Software Systems

Software systems have become incredibly large and complex and are hence difficult to develop, maintain and evolve. Furthermore, such software systems do not operate in a stable environment. Certain properties of the environment, e.g. the available bandwidth, throughput or workload of the machine hosting the software system, vary over time and might adversely influence the system's performance under certain conditions. This issue is addressed by self-adaptive software systems, i.e. systems that can change their own behavior in response to changes in their operational environment [2].

Martin Weißbach
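
To illustrate the idea of self-adaptation described above, here is a minimal sketch of a monitor-analyze-adapt feedback loop. All names, thresholds and the role switch below are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a self-adaptation feedback loop (MAPE-style).
# All names and thresholds are illustrative, not taken from the paper.
import random
import time

def monitor() -> dict:
    """Sample an environment property the abstract mentions (e.g. bandwidth)."""
    return {"bandwidth_mbps": random.uniform(1.0, 100.0)}

def analyze(metrics: dict) -> bool:
    """Decide whether the observed environment degrades performance."""
    return metrics["bandwidth_mbps"] < 10.0

def adapt(system: dict) -> None:
    """Switch the system to a role better suited to the environment."""
    system["role"] = "low_bandwidth_mode"

system = {"role": "default_mode"}
for _ in range(3):                      # in reality this loop runs continuously
    metrics = monitor()
    if analyze(metrics):
        adapt(system)
    print(metrics, system["role"])
    time.sleep(0.1)
```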
Time Management in Workflows with Loops

The goal of this research is to enable proactive time management for workflows with loops. We want to offer time constraint patterns that allow the formulation of time constraints on activities that are contained in loops.

Furthermore, we design an algorithm for timed workflow graph computation that considers loops and given time constraints. We use the time constraints to bound unbounded loops: we iteratively expand the workflow, compute the timed workflow graph and check the satisfiability of the time constraints.

We also deal with fast recomputation of a timed workflow graph at runtime, which is needed to support slack distribution, situation assessment and the enactment of escalation strategies.

Margareta Ciglic
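
The iterative expansion idea in this abstract can be sketched in a few lines: expand a loop one iteration at a time and stop once a time constraint (here a simple deadline) would be violated. The function and numbers below are illustrative; the paper's timed workflow graph computation is considerably richer.

```python
# Hedged sketch of binding an unbounded loop via iterative expansion:
# expand one iteration at a time and check a time constraint (a deadline)
# until it would be violated. Purely illustrative.

def max_iterations_within_deadline(loop_body_duration: float,
                                   pre_duration: float,
                                   deadline: float) -> int:
    """Bound an unbounded loop by expanding it while the deadline holds."""
    iterations = 0
    elapsed = pre_duration              # activities before the loop
    while elapsed + loop_body_duration <= deadline:
        elapsed += loop_body_duration   # expand the workflow by one iteration
        iterations += 1                 # recompute the (trivial) timed graph
    return iterations

# Example: 2h of preceding work, 3h loop body, 12h deadline -> 3 iterations.
print(max_iterations_within_deadline(3.0, 2.0, 12.0))
```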
Intercloud Communication for Value-Added Smart Home and Smart Grid Services

The increasingly decentralized generation of renewable energy enables value-added smart home and smart grid (SHSG) services. The device data on which those services rely are often stored in clouds of different vendors, and each vendor's cloud usually offers its own service interface. It is increasingly challenging for service providers to access the data from all these clouds: each cloud forms a data silo in which users' device data are locked. Intercloud computing is one suggested approach to solving this emerging vendor silo problem. However, introducing a standardized service interface and simply interconnecting the clouds can easily result in unnecessary communication overhead. Compared to other domains applying Intercloud computing, the device data in the SHSG domain have special characteristics that should be considered in the design of an appropriate communication architecture. Thus, the focus of this research is on efficient communication for discovering and delivering device data in an SHSG Intercloud scenario. We present an architecture introducing an Intercloud Service (ICS) on top of the vendor clouds, and propose an evaluation methodology to investigate the efficiency of the chosen solution for the ICS.

Philipp Grubitzsch
Dynamics in Linked Data Environments

Linked Data resources represent things in the world, which change over time, i.e. the resources are dynamic. We want to address open questions in building complex applications that consume and interact with Linked Data resources. Applications that consume Linked Data need to know about its dynamics to efficiently obtain fresh data to build their logic on. To address this problem of freshness, we set up a study to observe the dynamics of Linked Data on the Web and provide insights into those dynamics. Increasingly, the Web provides not only read-only but also writeable Linked Data resources. We develop a model for dynamics in the context of writeable Linked Data such that complex applications can be built on this model. Finally, we present an observer that tracks the execution of applications that interact with multiple Linked Data resources.

Tobias Käfer

Industry Case Studies Program (ICSP) 2015: ICSP 2015 PC Chair’s Message

Frontmatter
Continuous Data Collection Framework for Manufacturing Industries

Combining high performance and quality with cost-effective productivity, and realizing reconfigurable, adaptive and evolving factories leading to sustainable manufacturing industries, is one of the emerging research challenges in the domain of the Internet of Things. It is therefore important to define strategies and technical solutions that allow manufacturing processes to react autonomously to changing factors. Continuous data collection from heterogeneous sources, and the infusion of such data into suitable processes to enable near-real-time reactive systems, is an emerging research domain. This work provides a technical framework for continuous data collection in support of supply network and manufacturing asset optimization based on collaborative production planning. It explores connections to legacy systems, software components and hardware devices to provide information to different data consumers. The most relevant achievement of this work is a framework that bridges differences among computing systems, devices and networks, giving applications uniform data access by decoupling data consumers from data sources.

Sudeep Ghimire, Raquel Melo, Jose Ferreira, Carlos Agostinho, Ricardo Goncalves
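
A minimal sketch of the decoupling idea mentioned in the last sentence of the abstract: a publish/subscribe hub lets data consumers receive readings without ever addressing data sources directly. All class and topic names below are hypothetical.

```python
# Minimal publish/subscribe sketch illustrating the decoupling of data
# consumers from data sources. Names are hypothetical; the actual
# framework targets heterogeneous industrial devices and legacy systems.
from collections import defaultdict
from typing import Callable

class DataHub:
    """Uniform access point between data sources and data consumers."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, consumer: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(consumer)

    def publish(self, topic: str, reading: dict) -> None:
        for consumer in self._subscribers[topic]:
            consumer(reading)           # consumers never see the source

hub = DataHub()
hub.subscribe("machine/temperature", lambda r: print("planner got", r))
hub.publish("machine/temperature", {"source": "sensor-42", "celsius": 71.3})
```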
Determination of Manufacturing Unit Root-Cause Analysis Based on Conditional Monitoring Parameters Using In-Memory Paradigm and Data-Hub Rule Based Optimization Platform

Manufacturing plants run disparate condition-monitoring processes for their processing units, with diverse sets of sensors that produce petabytes of data. This leads to a conglomeration of problems, not limited to data management but also including the inability to present an overall view of critical observations in real time. Some of these observations impact revenue and cost, and because of a lack of aggregation efforts this overview is not available to decision makers. With the intention of highlighting the critical performance factors and applied techniques, we present the overall view of a platform that addresses such issues and assists decision makers with data points for making choices leading to profitability and sustainability of their business outlook. The platform serves as a single point of aggregation for diversified but correlated data points, with custom data logic applied according to business rules to provide correlations between data. These correlations help the user trace the source data points of the manufacturing units concerned, where technical parameters can be optimized. The resulting data flow and processing enable root-cause identification using the data hub platform, while real-time analytics are made available using an in-memory column store database approach.

Prabal Mahanta, Saurabh Jain
“Wear” Is the Manufacturing Future: The Latest Fashion Hitting the Workplace

The evolution of technology continues to bring socially accepted gadgets such as the smartwatch and other wearable devices into the world of work. The manufacturing sector, along with most other sectors, is bracing itself for all the functions that are readily accepted in our daily lives to become part of the armoury of shop floor personnel. Simple forms of safety clothing such as gloves and safety glasses are mandatory in many manufacturing environments, so it is an easy step to add technology layers to these wearables to create new cyber links between people and their machines. In order to deploy these technologies, however, we need to understand the barriers that hold back the implementation of these Cyber Physical Systems and the steps necessary for the full implementation of Human-Machine linkages from Cloud systems to shop floor environments.

Gurbaksh Bhullar
Evaluating the Utilization of the ProcessGene Repository for Generating Enterprise-Specific Business Processes Models

Generic reference models are based on the assumption of similarity between enterprises - either cross-industry or within a given sector. They are formed mainly in order to assist enterprises in constructing their own, specific process models. This research presents an empirical evaluation of the quality of the ProcessGene process repository in generating individualized models.

Maya Lincoln, Avi Wasser
Impact of Internet of Things in the Retail Industry

The rise of the Internet of Things has led to many game-changing efforts to create new business models and opportunities in various domains of industry. In this paper, we look into the impact that the concept of IoT will have on the retail industry in the coming years, from the point of view of a new business outlook based on the parameters of security, reliability, integration, discoverability, and interoperability. The paper also presents new concepts that can be implemented for business profitability using various IoT technologies, with a prime focus on embedded systems, cyber physical systems, generic sensors and security. In relation to the retail industry, the focus of development and support of IoT technology will shift from mere data collection to knowledge creation, which can enable value chain development using a framework that concentrates more on the legal point of view. Not only has the technology paradigm shifted, but businesses are also changing in terms of scalability, dynamicity, heterogeneity and interconnectivity. The paper discusses newer ideas and their business and social impact on the industry in terms of profitability and adaptability.

Pradeep Shankara, Prabal Mahanta, Ekta Arora, Guruprasad Srinivasamurthy
An Internet of Things (IoT) Based Cyber Physical Framework for Advanced Manufacturing

This paper outlines an IoT based collaborative framework which provides a foundation for cyber physical interactions and collaborations for advanced manufacturing domains; the domain of interest is the assembly of micro devices. The design of this collaborative framework is discussed in the context of IoT networks, cloud computing as well as the emerging Next Internet which is the focus of recent initiatives in the US, EU and other countries. A discussion of some of the key cyber physical resources and modules is outlined followed by a discussion of the implementation and validation of this framework.

Yajun Lu, J. Cecil

International Workshop on Enterprise Integration, Interoperability and Networking (EI2N) 2015: EI2N’2015 Co-Chairs’ Message

Frontmatter
Subject-Oriented BPM as the Glue for Integrating Enterprise Processes in Smart Factories

This paper presents how an existing approach to business process management, Subject-oriented BPM (S-BPM), provides a foundation for seamlessly integrating processes in production enterprises, from business processes to real-time production processes. The applicability of S-BPM is based on its simplicity and encapsulation of separate process domains. This supports agility as all stakeholders can be engaged and the effects of changes can be limited to individual modules of the process. An application and tool support developed in an ongoing European research project are presented to illustrate the approach.

Udo Kannengiesser, Matthias Neubauer, Richard Heininger
Extended Service Modelling Language for Enterprise Network Integration

With the development of globalization and information technology, service-oriented technologies and enterprise networks are arising quickly. Supply chain management, virtual enterprises, dynamic alliances, e-business and so forth are leading intra-enterprise and networked enterprise management towards enterprise network management and integration. At the same time, Service Oriented Architecture (SOA) triggers a series of servicization processes in enterprise management, inter-enterprise cooperation, manufacturing processes, as well as IT-related infrastructures. CEN/TC 310/WG 1 is developing the Service Modelling Language (SML) for Virtual Manufacturing Enterprises (VMEs). Based on Model Driven Architecture, SML tries to specify the Business Service Modelling (BSM) level of the Model Driven Service Engineering Architecture (MDSEA). In this paper, MDSEA and SML are extended for enterprise network integration. A three-level modelling framework, comprising business process modelling, service process modelling and operation process modelling, is introduced. The Collaboration Point (CP) concept is developed to describe the cooperation mechanism between enterprises so as to overcome the process and data fragmentation caused by enterprise organizational boundaries. The model mapping method among the three levels is also presented in the paper.

Qing Li, Peixuan Xie, Xiaoqian Feng, Hongzhen Jiang, Qianlin Tang
SAIL: A Domain-Specific Language for Semantic-Aided Automation of Interface Mapping in Enterprise Integration

Mapping elements of various interfaces is one of the most complex tasks in enterprise integration. Differences in the ways these interfaces represent data lead to the need for conflict detection and resolution. We present an approach in which a structural model of the interfaces can be annotated with a semantic model and the two used together to (semi-)automate this process. A domain-specific language (DSL) is proposed that can be used to specify criteria for interface element mapping, define conflicts along with steps for their resolution where possible, and state how the resulting mappings are translated into the expressions needed for code generation. This DSL is intended to give the user the possibility to customise a prototype tool (which we have presented earlier), enabling us to test our approach in practice and yield a real-world runnable implementation. Code generated by this tool is deployable to an enterprise service bus (ESB).

Željko Vuković, Nikola Milanović, Renata Vaderna, Igor Dejanović, Gordana Milosavljević
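
The following sketch illustrates, in plain Python rather than the proposed DSL (whose actual syntax the paper defines), the core idea of criteria-based element mapping with conflict detection. The interface definitions and the matching criterion are illustrative assumptions.

```python
# Illustrative sketch (not the actual SAIL syntax) of criteria-based
# interface element mapping with conflict detection for later resolution.

SOURCE = {"customerName": "string", "amount": "decimal"}   # hypothetical
TARGET = {"customer_name": "string", "amount": "int"}      # hypothetical

def normalize(name: str) -> str:
    """A simple name-based mapping criterion: ignore case and underscores."""
    return name.replace("_", "").lower()

mappings, conflicts = [], []
for s_name, s_type in SOURCE.items():
    for t_name, t_type in TARGET.items():
        if normalize(s_name) == normalize(t_name):
            if s_type == t_type:
                mappings.append((s_name, t_name))
            else:
                # a conflict that a resolution step (e.g. a cast) must handle
                conflicts.append((s_name, t_name, s_type, t_type))

print("mappings:", mappings)
print("conflicts:", conflicts)
```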
Propelling SMEs Business Intelligence Through Linked Data Production and Consumption

The introduction of linked data concepts to SMEs, coupled with sophisticated analytics and visualizations delivered through an integrated environment called the LinDA Workbench, reduces the effort of specific workflows within a company by almost 50% in terms of time, while its major benefit is the introduction of new, innovative business models and values in the SMEs' service provisioning. In this manuscript, the initial findings of the Business Intelligence Analytics (BIA) pilot operation of the LinDA project are discussed, concerning the examination of the effects of Over-The-Counter (OTC) medicines liberalisation in Europe. The analysis aims at identifying correlations between pharmaceutical, healthcare, socio-economic and political parameters and introduces several research questions that the present paper aims to answer, such as: Is linked data useful for SMEs? What are the benefits of integrating it in their operational environment? Are the analysis results of such a scenario meaningful for SME service provisioning?

Barbara Kapourani, Eleni Fotopoulou, Dimitris Papaspyros, Anastasios Zafeiropoulos, Spyros Mouzakitis, Sotirios Koussouris
A Domain Specific Language for Organisational Interoperability

In the SUddEN (SMEs Undertaking Design of Dynamic Ecosystem Networks) project, a web-based environment has been researched supporting automotive SMEs (Small and Medium Sized Enterprises) with respect to organisational interoperability using performance measurement systems. Using a CAS (Complex Adaptive Systems) point of view, we have implemented a Domain Specific Language and interoperability support services implementing the SUddEN frame of reference. Extending interoperability support to the process and data level, this environment enables simulating processes and data transfers.

Georg Weichhart, Christian Stary
Understanding Personal Mobility Patterns for Proactive Recommendations

This paper proposes an innovative methodology for extracting and learning personal mobility patterns. The objective is to provide daily commuters in a city with personalized and proactive recommendations related to their daily mobility habits. In current approaches, users have to explicitly provide their routes (origin, destination and date/time) to a routing engine in order to be notified about traffic events. The proposed approach goes beyond this and learns users' daily mobility habits without requiring them to provide any information. The work presented here is currently being carried out under the EU OPTIMUM project. The results achieved establish the basis for the formalization of the OPTIMUM domain knowledge on personal mobility patterns.

Ruben Costa, Paulo Figueiras, Pedro Oliveira, Ricardo Jardim-Goncalves
A Real-Time Architecture for Proactive Decision Making in Manufacturing Enterprises

We outline a new architecture for supporting proactive decision making in manufacturing enterprises. We argue that event monitoring and data processing technologies can be coupled with decision methods to effectively provide capabilities for proactive decision-making. We present the main conceptual blocks of the architecture and their role in the realization of the proactive enterprise. We illustrate how the proposed architecture supports decision-making ahead of time, on the basis of real-time observations and the anticipation of future undesired events, by presenting a practical condition-based maintenance scenario in the oil and gas industry. The presented approach provides the technological foundation and can be taken as a blueprint for the further development of a reference architecture for proactive applications.

Alexandros Bousdekis, Nikos Papageorgiou, Babis Magoutas, Dimitris Apostolou, Gregoris Mentzas
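
A minimal sketch of the proactive principle described above: observe real-time readings, anticipate a future undesired event, and act before it occurs. The extrapolation and threshold below are illustrative assumptions, not the decision methods used in the paper.

```python
# Sketch of proactive, event-driven decision making: predict a future
# sensor value and trigger maintenance before the failure occurs.
# Threshold and linear extrapolation are illustrative assumptions.

def predict_next(values: list[float]) -> float:
    """Naive linear extrapolation from the last two observations."""
    return values[-1] + (values[-1] - values[-2])

FAILURE_THRESHOLD = 100.0     # e.g. a vibration level indicating failure
readings = [80.0, 88.0, 97.0] # real-time observations, most recent last

if predict_next(readings) >= FAILURE_THRESHOLD:
    print("schedule maintenance ahead of the predicted failure")
```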
Osmotic Event Detection and Processing for the Sensing-Liquid Enterprise

The Sensing-Liquid Enterprise paradigm enhances sensing capabilities of the Sensing Enterprise with fuzzy boundaries of the Liquid Enterprise by interconnecting real, virtual and digital worlds through semi-permeable membrane behavior. Shadow images of the different worlds need to be kept consistent. Osmotic data flows between the real, digital and virtual world allow events to break out of their inner world behavior and to advance to inter-world events.

This paper combines semantic web technologies and complex event processing to enable osmotic event detection and processing for the Sensing-Liquid Enterprise. Events are enriched with semantic information and examined for inter-world relevance. The presented approach is accompanied by its application in the OSMOSE Project.

Artur Felic, Spiros Alexakis, Carlos Agostinho, Catarina Marques-Lucena, Klaus Fischer, Michele Sesana
PiE - Processes in Events: Interconnections in Ambient Assisted Living

In the era of the Internet of Things (IoT), sensors distributed in the environment can provide essential information to be exploited. In this work we propose to exploit the advantages of a sensor-enriched environment to support the processes of several cooperating organizations. Our approach, PiE (Processes in Events), aims to identify and exploit interconnections between processes without demanding the restructuring of their inner structure. Starting from a set of events generated by sensors and business processes (BPs), we propose a methodology for multiple process annotation. From the analysis of event correlations, we can discover interconnections among processes of several organizations involved in the same goal and derive additional information about the processes being executed. An example within an Ambient Assisted Living (AAL) scenario is studied, where several organizations cooperate to provide social and health care to a subject.

Monica Vitali, Barbara Pernici

International Workshop on Fact Based Modeling (FBM) 2015: FBM PC Co-Chairs’ Message

Frontmatter
Developing and Maintaining Durable Specifications for Law or Regulation Based Services

The Netherlands has enacted many laws. The responsibility for the execution of associated services and the enforcement of this legislation is assigned to a substantial number of governmental bodies. Where possible, the associated services are performed digitally. The interaction between citizens/businesses and government, often described only implicitly in legislation, is the basis for the durable specifications of these services. One of the organisations in charge of developing and delivering these services is the Dutch Tax and Customs Administration (DTCA). DTCA is facing several challenges in developing and maintaining durable specifications for these services. This paper proposes to combine FBM with the case-based semantics of rules for durable specifications. We established the need to extend the work of Hohfeld with a clear distinction between legal relations and legal acts, to add time travel, and to strongly connect the expected and actual cases with the laws and regulations.

Diederik Dulfer, Sjir Nijssen, Mariette Lokin
Fact Based Legal Benefits Services

One of the organisations in charge of developing and delivering legal services in the Netherlands is the Dutch Tax and Customs Administration (DTCA). Next to tax and customs, DTCA is also responsible for benefits (such as housing allowance, health care insurance allowance and childcare allowance). The benefits services of DTCA were among the first services based on a fact based approach. This paper shows how the benefits services of DTCA are designed, and describes the current challenges and future plans.

Gert Veldhuijzen van Zanten, Paul Nissink, Diederik Dulfer
Integrating Modelling Disciplines at Conceptual Semantic Level

For an organization to maintain (or achieve) a competitive edge and to be continuously compliant with ever-changing regulations, it is necessary that it can react in a timely and cost-effective fashion to changes in the immediate environment that affect its business. This can only be achieved if the organization has an effective grip on its total body of knowledge.

In order to get a grip on its knowledge, an organization needs insight into and control over the way the overall goals of the organization and the associated laws and regulations are translated into an operational way of working. As new technologies are introduced frequently, it is of great interest to have a sustainable knowledge description of the operational way of working, independent of any physical realization, including a two-way audit trail (from source to implementation and back).

In this paper we explain that for an organization to gain true insight into its operations, it is not enough to create independently conceived process, information and rules models; it is also important to gain insight into (and thus an understanding of) the relationships between these different models.

Inge Lemmens
Using Fact-Based Modelling to Develop a Common Language
A use case

In today's business environment, the challenges an organization has to face have increased in number and complexity. Not only has competition become tougher; organizations, and in particular financial institutions, have to fulfil an increasing number of regulations imposed by external organizations. To fulfil these legal obligations, a common understanding is required to remove ambiguities within the organizations and to ensure correct reporting.

A common understanding is achieved through the use of a common language in which each relevant term is provided with a single Definition that contains no ambiguity, such that the risk of misinterpretation is reduced drastically and the time spent on research when a new reporting query arises from a (change of) legislation decreases. In this paper, we explain how fact-based modelling is used to develop this common understanding, using fact types as the basic building blocks for the Definitions.

Inge Lemmens, Jan Mark Pleijsant, Rob Arntz
Achieving Interoperability at Semantic Level

Developing and operating Space Systems involves complex activities, involving many parties distributed in location and time. This development requires efficient and effective interoperability during the overall Space System development and operations lifecycle.

Interoperability is often described from a syntactic viewpoint, focusing on data exchange formats. While syntactic interoperability is required, the prerequisite for any successful information exchange is to ensure that all actors involved share a common understanding of the information that will be exchanged. This aspect of interoperability is known as semantic interoperability. Semantic interoperability focuses on “what” is being exchanged, while syntactic interoperability focuses on “how” it is being exchanged.

The need for semantic interoperability is described in [1], which is developed by the European Cooperation for Space Standardization (ECSS). This technical memorandum introduces the concept of a “global conceptual data model” as a means to achieve the required semantic interoperability, and fact-based modelling as the means to develop such a global conceptual data model.

In this paper, we address the issues of semantic interoperability and describe the developments on-going at the European Space Agency (ESA) for fully supporting semantic interoperability.

Inge Lemmens, Jean-Paul Koster, Serge Valera
The Great Work of Michael Senko as a Part of a Durable Application Model

This paper defines a language for expressing a durable semantic application model as a major part of the specifications for business applications. This language consists of the modeling constructs from Natural Language Modeling (NLM). The article shows how these modeling constructs constitute a hierarchy within any durable semantic application model.

Peter Bollen
The Evolution Towards a Uniform Referencing Mode in Fact-Based Modeling

Since its inception in the 1970s, fact-based modeling has evolved. In this article we sketch this evolution mainly from the referencing mode's point of view, comparing referencing modes from the N-ary 1989 NIAM model to the contemporary incarnation of fact-based modeling known as CogNIAM.

Peter Bollen
CogniLex
A Legal Domain Specific Fact Based Modeling Protocol

The Dutch Government has decided to provide services and enforcement actions based on laws and decrees. The vast majority of these services and actions are IT-based. The union of laws and decrees, both governmental and ministerial, is hereafter collectively called regulation. So far, large regulation-based projects have often run hundreds of millions over budget. The Dutch parliament has of course held more than one investigation into how to avoid this overspending, but there is still a lot to be improved. In 2012, a number of government service organizations, academia and innovative companies decided to establish a co-creation with the aim of developing a national protocol to “translate the regulation” into a durable model that can be used better in intelligent work than the conventional textual representation. The co-creation, called “The Blue Chamber”, has so far issued two reports in Dutch and presented 5 papers at international conferences. In this paper we describe the conceptual architecture of The Blue Chamber and how the CogNIAM variant of Fact Based Modeling has been used to develop a protocol that is tuned to this specific but extremely large domain of legislation.

Mariette Lokin, Sjir Nijssen, Inge Lemmens
Business Object Model: A Semantic-Conceptual Basis for Transition and Optimal Business Management

Obvion is a residential mortgage loan provider operating on the Dutch market. A recent internal Obvion study into the target architecture of their services recommends further elaboration of an Obvion Business Object Model (BOM). This model comprises the business terms used within Obvion, their meaning, their interconnections and the related regulations. The model would offer the upcoming transition projects - where new designs are created for existing applications - a reliable and sufficiently detailed starting point, with significant positive effects on the reduction of errors, time and costs in the implementation.

A proven approach was chosen for the implementation of the Obvion BOM. For the model the CogNIAM variant of Fact Based Modeling is used that allows the knowledge present in documentation, applications and people’s minds to be elicited and represented in a completely structured fashion. To capture the knowledge in a model the OMG’s open knowledge standard SBVR is applied.

Roel Moberts, Ralph Nieuwland, Yves Janse, Martijn Peters, Hennie Bouwmeester, Stan Dieteren
A Sustainable Architecture for Durable Modeling of Laws and Regulations and Main Concepts of the Durable Model

The authors have been involved in the durable modeling of laws and regulations, to be used as formal, testable requirements for law- and regulation-based services that are understandable to a multi-disciplinary group. At least one co-creation initiative in the Netherlands has decided to develop an extended protocol for the durable modeling of laws and regulations. The vast majority of the associated services and actions are information-intensive and require a substantial IT effort. We describe the main ideas underlying the protocol developed in the Blue Chamber over the last three years. Durable modeling of laws and regulations can only be applied in practice when the result is recognizable by stakeholders and can be used for modelling services based on these laws and regulations. To test this assumption, we illustrate the protocol using the new Dutch environment planning act.

Marco Brattinga, Sjir Nijssen
1975-2015: Lessons Learned with Applying FBM in Professional Practice

Around 1975 the first FBM conceptual modeling services were introduced in Europe. A few years later Control Data Corporation introduced Information Analysis Services in North America. What have the various professionals introducing or applying FBM learned in that period? In this paper we will focus on the experiences obtained in The Netherlands and Belgium in Europe and Canada and USA in North America. What do business persons ask? What do they like in FBM? What do they dislike in FBM? This paper summarizes the experiences of two FBM practitioners during the period 1975-2015.

Sjir Nijssen, Baba Piprani
Developing the Uniform Economic Transaction Protocol
A Fact Based Modeling Approach for Creating the Economic Internet Protocol

The global economy faces a growing number of interfaces through which economic transactions take place. As more and more types of transactions occur in the information age, more regulations, guidelines and IT approaches are created to handle them. This results in increasing costs, cumbersome processes and higher entry barriers for small and new players. The Uniform Economic Transaction Protocol (UETP) is a free, open source protocol for an open global network for economic transactions: the economic internet. A uniform economic language will enable aligned, real-time connectivity that increases trust and decreases costs and the number of needed connections. It makes it possible for all economic parties (buyers, sellers, banks, fiscal authorities, logistics providers, etc.) to speak the same economic language and connect to the online economic transaction. In this paper we describe UETP and how the CogNIAM variant of Fact Based Modeling has been used to develop it.

Johan Saton, Floris Kleemans

Workshop on Industrial and Business Applications of Semantic Web Technologies (INBAST) 2015: INBAST 2015 PC Co-Chairs’ Message

Frontmatter
Benchmarking Applied to Semantic Conceptual Models of Linked Financial Data

Semantic modeling plays a central role in knowledge-based systems where information sharing and integration is a primary objective. Ontology and metadata description languages such as OWL (Web Ontology Language) and RDF(S) (Resource Description Framework Schema) are the most commonly used for representing semantic models and data. The graph-like structure adopted for semantic metadata representation allows simple and expressive queries using SPARQL-based subgraph matching. While the performance of such knowledge-based systems depends on multiple factors, in this work we present a mechanism to properly choose a semantic modeling pattern in order to significantly reduce data query execution time. Based on this understanding, this work proposes a comparative analysis of different conceptual modeling approaches in the financial domain. In order to show the efficiency/accuracy of our approach, SPARQL-based queries were evaluated against differently modeled datasets.

José Luis Sánchez-Cervantes, Lisbeth Rodríguez-Mazahua, Giner Alor-Hernández, Cuauhtémoc Sánchez-Ramírez, Jorge Luis García-Alcaráz, Emilio Jimenez-Macias
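
The kind of benchmark described can be sketched as follows: time the same logical query against two RDF modeling patterns of one fact. The data, queries and use of rdflib below are illustrative assumptions, not the paper's actual datasets or setup.

```python
# Sketch of timing one logical SPARQL query against two modeling patterns
# of the same fact (a direct property vs. a reified observation).
# Illustrative only; requires rdflib (pip install rdflib).
import time
from rdflib import Graph

DIRECT = """@prefix ex: <http://example.org/> .
ex:acct1 ex:balance 100.0 ."""                       # direct-property pattern

REIFIED = """@prefix ex: <http://example.org/> .
ex:obs1 ex:ofAccount ex:acct1 ; ex:value 100.0 ."""  # observation pattern

QUERIES = {
    "direct":  "SELECT ?b WHERE { ?a <http://example.org/balance> ?b }",
    "reified": """SELECT ?b WHERE { ?o <http://example.org/ofAccount> ?a ;
                                        <http://example.org/value> ?b }""",
}

for name, data in [("direct", DIRECT), ("reified", REIFIED)]:
    g = Graph().parse(data=data, format="turtle")
    start = time.perf_counter()
    list(g.query(QUERIES[name]))                     # run the subgraph match
    print(name, f"{time.perf_counter() - start:.6f}s")
```

On toy data the timings are noise, but the structure (same fact, two patterns, identical result sets, measured query time) is the essence of such a comparison.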
Ontology-Driven Instant Messaging-Based Dialogue System for Device Control

The im4Things platform aims to develop a communication interface for devices on the Internet of Things (IoT) through intelligent dialogue based on written natural language over instant messaging services. This communication can be established in different ways, such as sending orders and querying status. Also, the devices themselves are responsible for alerting users when a change has occurred in the device's sensors. The system has been validated and has obtained promising results.

José Ángel Noguera-Arnaldos, Miguel Ángel Rodriguez-García, José Luis Ochoa, Mario Andrés Paredes-Valverde, Gema Alcaraz-Mármol, Rafael Valencia-García
Ontology-Based Integration of Software Artefacts for DSL Development

This paper addresses a high level semantic integration of software artefacts for the development of Domain Specific Languages (DSL). The solution presented in the paper utilizes a concept of DSL meta-model ontology that is defined in the paper as consisting of a system ontology linked to one or more domain ontologies. It enables dynamic semantic integration of software artefacts for the composition of a DSL meta-model. The approach is prototypically implemented in Java as an extension to the DSL development tool CoCoViLa.

Hele-Mai Haav, Andres Ojamaa, Pavel Grigorenko, Vahur Kotkas
Towards an Adaptive Tool and Method for Collaborative Ontology Mapping

Linked Data makes available a vast amount of data on the Semantic Web for agents, both human and software, to consume. Linked Data datasets are made available with different ontologies, even when their domains overlap. The interoperability problem that arises when one needs to consume and combine two or more such datasets to develop a Linked Data application or mashup is still an important challenge. Ontology-matching techniques help overcome this problem. The process, however, often relies on knowledge engineers to carry out the tasks, as they have expertise in ontologies and semantic technologies. It is reasonable to assume that knowledge engineers require help from domain experts, end users, etc. to validate the results and help distill ontology mappings from the resulting correspondences. However, current ontology-mapping tools are not designed with the different types of users expected to be involved in the creation of Linked Data applications or mashups in mind. In this paper, we identify the different users and their roles in the mapping involved in developing Linked Data mashups and propose a collaborative mapping method in which we prescribe where collaboration between the different stakeholders could, and should, take place. In addition, we propose a tool architecture that brings together an adaptive interface, mapping services, workflow services and agreement services to ease the collaboration between the different stakeholders. This output will be used in an ongoing study to construct a collaborative mapping platform.

Ramy Shosha, Christophe Debruyne, Declan O’Sullivan
Data Access in Cloud HICM Solutions. An Ontology-Driven Analytic Hierarchy Process Based Approach

Data federation and virtualization in SaaS environments is one of the open issues in cloud environments. Within human resource management information systems, there is a huge amount of data that other corporate applications need to access and manipulate. Meta4 is one of the leaders in the human and intellectual capital sectors, both in SaaS and on-premise scenarios. This paper presents a framework to select the best option for data access in a SaaS environment from the developer's side, using the Analytic Hierarchy Process enriched by an ontological representation of it. This framework would be useful for all SaaS service providers willing to open SaaS data to their customers.

Ricardo Colomo-Palacios, Eduardo Fernandes, Juan Miguel Gómez-Berbís
Knowledge Management for Virtual Education Through Ontologies

Current knowledge management focuses on knowledge acquisition, storage, retrieval and maintenance. E-learning systems technology today is used primarily for training courses on carefully selected topics to be delivered to students registered for those courses. Knowledge management is used to rapidly capture, organize and deliver large amounts of corporate knowledge; the practice of adding value to information by capturing tacit knowledge and converting it into explicit knowledge is known as knowledge management. Ontologies can represent existing knowledge from a domain; in this work, ontologies are used to model different aspects of knowledge management for virtual education in higher education. Universities, from the perspective of knowledge management, provide an updated concept of higher education in which knowledge is considered a product and the customers are students. This paper explains an ontological framework used to model and integrate knowledge management processes and a technological architecture for knowledge management in virtual education.

Ana Muñoz, Victor Lopez, Katty Lagos, Mitchell Vásquez, Jorge Hidalgo, Nestor Vera
Semantic Model of Possessive Pronouns Machine Translation for English to Bulgarian Language

The paper presents a technique to interpret possessive pronouns in English-to-Bulgarian syntax-based machine translation. It uses the Universal Networking Language (UNL) as a formal framework to represent the grammar features of possessive pronouns, employing semantic networks for both English and Bulgarian. The technique also includes a statistically-based estimation of translation accuracy, measuring precision and recall on both training and controlled electronic text corpora and improving the related grammar rules.

Velislava Stoykova
Machine-Assisted Generation of Semantic Business Process Models

Generic reference models are based on the assumption of similarity between enterprises - either cross-industry or within a given sector. This research suggests a machine-assisted methodology and tools for the design and generation of individualized business process models based on generic repositories. An empirical evaluation is included to assess the effectiveness of the proposed method.

Avi Wasser, Maya Lincoln

International Workshop on Information Systems in Distributed Environment (ISDE) 2015: ISDE 2015 PC Co-Chairs’ Message

Frontmatter
Tool Chains in Agile ALM Environments: A Short Introduction

This article highlights tool integration within Agile Application Lifecycle Management (ALM) environments. An essential ingredient of an effective agile ALM process is the set of techniques used to form the coalitions of tools that support some or all of its activities. The article aims to address the problem faced by practitioners when establishing a tool chain environment aligned with their development process and culture. To provide practical, step-wise information on the creation of a tool chain, we explore how the ALM process model can be used to create a skeleton of specialized tools. We identify a set of proposed criteria for tool selection and show how tools can be set up on different development platforms.

Saed Imran, Martin Buchheit, Bernhard Hollunder, Ulf Schreier
Scale Up Internet-Based Business Through Distributed Data Centers

Distributed data centers are becoming more and more important for internet-based companies. Without distributed data centers, it is hard for internet companies to scale up their business. The traditional centralized data center suffers from bottleneck and single-point-of-failure problems. Therefore, more and more internet companies are building distributed data centers, and more and more business is moved onto distributed Web services. This paper reviews the history of distributed Web services and studies their current status by examining the distributed data centers of several top Internet companies. Based on this study, we conclude that distributed services, including distributed data centers, are a key factor in scaling up the business of a company, especially an internet-based company.

Liguo Yu, Alok Mishra, Deepti Mishra
Distributed Architecture for Supporting Collaboration

This paper concerns the design and implementation of a distributed digital system supporting participants during co-located collaborative sessions. In many companies, traditional meeting methods are still used; the challenge is to propose a computer-supported solution that helps capitalize knowledge without impacting the efficiency of such meetings. The paper presents the architecture of a distributed system involving different types of devices, with a large multi-touch screen as the main support. The distributed architecture relies on a multi-agent system whose agents listen to events from the applications and exchange messages to perform the functionalities of the system, such as synchronization and persistence. The technology used for the development of the applications and the agents is also detailed. The last section of the paper introduces experiments that have been conducted with this system.

Claude Moulin, Kenji Sugawara, Yuki Kaeri, Marie-Hélène Abel
Evolving Mashup Interfaces Using a Distributed Machine Learning and Model Transformation Methodology

Nowadays users access information services at any time and in any place. Providing an intelligent user interface that adapts dynamically to users' requirements is essential in information systems. Conventionally, systems are constructed at design time according to an initial structure and requirements. As time passes and users, applications and the environment change, such systems cannot always satisfy the users' requirements. In this paper a methodology is proposed to allow mashup user interfaces to be intelligent and evolve over time, using computational techniques such as machine learning over huge amounts of heterogeneous data (big data) and model-driven engineering techniques such as model transformations. The aim is to generate new ways of adapting the interface to the user's needs, using information about the user's interaction and the environment.

Antonio Jesus Fernandez-Garcia, Luis Iribarne, Antonio Corral, James Z. Wang
An Architectural Model for System of Information Systems

One of the most important aspects when designing and constructing an Information System is its architecture. This also applies to complex systems such as Systems of Information Systems (SoIS). We therefore aim to propose an architectural model for SoIS. Though architecture-based approaches have been promoted as a means of controlling the complexity of system construction and evolution, what we really look for in this paper is an architectural model to aggregate services from already constructed systems. Nevertheless, it is good practice to compare the presented SoIS architecture to other architecture-based approaches such as Service Oriented Architecture (SOA), and to examine how the well-established standards of SOA can be used for designing SoIS. In this paper we present an architectural model for Systems of Information Systems and highlight the standards of Service Oriented Architecture that might help in this task.

Saleh Majd, Abel Marie-Hélène, Mishra Alok
A Software Development Process Model for Cloud by Combining Traditional Approaches

Even though cloud computing is a technological paradigm that has been adopted more and more in various domains, there are few studies investigating the software development lifecycle for cloud computing applications, and there is still no comprehensive software development process model for cloud computing. Because the nature of cloud computing differs fundamentally from traditional software development, process models are needed to perform software development systematically and create high-quality software. In this study, we propose a new conceptual software development life cycle model for cloud software development that incorporates characteristics of different process models for traditional software development. The proposed model takes the specific characteristics of traditional models into account and also considers the cloud's specific nature, i.e. its advantages and challenges.

Tuna Hacaloglu, P. Erhan Eren, Deepti Mishra, Alok Mishra
Semantic Matching of Components at Run-Time in Distributed Environments

Software factories are a key element in Component-Based Software Engineering due to the common space provided for software reuse through repositories of components. These repositories can be developed by third parties in order to be inspected and used by different organizations, and they can also be distributed in different locations. Therefore, there is a need for a trading service that manages all available components. In this paper, we describe a matching process based on syntactic and semantic information of software components. This matching operation is part of a trading service which is in charge of generating configurations of components from architectural definitions. With this aim, the proposed matching allows us to evaluate and score the possible configurations, thus guiding a search process to build the architectural solution which best fulfills an input definition.

Javier Criado, Luis Iribarne, Nicolás Padilla, Rosa Ayala
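
A hedged sketch of the matching idea above: score a candidate component against an architectural requirement by combining syntactic (signature) and semantic (ontology concept) similarity. The similarity measures and weights are illustrative assumptions, not the paper's actual trading service.

```python
# Illustrative scoring of a component against a requirement by combining
# syntactic and semantic similarity. Measures and weights are assumptions.

def syntactic_score(required: dict, offered: dict) -> float:
    """Fraction of required operation signatures the component offers."""
    hits = sum(1 for op, sig in required.items() if offered.get(op) == sig)
    return hits / len(required)

def semantic_score(required_concepts: set, offered_concepts: set) -> float:
    """Jaccard overlap of ontology concepts annotating the interfaces."""
    union = required_concepts | offered_concepts
    return len(required_concepts & offered_concepts) / len(union)

def match(required, offered, req_c, off_c, w_syn=0.5, w_sem=0.5) -> float:
    """Weighted combination used to rank candidate configurations."""
    return (w_syn * syntactic_score(required, offered)
            + w_sem * semantic_score(req_c, off_c))

print(match({"draw": "(Map)->None"}, {"draw": "(Map)->None"},
            {"GIS", "Layer"}, {"GIS", "Raster"}))   # hypothetical example
```

Scores like this one give a search process a total order over candidate configurations, which is how a trading service can pick the configuration that best fulfills an input architectural definition.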

Workshop on Methods, Evaluation, Tools and Applications for the Creation and Consumption of Structured Data for the e-Society (META4eS) 2015: META4eS 2015 PC Co-Chairs’ Message

Frontmatter
Creating and Consuming Metadata from Transcribed Historical Vital Records for Ingestion in a Long-Term Digital Preservation Platform
(Short Paper)

In the Irish Record Linkage 1864-1913 (IRL) project, digital archivists transcribe digitized register pages containing vital records into a database, which is then used to generate RDF triples. Historians then use those triples to answer specific research questions on the IRL platform. Though the triples themselves are a highly valuable asset that can be adopted by many, the digitized records and their RDF representations need to be adequately stored and preserved according to best standards and guidelines to ensure they do not get lost over time; this problem had not previously been investigated within the project. This paper reports on the creation of Qualified Dublin Core metadata from those triples for ingestion, together with the digitized register pages, into an adequate long-term digital preservation platform and repository. Rather than creating RDF only for the purposes of this project, we demonstrate how we can distill artifacts from the RDF that are fit for discovery, access, and even reuse via that repository, and how we elicit and conserve the knowledge and memories about Ireland, its history and its culture contained in those register pages.

Dolores Grant, Christophe Debruyne, Rebecca Grant, Sandra Collins
CoolMind: Collaborative, Ontology-Based Intelligent Knowledge Engineering for e-Society

The paper proposes a collaborative and ontology-based approach to knowledge engineering adapted to e-Society applications. The approach aims to provide intuitive user-system and user-content interaction via processing and visualization of Big Data and the resulting information. Moreover, the approach ensures secured access to personal data via semantic interoperability of security policies. The approach considers large-scale structured and unstructured content such as multimedia content (videos, images), web content, video games, sensor data, medical health records, medical databases, etc. We showcase the concept with an (ongoing) medical decision support system (DS-Med) targeted at real end users and real-world test data from around the city of Cluj-Napoca.

Ioana Ciuciu, Bazil Pârv
Improving Software Quality Using an Ontology-Based Approach

The paper aims to define a novel methodology to evaluate the quality of software systems. The methodology applies the following steps: evaluating object-oriented metrics, selecting a quality category, and evaluating that quality category. The approach to software quality evaluation is based on an ontology defined for the ISO 25010 standard. The quality evaluation is enhanced by taking several object-oriented (OO) metrics into consideration and including them in the ontology. A case study is presented on the impact of OO metrics on the reliability category. The paper presents preliminary results on the methodology and concludes on the benefits of using an ontology to derive new facts in the context of software quality assessment.

Simona Motogna, Ioana Ciuciu, Camelia Serban, Andreea Vescan
Historical Data Preservation and Interpretation Pipeline for Irish Civil Registration Records

Semantic Web technologies give us the opportunity to understand today’s data-rich society and provide novel means to explore our past. Civil registration records such as birth, death, and marriage registers contain a vast amount of implicit information which can be revealed by structuring, linking and combining that information with other datasets and bodies of knowledge. In the Irish Record Linkage (IRL) Project 1864-1913, we have developed a data preservation and interpretation pipeline supported by a dedicated semantic architecture. This three-layered pipeline is designed to capture separate concerns from the perspective of multiple disciplines such as archival studies, history and data science. In this study, our aim is to demonstrate best practices in digital archives, while facilitating innovative new methodologies in historical research. The designed pipeline is executed with a dataset of 4090 registered Irish death entries from selected areas of south Dublin City.

Oya Beyan, P. J. Mealy, Dolores Grant, Rebecca Grant, Natalie Harrower, Ciara Breathnach, Sandra Collins, Stefan Decker
Academic Search. Methods of Displaying the Output to the End User

The present paper addresses the task of structured representation of search results in a developed academic search system. The main contribution of our work is the integration into the search process of both: 1) structured visualization of search results (clustering and a topic graph); and 2) information about the yearly dynamics of topics for query results. The latter makes the system more flexible and suitable for data monitoring. The system can be used not only for academic search, but also for foresight studies.

Svetlana Popova, Ivan Khodyrev, Artem Egorov, Vera Danilova
Estimating Keyphrases Popularity in Sampling Collections

The problem of structured representation of data has high practical value and is particularly relevant due to the growth of data volume. Methods of data representation such as topic graphs, concept trees, etc. are a convenient way to represent information retrieved from a collection of documents. In this paper, we research some aspects of using a collection of samples for the evaluation of the popularity of concepts. The latter can be used to visualize concept significance and to rank concepts in tasks of structured representation.

Multi-word phrases are considered as concepts. We address the case where these phrases are automatically extracted from the processed document collection. The popularity of a concept (which can, e.g., be presented visually as the size of a vertex in the topic graph) is judged by the number of documents containing the phrase. We elaborate the case where a sample from the document collection is used to estimate concept popularity, and we estimate how permissible such a representation of the data is, i.e. how well it reflects the proportions of the numbers of documents containing specific concepts. A frequency-based criterion and the procedure for its calculation are described in the paper. This helps to estimate the expedience of representing a concept's popularity relative to the popularity of other concepts. The main aspect here is to establish criteria for when the relations between concept popularity values in a sample are the same as in the population, and a criterion for selecting the n high-frequency concepts whose sample rank and frequency distributions are the same as in the population.

Svetlana Popova, Gabriella Skitalinskaya, Ivan Khodyrev
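
The sampling question studied here can be made concrete with a small sketch: compare document frequencies of phrases in a full collection and in a random sample, and check whether the ranking is preserved. The corpus and phrases below are illustrative.

```python
# Sketch of the sampling question: does the document frequency of phrases
# in a random sample preserve their ranking in the full collection?
# Corpus and phrases are illustrative.
import random

def doc_frequency(docs: list[str], phrase: str) -> int:
    """Number of documents containing the phrase (the popularity measure)."""
    return sum(1 for d in docs if phrase in d)

corpus = (["machine learning is popular"] * 60 +
          ["topic graph visualization"] * 30 +
          ["concept tree methods"] * 10)
sample = random.sample(corpus, 30)      # the sampling collection

for phrase in ["machine learning", "topic graph", "concept tree"]:
    print(phrase, doc_frequency(corpus, phrase), doc_frequency(sample, phrase))
```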
Semantic HMC: Ontology-Described Hierarchy Maintenance in Big Data Context

One of the biggest challenges in Big Data is the exploitation of value from large volumes of data that are constantly changing. To exploit value, one must focus on extracting knowledge from these Big Data sources. To extract knowledge and value from unstructured text we propose a Hierarchical Multi-Label Classification process called Semantic HMC, which uses ontologies to describe the predictive model, including the label hierarchy and the classification rules. To avoid overloading the user, this process automatically learns the ontology-described label hierarchy from a very large set of text documents. This paper presents a maintenance process for the ontology-described label hierarchy relations with regard to a stream of unstructured text documents in the context of Big Data, which incrementally updates the label hierarchy.

Rafael Peixoto, Christophe Cruz, Nuno Silva
A Conceptual Relations Reference Model to the Construction and Assessment of Lightweight Ontologies

Lightweight ontologies are increasingly used by domain experts in a variety of activities. The development of these knowledge representation artefacts carries the challenge of properly conceptualising the considered reality/situation. In particular, the elicitation of conceptual relations is widely acknowledged in the literature as the most difficult part of the process. This paper proposes a technique to support domain experts in eliciting conceptual relations when developing lightweight ontologies, by providing the means to develop and ascertain the consistency of the produced conceptualisation results towards their reusability. Its innovation comes from the development of a reference model for conceptual relations elicitation.

Cristóvão Sousa, António Soares
Motivators and Deterrents for Data Description and Publication: Preliminary Results (Short Paper)

In the recent trend of data-intensive science, data publication is essential, and institutions have to promote it among their researchers. Over the past decade, institutional repositories for publications have become widespread, and the motivations for depositing in them are well understood. The situation is quite different for data, as we argue on the basis of five years of experience with research data management at the University of Porto. We approach research data management from a disciplined yet flexible point of view, focusing on domain-specific metadata models embedded in intuitive tools, to make it easier for researchers to publish their datasets. We use preliminary data from a recent experiment in data publishing to identify motivators and deterrents for data publishing.

Cristina Ribeiro, João Rocha da Silva, João Aguiar Castro, Ricardo Carvalho Amorim, Paula Fortuna

Mobile and Social Computing for Collaborative Interactions (MSC) 2015: MSC 2015 PC Co-Chairs’ Message

Frontmatter
The mATHENA Inventory for Free Mobile Assistive Technology Applications

The entry of smartphones and tablets into the market yields new opportunities in the domain of Assistive Technology (AT) for persons with disabilities. Searching for mobile AT applications that fulfill specific user needs is not an easy task for end-users, their facilitators, or professionals in the area of rehabilitation. Even when they finally find what they are looking for, a number of questions are raised regarding the reliability, stability, compatibility and functionality of the AT applications; these questions can be answered safely only by a team of AT experts. In this work we present the methodological approach for the design and development of the mATHENA web-based inventory, which aims to make the search for and selection of free mobile AT applications simple and sound. This methodology is based on the consistent and well-documented presentation of the information for each mobile AT application after it has been tested in an AT lab. mATHENA offers social interaction services for its diverse target groups. Moreover, we compare the advantages of mATHENA with the functionalities of six other inventories of AT applications. Currently, mATHENA includes 420 free mobile AT applications, carefully selected from a total of 1,100.

Georgios Kouroupetroglou, Spyridon Kousidis, Paraskevi Riga, Alexandros Pino
Improving Social Collaborations in Virtual Learning Environments

This paper shows how to use social collaborations in virtual learning environments to motivate students in their learning process through peer-to-peer recommendations of learning objects. A hybrid recommender system is proposed and implemented in order to provide personalized recommendations for university students. An important contribution of this work is to show how to combine different recommendation approaches and techniques in order to produce useful recommendations in a real-life scenario. Preliminary results indicate that information about groups of students, together with demographic information and previous evaluations of learning objects, processed with a combination of recommendation algorithms, proves useful for generating personalized recommendations of learning objects.
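
The abstract does not specify how the approaches are combined; one common scheme is a weighted hybrid that blends scores from independent recommenders. A minimal sketch under that assumption (all names, the weight, and the toy ratings are hypothetical, not the authors' system):

    def demographic_score(student, item, group_avg):
        # Average rating of the item within the student's demographic group
        # (0.0 when the group has not rated it).
        return group_avg.get((student["group"], item), 0.0)

    def peer_score(student, item, ratings):
        # Mean rating given to the item by the student's study-group peers.
        peer_r = [r for (who, it), r in ratings.items()
                  if it == item and who in student["peers"]]
        return sum(peer_r) / len(peer_r) if peer_r else 0.0

    def hybrid_score(student, item, group_avg, ratings, w=0.4):
        # Weighted hybrid: blend demographic and peer evidence.
        return (w * demographic_score(student, item, group_avg)
                + (1 - w) * peer_score(student, item, ratings))

    student = {"group": "eng-1st-year", "peers": {"ana", "luis"}}
    group_avg = {("eng-1st-year", "lo42"): 4.2}
    ratings = {("ana", "lo42"): 5.0, ("luis", "lo42"): 3.0}
    print(hybrid_score(student, "lo42", group_avg, ratings))  # 0.4*4.2 + 0.6*4.0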

Daniel González, Regina Motz, Libertad Tansini
A Platform for Measuring Susceptibility and Authority of Social Network Members

Social networks (SNs) offer the opportunity to define a new generation of services focused on users’ needs and based on the SN topology and the temporal evolution of the available information about their members. In this context we propose a platform for measuring individual features of members, namely authority and susceptibility. The platform leverages a semantics-based approach to represent members’ interests and diffusion theory to model the propagation of interests. Finally, we present a case study concerning the American Physical Society (APS).
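
The platform’s semantics-based measures are not detailed in the abstract; purely as a sketch of one simple temporal proxy, a member’s susceptibility could be estimated as the fraction of their topic adoptions that some neighbor adopted earlier (the function, the adoption log format, and the toy data are all hypothetical):

    def susceptibility(member, adoptions, neighbors):
        # Fraction of the member's topic adoptions that some neighbor
        # adopted strictly earlier (toy temporal-precedence proxy).
        own = [(topic, t) for (who, topic, t) in adoptions if who == member]
        if not own:
            return 0.0
        preceded = sum(
            1 for topic, t in own
            if any(who in neighbors[member] and t2 < t
                   for (who, tp, t2) in adoptions if tp == topic)
        )
        return preceded / len(own)

    # Toy adoption log: (member, topic, timestamp).
    adoptions = [("alice", "graphene", 1), ("bob", "graphene", 3),
                 ("bob", "topology", 2)]
    neighbors = {"bob": {"alice"}, "alice": {"bob"}}
    print(susceptibility("bob", adoptions, neighbors))  # 0.5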

Gregorio D’Agostino, Antonio De Nicola
Multimodal Systems: An Excursus of the Main Research Questions

Multimodal systems integrate multiple interaction modalities (e.g. speech, sketch, handwriting), enabling users to benefit from communication closer to human-human communication. To develop multimodal systems, several research questions have been addressed in the literature from the early 1980s to the present day, such as multimodal fusion, recognition, dialogue interpretation and disambiguation, fission, and context adaptation. This paper investigates studies from the last decade, analyzing the evolution of the approaches applied to the main research questions related to multimodal fusion, interpretation, and context adaptation. As a result, the paper provides a discussion of the reasons that led attention to shift from one methodology to another.

Maria Chiara Caschera, Arianna D’Ulizia, Fernando Ferri, Patrizia Grifoni
Co-creativity Process by Social Media within the Product Development Process

Social media are computer-mediated communication tools frequently used for co-creating value because they promote communication, interaction and collaboration among all actors. This paper describes the co-creativity process by social media within the product development process, analysing consumers’ involvement in its different steps: product design and production, product improvement before market introduction, product sale, and product innovation.

Alessia D’Andrea, Fernando Ferri, Patrizia Grifoni, Tiziana Guzzo

Ontologies, DataBases, and Applications of Semantics (ODBASE) 2015 Posters: ODBASE 2015 PC Co-Chairs’ Message

Frontmatter
Towards Flexible Similarity Analysis of XML Data

The problem of supporting similarity analysis of XML data is a major problem in the data fusion research area. Several approaches have been proposed in the literature, but lack of flexibility remains a hard challenge to be faced, especially in modern Cloud Computing environments. Inspired by this motivation, we propose SemSynX, a novel technique for supporting similarity analysis of XML data via semantic and syntactic heterogeneity/homogeneity detection. SemSynX retrieves several similarity scores over input XML documents, thus enabling flexible management and “customization” of similarity tools over XML data. In particular, the proposed technique is highly customizable: it permits the specification of thresholds for the requested degree of similarity for paths and values, as well as for the degree of relevance for path and value matching. Selection of paths and semantics-based comparison of label content are also supported. This makes it possible to “adjust” the similarity analysis depending on the nature of the input XML documents.
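
SemSynX itself is not reproduced here; as a minimal sketch of the general idea of threshold-based path similarity between XML documents (the Jaccard measure, the threshold, and the toy documents are assumptions, not the authors' method):

    import xml.etree.ElementTree as ET

    def paths(xml_text):
        # Collect the set of root-to-node label paths of an XML document.
        def walk(node, prefix):
            p = prefix + "/" + node.tag
            acc = {p}
            for child in node:
                acc |= walk(child, p)
            return acc
        return walk(ET.fromstring(xml_text), "")

    def path_similarity(doc_a, doc_b, threshold=0.5):
        # Jaccard similarity of path sets; returns (score, passes-threshold).
        a, b = paths(doc_a), paths(doc_b)
        score = len(a & b) / len(a | b) if a | b else 1.0
        return score, score >= threshold

    a = "<book><title/><author/></book>"
    b = "<book><title/><year/></book>"
    print(path_similarity(a, b))  # (0.5, True): 2 shared of 4 distinct paths

A configurable threshold of this kind is one way to “adjust” how strict the structural comparison is for a given pair of input documents.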

Jesús M. Almendros-Jiménez, Alfredo Cuzzocrea
Backmatter
Metadata
Title
On the Move to Meaningful Internet Systems: OTM 2015 Workshops
Edited by
Ioana Ciuciu
Hervé Panetto
Christophe Debruyne
Alexis Aubry
Peter Bollen
Rafael Valencia-García
Alok Mishra
Anna Fensel
Fernando Ferri
Copyright Year
2015
Electronic ISBN
978-3-319-26138-6
Print ISBN
978-3-319-26137-9
DOI
https://doi.org/10.1007/978-3-319-26138-6
