
2013 | Book

On the Move to Meaningful Internet Systems: OTM 2013 Workshops

Confederated International Workshops: OTM Academy, OTM Industry Case Studies Program, ACM, EI2N, ISDE, META4eS, ORM, SeDeS, SINCOM, SMS, and SOMOCO 2013, Graz, Austria, September 9 - 13, 2013, Proceedings

Edited by: Yan Tang Demey, Hervé Panetto

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This volume constitutes the refereed proceedings of the international workshops, Confederated International Workshops: OTM Academy, OTM Industry Case Studies Program, ACM, EI2N, ISDE, META4eS, ORM, SeDeS, SINCOM, SMS and SOMOCO 2013, held as part of OTM 2013 in Graz, Austria, in September 2013. The 75 revised full papers presented together with 12 posters and 5 keynotes were carefully reviewed and selected from a total of 131 submissions. The papers are organized in topical sections on: On The Move Academy; Industry Case Studies Program; Adaptive Case Management and Other Non-workflow Approaches to BPM; Enterprise Integration, Interoperability and Networking; Information Systems in Distributed Environment; Methods, Evaluation, Tools and Applications for the Creation and Consumption of Structured Data for the e-Society; Fact-Oriented Modeling; Semantics and Decision Making; Social Media Semantics; Social and Mobile Computing for Collaborative Environments; Cooperative Information Systems; Ontologies, DataBases and Applications of Semantics.

Table of contents

Frontmatter

On The Move Academy (OTMA) 2013

The 10th OnTheMove Academy Chairs’ Message

The term ‘academy’, originating from Greek antiquity, implies a strong mark of quality and excellence in higher education and research that is upheld by its members. In the past ten editions, the OTM Academy has innovated its way of working every year to uphold its mark of quality and excellence. OTMA Ph.D. students publish their work in a highly reputed publication channel, namely the Springer LNCS OTM workshops proceedings. The OTMA faculty members, who are well-respected researchers and practitioners, critically reflect on the students’ work in a positive and inspiring atmosphere, so that the students learn to improve not only their research capacities but also their presentation and writing skills. OTMA participants learn how to review scientific papers. They also enjoy ample possibilities to build and expand their professional network thanks to access to all OTM conferences and workshops. And last but not least, an ECTS credit certificate rewards their hard work.

Peter Spyns, Anja Metzner
On the Likelihood of an Equivalence

Co-references are traditionally used when integrating data from different datasets. This approach has various benefits, such as fault tolerance, ease of integration and traceability of provenance; however, it often results in the problem of entity consolidation, i.e., of objectively stating whether all the co-references really do refer to the same entity and, when this is the case, whether they all convey the same intended meaning. Relying on the sole presence of a single equivalence (owl:sameAs) statement is often problematic and sometimes may even cause serious trouble. It has been observed that one could use a numerically weighted measure to indicate the likelihood of an equivalence, but then the hard question arises of where precisely these values will come from. To answer this question we propose a methodology based on a graph clustering algorithm.

Giovanni Bartolomeo, Stefano Salsano, Hugh Glaser
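The weighting idea in the abstract above can be illustrated with a toy example. The function below is an invented stand-in for the authors' actual graph clustering algorithm: it scores an owl:sameAs claim between two co-references by the Jaccard overlap of their neighborhoods in the co-reference graph.

```python
# Invented sketch (not the paper's method): co-references that share many
# common links in the co-reference graph are more likely to be equivalent.

def neighbor_sets(edges):
    """Build an undirected adjacency map from a list of edges."""
    nbrs = {}
    for a, b in edges:
        nbrs.setdefault(a, set()).add(b)
        nbrs.setdefault(b, set()).add(a)
    return nbrs

def equivalence_likelihood(edges, a, b):
    """Jaccard overlap of the two nodes' neighborhoods (a and b excluded)."""
    nbrs = neighbor_sets(edges)
    na = nbrs.get(a, set()) - {b}
    nb = nbrs.get(b, set()) - {a}
    if not (na or nb):
        return 0.0
    return len(na & nb) / len(na | nb)

edges = [("x", "y"), ("x", "p"), ("y", "p"), ("x", "q"), ("y", "q"), ("z", "r")]
print(equivalence_likelihood(edges, "x", "y"))  # -> 1.0 (p and q shared)
print(equivalence_likelihood(edges, "x", "z"))  # -> 0.0 (nothing shared)
```

A real system would combine such structural scores with clustering over the whole graph rather than scoring pairs in isolation.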
Development of Information Systems for Transparent Corporate Sustainability Using Data-Driven Technologies

Corporations, as influential players in a global environment, face increased pressure from society and governments to assume responsibility for the consequences of their corporate actions. Although information technologies advance rapidly, data collection still heavily relies on manual input and static reports, and to a broad extent the integration of stakeholders is not yet the general rule. Data-driven technologies like Participatory Sensing methods, Linked Open Data practices or Geographical Information Systems are useful methods to collect, aggregate and disseminate information. This paper outlines the problem scope, the solution approach and the research plan for my doctoral thesis, which explores the potential of these technologies to improve the transparency of corporate sustainability using a design-science based approach. Experiences gained by designing and evaluating IT artifacts are expected to bring new insights into the relation of IT and corporate sustainability, where existing research is still sparse.

Lisa Madlberger
Cooperative and Fast-Learning Information Extraction from Business Documents for Document Archiving

Automatic information extraction from scanned business documents is especially valuable in the application domain of document management and archiving. Although current solutions for document classification and extraction work well, they still require a high on-site configuration effort from domain experts or administrators. Especially small office/home office (SOHO) users and private individuals often do not use such systems because of the need for configuration and the long training periods required to reach acceptable extraction rates. Therefore, we present a solution for information extraction from scanned business documents that fits the requirements of these users. Our approach is highly adaptable to new document types and index fields and uses only a minimum of training documents to reach extraction rates comparable to related work and manual document indexing. By providing a cooperative extraction system, which allows sharing extraction knowledge between participants, we furthermore want to minimize the amount of user feedback and increase the acceptance of such a system.

A first evaluation of our solution on a document set of 12,500 documents with 10 commonly used fields shows competitive results above 85% F1-measure. Results above 75% F1-measure are already reached with a minimal training set of only one document per template.

Daniel Esser
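For readers unfamiliar with the metric quoted above, the F1-measure is the harmonic mean of precision and recall over extracted fields. A minimal sketch, with invented counts:

```python
def f1(tp, fp, fn):
    """F1-measure: harmonic mean of precision and recall.

    tp: fields extracted correctly, fp: spurious extractions,
    fn: fields the system missed.
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical tally: 900 correct fields, 100 spurious, 150 missed.
print(round(f1(900, 100, 150), 3))  # -> 0.878
```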
Sustainable Supply Chain Management: Improved Prioritization of Auditing Social Factors, Leveraging Up-to-Date Information Technology

For companies, sustainability issues in the supply chain can cause severe problems and reputational damage. Social sustainability in particular (e.g. human rights) is a problem and has only been narrowly addressed in academic literature [1].

In business, the increased risk of issues in supply chains has amplified the need for auditing social responsibility at supplier locations [2]. As supply chains can include hundreds of suppliers, it seems impossible for companies to evaluate all factors in depth, let alone first-hand [3]. Because forgoing any interaction would require too much trust, a prioritized approach to the auditing of suppliers appears necessary.

The thesis addresses this prioritization problem of social sustainability auditing in international goods supply chains based on a design science approach to reduce risk and save costs [4]. Thus, the thesis’ core question is:

How can the prioritization of auditing activities of socially sustainable supply chain management be improved, leveraging up-to-date information technology?

Hence, a new prioritization software toolset could be an answer.

Andreas Thöni

Industry Case Studies Program 2013

Industry Case Studies Program 2013 PC Co-Chairs Message

Cloud computing, service-oriented architecture, business process modelling, enterprise architecture, enterprise integration, semantic interoperability: what is an enterprise systems administrator to do with the constant stream of industry hype surrounding him, constantly bathing him with (apparently) new ideas and new “technologies”? It is nearly impossible to keep up, and the academic literature does not help solve the problem, with hyped “technologies” catching on in the academic world just as easily as in the industrial world. The most unfortunate thing is that these technologies are actually useful, and the press hype only hides that value. What the enterprise information manager really cares about is integrated, interoperable infrastructures that support cooperative information systems, so he can deliver valuable information to management in time to make correct decisions about the use and delivery of enterprise resources, whether those are raw materials for manufacturing, people to carry out key business processes, or the management of shipping choices for correct delivery to customers.

Hervé Panetto

Platforms and Frameworks

A Cooperative Social Platform to Elevate Cooperation to the Next Level – The Case of a Hotel Chain
Short Paper

This paper presents a cooperative social platform which was developed to accommodate the needs of a hotel chain. The web-based platform aims at interconnecting strategic functions that previously required the use of various tools that lacked interoperability. The platform encourages isolated managers of the hotel chain to interact and actively participate in strategy formation and implementation. It provides novel functions that enhance the collective intelligence within the hotel chain, and enables the dynamic ‘sensing’ of and ‘acting’ upon customer feedback. Results from the case study show that the platform enhanced bottom-up change and increased the pace of innovation, while the hotel chain became more cohesive and demonstrated evidence of ‘self-organization’.

Emmanuel Kaldis, Liping Zhao, Robert Snowdon
Applying BPM 2.0 in BPM Implementation Projects
Short Paper

Businesses today have to adapt their business process models to new market changes instantaneously. New social BPM approaches leverage collaborative Enterprise 2.0 methods and tools in order to empower process participants to improve their business processes continuously. By enabling those who are running the processes to adapt “their” processes, businesses can respond to new market demands faster. A well-tested example of such an approach is BPM 2.0. This contribution reports on a BPM implementation project conducted at Eisen-Fischer GmbH, which uses BPM 2.0 as the underlying methodology. It provides insights into the potentials and limitations of social BPM approaches like BPM 2.0. As a major objective of the BPM implementation project is to retain the existing ISO 9001 certification, this report assists businesses in a similar situation in assessing the potential social BPM may provide for their individual situation.

Matthias Kurz, Manfred Scherer, Angelika Pößl, Jörg Purucker, Albert Fleischmann
Information Centric Engineering (ICE) Frameworks for Advanced Manufacturing Enterprises

As revolutionary advances in Next Generation Internet technologies continue to emerge, the impact on industrial manufacturing practices globally is expected to be substantial. Collaboration among enterprises is becoming more commonplace. The design of next generation frameworks based on Information Centric Engineering (ICE) principles holds the potential to facilitate rapid and flexible collaboration among global industrial partners. This paper outlines the key components of such a framework in the context of an emerging industrial domain related to micro devices assembly. Discussions of pilot demonstrations using Next Generation and Future Internet frameworks related to the GENI and US Ignite initiatives are also provided.

J. Cecil

Interoperability Case Studies and Issues

A Cross-Scale Models Interoperability Problem: The Plate-Form(E)3 Project Case Study
Short Paper

This paper presents the Plate-Form(E)3 project, funded by the French National Research Agency (ANR), as a case study highlighting cross-scale model interoperability problems. This project involves public and private French organisations. It deals with the specification of a software platform for computing and optimising energetic and environmental efficiency for industries and territories. Its aim is the integration of specialised tools for analysing the sustainability of any process. One important topic addressed by this project concerns the interoperability issues that arise when interconnecting these tools (for modelling, simulating, and optimising) into the platform at different modelling scales (territory/plant/process/component). This paper highlights the interoperability issues raised by the heterogeneity of the related tools in the energetic and environmental efficiency context.

Alexis Aubry, Jean Noël, Daniel Rahon, Hervé Panetto
Integrated Management of Automotive Product Heterogeneous Data: Application Case Study

The product development process (PDP) in innovative companies is becoming more and more complex, encompassing many diverse activities and involving a large number of actors, spread across different professions, teams and organizations. One of the major problems is that development activities usually depend on many different inputs and influencing factors, and that the information needed to make the best possible decisions is either not documented or embodied in data that is spread over many different IT systems. In this context a suitable knowledge management approach is required that must ensure the integration and availability of required information along with the federation and interoperability of the different enterprise tools.

This paper presents a description of an industrial use case application where the innovative methodology researched in the iProd project is adopted. This methodology aims at improving the efficiency and quality of the Product Development Process (PDP) by means of a software ontological framework. The methodology is first briefly outlined and then its application is illustrated with an automotive application case developed together with Pininfarina. This case demonstrates how it is possible to capture and reuse the knowledge and design rationale of the PDP.

Massimo D’Auria, Marialuisa Sanseverino, Filippo Cappadona, Roberto D’Ippolito
iFeel_IM!: A Cyberspace System for Communication of Touch-Mediated Emotions

The paper focuses on the cyberspace communication system iFeel_IM!. Driven by the motivation to enhance social interactivity and the emotionally immersive experience of real-time messaging, we proposed the idea of reinforcing (intensifying) one’s own feelings and reproducing (simulating) the emotions felt by the partner through the specially designed system iFeel_IM!. Users can not only exchange messages but also emotionally and physically feel the presence of the communication partner (e.g., family member, friend, or beloved person). The paper also describes a novel portable affective haptic system, iTouch_IM!. The motivation behind the research is to provide emotionally immersive communication regardless of location. This system has the potential to bring a new level of immersion to mobile online communication.

Dzmitry Tsetserukou, Alena Neviarouskaya, Kazuhiko Terashima
Interoperability Assessment Approaches for Enterprise and Public Administration
Short Paper

The need for collaboration among organizations is a reality with which systems, managers and other stakeholders must deal. But this is not an exclusive concern of private administrations, since the increasing need for information exchange among government agencies, the supply of online services to citizens, and the cost reduction of public operations and transactions demand that government organizations be ready to provide an adequate interface to their users. This paper presents basic concepts of Enterprise Interoperability, some assessment models and eGovernment practices, models, definitions and concepts, providing an initial analysis of their basic properties and similarities and verifying some possible gaps that may exist.

José Marcelo A. P. Cestari, Eduardo R. Loures, Eduardo Alves Portela Santos

Enterprise Systems

Supporting Interoperability for Chaotic and Complex Adaptive Enterprise Systems

Living systems like enterprises and enterprise networks not only want to survive but also want to grow and prosper. These organisational systems adapt to changing environments and proactively change the system itself. Continuous evolution of (parts of) these enterprise systems requires diversity, but this heterogeneity is a source of non-interoperability. In this article, interoperability and enterprise systems are analysed through the lenses of chaos theory and complex adaptive systems theory, and requirements for continuous adaptation and organisational learning to maintain interoperability are identified.

Georg Weichhart
System Integration for Intelligent Systems

The paper describes a model of system integration for designing intelligent systems. Systems for services and industries are expected to be coordinated internally and harmonized with the external environment and users. However, integration of such systems is difficult in general, since they contain a number of processes with multiple tasks of different priority and timing. In this paper, we propose a model of system integration based on process categorization. We then validate the model in terms of dimension, hierarchy and symbolization by comparison with case studies in the field of system intelligence. Discussion of the proposed framework gives insight into designing interoperable infrastructures.

Ryo Saegusa

International Workshop on Adaptive Case Management and Other Non-workflow Approaches to BPM (ACM) 2013

ACM 2013 PC Co-Chairs Message

The sign of our time is the amazing speed with which changes in the business world happen. This requires the enterprises of today, and even more those of the future, to become agile, i.e. capable of adjusting themselves to changes in the surrounding world. Agility requires focus to be moved from optimization to collaboration and creativity. At the same time, current process thinking continues to be preoccupied with the issue of optimizing performance through standardization, specialization, and automation. A focus on optimization has resulted in the workflow view (in which a process is considered as a flow of operations) emerging as predominant in the field of Business Process Management (BPM). Besides requiring a long time to develop, a predefined sequence of events in a workflow can reduce the creativity of people participating in the process and thereby result in poor performance.

Irina Rychkova, Ilia Bider, Keith Swenson

Experience Based Papers

Patterns Boosting Adaptivity in ACM

Adaptivity is the ability to adjust behavior to changes in context. In enterprise ACM, the needs for control, uniformity, and analysis meet end-users’ needs to adapt to situations at hand. The interplay of different layers of declaratively defined templates can meet these needs in a flexible and sustainable manner. A data-centric architecture, with a library of process snippets building on standard activities and actions and a model of the organization with roles, provides an operational quality system. By combining this with a standardized declarative modeling of the case objects with codes and soft typing, we achieve a flexible platform for domain-specific enterprise ACM systems. The result is a case- and goal-driven approach to adaptivity in ACM, taking into account knowledge workers’ autonomy and the information context of the work. This paper discusses means of achieving adaptivity in ACM solutions and presents examples of practical realization by these means.

Helle Frisak Sem, Thomas Bech Pettersen, Steinar Carlsen, Gunnar John Coll
CMMN Implementation in Executable Model of Business Process at Order-Based Manufacturing Enterprise

Agility is the capability of an enterprise to function in a highly competitive and dynamic business environment. To survive and develop successfully, companies should have flexible, adaptive business processes and a management system that enforces the strategy and ensures achievement of target (commercial) goals. Case management is a paradigm for supporting flexible and knowledge-intensive business processes. It is strongly based on data as the typical product of these processes. This paper presents an implementation of this paradigm at a manufacturing enterprise based upon the principles of the emerging CMMN standard, a declarative approach to business process modeling, and systems theory. The implementation uses first-order logic (Prolog) and elements of lambda calculus. It has been in operation for more than 3 years.

Vadim Kuzin, Galina Kuzina
Process Analysis and Collective Behavior in Organizations: A Practitioner Experience

The analysis of organizational processes can support the complexity and heterogeneity of companies if it is able to capture and organize the social behavior of enterprises. The phases (Orienteering, Modeling and Mapping) and the key characteristics of an analysis approach are described, focusing on results and returns. Results are the expected and committed outputs of the analysis process, such as the modeling of workflow processes. Returns are the impacts of the process analysis on individual and social behavior, such as awareness and motivation. Returns support the definition of the key assets of an organization, influence the modeling of workflow processes and are useful to capture the non-workflow components of organizational processes. This approach has been applied through long-term activity in public and private enterprises. An experience is described, presenting strong and weak points of the approach, and differences and similarities, in particular between workflow and non-workflow processes.

Paola Mauri

Research and New Theoretical Ideas

Supervision of Constraint-Based Processes: A Declarative Perspective

Constraint-based processes require a set of rules that limit their behavior to certain boundaries. Declarative languages are well suited to modeling these processes precisely because they require the declaration of few or no business rules. These languages define the activities that must be performed to produce the expected results, but do not define exactly how these activities should be performed. The present paper proposes a new approach to dealing with constraint-based processes. The proposed approach is based on supervisory control theory, a formal foundation for building controllers for discrete-event systems. The controller proposed in this paper monitors and restricts execution sequences of tasks such that constraints are always obeyed. We demonstrate that our approach can be used as a declarative language for constraint-based processes.

Sauro Schaidt, Eduardo de Freitas Rocha Loures, Agnelo Denis Vieira, Eduardo Alves Portela Santos
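A supervisor in the sense described above observes the execution history and enables only those tasks whose firing cannot violate the declared constraints. A minimal sketch, with invented task names and two illustrative constraint kinds (precedence and at-most-once), not the paper's formalism:

```python
def enabled_tasks(tasks, history, precedence, at_most_once):
    """Return the tasks the supervisor allows next.

    precedence:   task -> set of tasks that must already have occurred
    at_most_once: tasks that may occur at most one time
    """
    allowed = []
    for task in tasks:
        if task in at_most_once and task in history:
            continue  # firing again would violate the cardinality constraint
        if not precedence.get(task, set()) <= set(history):
            continue  # a required predecessor has not happened yet
        allowed.append(task)
    return allowed

tasks = ["register", "assess", "approve"]
precedence = {"assess": {"register"}, "approve": {"assess"}}
once = {"approve"}
print(enabled_tasks(tasks, [], precedence, once))            # ['register']
print(enabled_tasks(tasks, ["register"], precedence, once))  # ['register', 'assess']
```

Supervisory control theory additionally distinguishes controllable from uncontrollable events and proves properties such as non-blocking, which this sketch omits.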
Dynamic Context Modeling for Agile Case Management

Case Management processes are characterized by their high unpredictability and thus cannot be handled following traditional process- or activity-centered approaches. The Adaptive Case Management paradigm proposes an alternative data-centered approach for managing such processes. In this paper, we elaborate on this approach and explore the role of context data in Case Management. We use a state-oriented representation of the process that allows us to incorporate contextual information in a systematic and transparent way, leading towards agile case management.

Manuele Kirsch-Pinheiro, Irina Rychkova
Adaptive Case Management as a Process of Construction of and Movement in a State Space

Despite a number of years of experience, adaptive case management (ACM) still does not have a theory that would differentiate it from other paradigms of business process management and support. The known attempts to formalize Case Management do not seem to help much in creating an approach that could be useful in practice. This paper suggests an approach to building such a theory based on a generalization of what is used in practice on the one hand and the state-oriented view on business processes on the other. In practice, ACM systems use a number of ready-made templates that are picked up and filled in as necessary for the case. The state-oriented view considers a process instance/case as a point moving in a specially constructed state space. This paper suggests considering a case template as the definition of a sub-space, and picking different templates on the fly as constructing the state space while moving in it as the templates are filled in. The result is similar to what in control-flow based theories is considered as a state space with a variable number of dimensions. Besides suggestions for building a theory, the paper demonstrates the usage of the theory on an example.

Ilia Bider, Amin Jalali, Jens Ohlsson

Position Papers

Dynamic Condition Response Graphs for Trustworthy Adaptive Case Management

By trustworthy adaptive case management we mean that it should be possible to adapt processes and goals at runtime while guaranteeing that no deadlocks and livelocks are introduced. We propose to support this by applying a formal declarative process model, DCR Graphs, and exemplify its operational semantics that supports both run-time changes and formal verification. We show how these techniques are being implemented in industry as a component of the Exformatics case management tools. Finally we discuss the planned future work, which will aim to allow changes to be tested for conformance with respect to policies specified either as linear time logic (LTL) or DCR Graphs, to extend the language with time and data, and to offer extended support for cross-organizational case management systems.

Thomas Hildebrandt, Morten Marquard, Raghava Rao Mukkamala, Tijs Slaats
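The operational semantics mentioned above can be sketched in a few lines. The toy model below covers only DCR condition and response relations (include/exclude relations and the full acceptance criterion for infinite runs are omitted), and the event names are invented:

```python
# Minimal, illustrative DCR Graph runtime: conditions block an event until
# its prerequisites have executed; responses create pending obligations.

class DCRGraph:
    def __init__(self, events, conditions, responses):
        self.events = set(events)
        self.conditions = conditions  # event -> events that must precede it
        self.responses = responses    # event -> events that become pending
        self.executed, self.pending = set(), set()

    def enabled(self, event):
        return (event in self.events
                and self.conditions.get(event, set()) <= self.executed)

    def execute(self, event):
        assert self.enabled(event), f"{event} is blocked by a condition"
        self.executed.add(event)
        self.pending.discard(event)
        self.pending |= self.responses.get(event, set())

    def accepting(self):
        # A finite run is accepting when no response obligation is pending.
        return not self.pending

g = DCRGraph(
    events={"apply", "review", "decide"},
    conditions={"review": {"apply"}, "decide": {"review"}},
    responses={"apply": {"review"}, "review": {"decide"}},
)
g.execute("apply")          # makes 'review' pending
print(g.enabled("decide"))  # False: its condition 'review' not yet executed
g.execute("review")
g.execute("decide")
print(g.accepting())        # True: no pending responses remain
```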
Setup and Maintenance Factors of ACM Systems

Adaptive Case Management (ACM) is information technology for the secure and transparent management of structured and unstructured business processes, consisting of data, content, related work tasks and rules executed towards well-defined process goals. Thus, it goes beyond combining the benefits of workflow diagrams with ad-hoc task mechanisms. One of the notorious weaknesses of classical workflow technology is the effort experts must invest to obtain a sufficiently complete specification of the process to create an executable, which typically takes several months. In contrast, ACM provides goal-oriented mechanisms that enable performers to define and execute work tasks ad hoc. In this paper, based on the definition of the ACM concepts, we analyze which setup steps have to be conducted for an ACM system in a typical scenario from the service industry. Our contribution is an identification of the major factors that influence the setup initiated by experts and the maintenance performed by business users.

Thanh Thi Kim Tran, Max J. Pucher, Jan Mendling, Christoph Ruhsam

Workshop on Enterprise Integration, Interoperability and Networking (EI2N) 2013

EI2N 2013 PC Co-Chairs Message

After the successful seventh edition in 2012, the eighth edition of the Enterprise Integration, Interoperability and Networking workshop (EI2N’2013) has been organized as part of the OTM’2013 Federated Conferences in Graz, Austria. The workshop is co-sponsored by the IFAC Technical Committee 5.3 “Enterprise Integration and Networking” (main sponsor) and IFAC TCs 3.1, 3.2, 3.3, 5.2, 5.4 and 9.5, the IFIP TC 8 WG 8.1 “Design and Evaluation of Information Systems”, the SIG INTEROP Grande-Région on “Enterprise Systems Interoperability”, and the French CNRS National Research Group GDR MACS.

Alexis Aubry, J. Cecil, Kazuhiko Terashima, Georg Weichhart

Architectures and Platforms

Towards a Framework for Inter-Enterprise Architecture to Boost Collaborative Networks

A complete Inter-Enterprise Architecture should conform to a framework, a methodology and a modelling language. In this sense, this paper proposes an initial Framework for Inter-Enterprise Architecture (FIEA), which organizes, stores, classifies and communicates at a conceptual level the elements of the Inter-Enterprise Architecture (IEA) and their relationships, ensuring their consistency and integrity. The FIEA provides a clear picture of the elements and perspectives that make up the collaborative network and their inter-relationships, supported by Internet-based technology for its inter-operation.

Alix Vargas, Andrés Boza, Llanos Cuenca, Angel Ortiz
A Modelling Approach to Support Enterprise Architecture Interoperability

In this paper, we elaborate on Enterprise Architecture (EA) and how it can be improved by using Enterprise Interoperability (EI) requirements. We describe how enterprise interoperability is related to EA concepts, especially by analysing the definition and foundation of interoperability and highlighting the relationships with EA. We then propose a conceptual model that defines the enterprise architecture interoperability domain. In doing so, conceptual descriptions of systems, interoperability problems and solutions are identified. The designed model supports inference for decision aid related to interoperability. Finally, we illustrate our approach by means of a case in the automotive industry.

Wided Guédria, Khaled Gaaloul, Yannick Naudet, Henderik A. Proper
OptiVE: An Interactive Platform for the Design and Analysis of Virtual Enterprises

OptiVE is a platform for designing and analyzing virtual enterprises. OptiVE has two modes of operation. In composition mode, business entrepreneurs can define elementary enterprises as well as create complex virtual enterprises from enterprises already registered in the system. In analysis mode, business analysts can explore the structure and properties of registered enterprises and ask the system to optimize them: find a particular combination of participants and a specific production path that will deliver the best outcome (produce the target product at the lowest overall cost). OptiVE is founded on a formal model with rigorous syntax and semantics and uses mixed integer linear programming (MILP) to find its optimal solutions. A prototype implementation of OptiVE is also described. The system insulates its users from technical details, offering an intuitive graphical user interface for operating in either mode.

Yun Guo, Alexander Brodsky, Amihai Motro
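The abstract says OptiVE finds cost-optimal production paths with MILP. As a toy stand-in only, exhaustive search over invented per-stage participants illustrates what "lowest overall cost" optimization means; a real solver scales to problems where enumeration is infeasible:

```python
# Illustrative only: pick one participant per production stage so that the
# total cost of producing the target product is minimal.

from itertools import product

# One dict per stage: candidate participant -> cost contribution (invented).
stages = [
    {"mill_a": 4.0, "mill_b": 3.5},        # raw material
    {"plant_x": 6.0, "plant_y": 6.5},      # assembly
    {"carrier_1": 2.0, "carrier_2": 1.5},  # shipping
]

def cheapest_path(stages):
    """Enumerate all participant combinations and keep the cheapest."""
    best = min(
        product(*(stage.items() for stage in stages)),
        key=lambda path: sum(cost for _, cost in path),
    )
    return [name for name, _ in best], sum(cost for _, cost in best)

path, total = cheapest_path(stages)
print(path, total)  # -> ['mill_b', 'plant_x', 'carrier_2'] 11.0
```

In an MILP formulation, each participant choice becomes a binary variable with one-per-stage constraints and the same cost sum as the objective.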

Engineering and Modelling

Dynamic Generation of Personalized Product Bundles in Enterprise Networks

Product bundling is a marketing strategy that concerns offering several products for sale as one combined product. Current technology mainly focuses on the creation of static bundles, which involves pre-computing product bundles and associated discounts. However, due to the inherent dynamism and constant change of current and potential customer information, as is particularly the case in enterprise networks, static product bundles prove to be inefficient. In this paper an approach for dynamic generation of personalized product bundles using agents is proposed. Our approach involves creating bundles based on substitution and complementarity associations between product items, and subsequently ranking the produced bundles according to individual preferences and history of each customer. The proposed approach has been implemented in e-Furniture, an agent-based system supporting networking of furniture and wood product manufacturing enterprises.

Anthony Karageorgos, Elli Rapti
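The two stages described above (bundle generation from complementarity associations, then ranking by individual preferences) can be sketched as follows. All item names and preference weights are invented, and real agents would also use substitution links and purchase history:

```python
# Toy sketch: generate candidate bundles from complementarity links, then
# rank them by a customer's summed preference for the bundled items.

complements = {"desk": {"chair", "lamp"}, "chair": {"cushion"}}

def generate_bundles(seed, complements):
    """Pair the seed item with each of its complements."""
    return [(seed, item) for item in sorted(complements.get(seed, ()))]

def rank(bundles, preferences):
    """Best-scoring bundle first; unknown items score 0."""
    return sorted(bundles,
                  key=lambda b: -sum(preferences.get(i, 0.0) for i in b))

bundles = generate_bundles("desk", complements)
prefs = {"desk": 1.0, "lamp": 0.9, "chair": 0.4}
print(rank(bundles, prefs)[0])  # -> ('desk', 'lamp')
```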
An Information Model for Designing Virtual Environments for Orthopedic Surgery

In this paper, the role of an information model in the design of virtual environments for orthopedic surgery is discussed. The engineering Enterprise Modeling Language (eEML) is used to model the relationships and precedence constraints in a specific set of orthopedic surgery activities. Our model focuses on a process referred to as LISS plating surgery (LISS: Less Invasive Stabilization System). This information model serves as a basis to develop two orthopedic surgical environments: a non-immersive virtual reality based environment and a haptic interface equipped virtual environment. These virtual environments can be used for training medical residents in specific orthopedic surgical processes.

J. Cecil, Miguel Pirela-Cruz
Process Integration and Design Optimization Ontologies for Next Generation Engineering

Recent years have seen a significant increase in ontology research in the field of engineering, driven by the need to capture, reuse and share domain knowledge between software agents and/or engineers.

Increasingly capable computing resources have considerably expanded the role of computer-aided engineering applications in the product development process, saving time and costs. In this context a key role is played by Process Integration and Design Optimization tools, which facilitate and automate the integration and interoperability of the different enterprise applications involved in the simulation of engineering tasks. However, all these tools share a severe limitation: they lack a semantic account of the tools they automate.

This paper proposes a platform-independent Process Integration and Design Optimization (PIDO) ontology that aims to provide an extensive mapping of the PIDO domain, with special attention to its adoption in software applications. A comparison with existing similar ontologies is provided, along with an explanation of the reasons behind the classes and relations introduced. Finally, a real application case is illustrated along with its main benefits.

Massimo D’Auria, Roberto D’Ippolito

Sustainable Interoperability and Interoperability for Sustainability

Modelling a Sustainable Cooperative Healthcare: An Interoperability-Driven Approach

Modern healthcare is confronted with serious issues that are threatening its viability and sustainability. Increasing costs and complexity, global population ageing and pandemics are major culprits of the healthcare quandary. In this context, effective interoperability of the participants in the healthcare effort becomes paramount. However, this is also a major challenge as unfortunately, healthcare institutions typically feature strong hierarchy and heterogeneity. As the pressure on healthcare resources and management cost is constantly increasing, governments can no longer rely on outdated ‘silo’ paradigms for managing population wellbeing. New cooperative and integrated models and procedures taking into account all essential cooperation aspects, elements, participants and their life cycle are necessary to drive cooperative healthcare sustainability. Based on previous research and applications, this paper argues that the necessary artefacts can be built using a life cycle-based, holistic paradigm enabled by advances in Interoperability, Enterprise Architecture and Collaborative Networks research and practice. The proposed modelling approach aims to provide a solid base for sustainable solutions to long and short-term challenges to population health and well-being.

Ovidiu Noran, Hervé Panetto
Sustainability and Interoperability: Two Facets of the Same Gold Medal

‘To sustain is to endure’ - that is, to be able to survive and continue to function in the face of significant change. The commonly accepted concept of ‘sustainability’ currently encompasses three main pillars: environmental, social/ethical and economic. In a metaphor of survival, they can be seen as water, food and air: one needs all three, only with varying degrees of urgency. In today's globally networked environment, it is becoming obvious that one cannot achieve environmental, social or economic sustainability of any artefact (be it physical or virtual, e.g. an enterprise, project, information system or policy) without achieving the ubiquitous ability of the artefact and its creators and users to exchange and understand shared information and, if necessary, perform processes on behalf of each other - capabilities usually defined as ‘interoperability’. Thus, sustainability relies on interoperability, while, conversely, interoperability as an ongoing concern relies for its existence on all three main pillars of sustainability. This paper aims to test the hypothesis that interoperability and sustainability are two inseparable and inherently linked aspects of any universe of discourse. To achieve this, it applies the dual sustainability/interoperability viewpoint to a variety of areas (manufacturing, healthcare, information and communication technology, and standardisation), analyses the results and synthesises conclusions and guidelines for future work.

Michele Dassisti, Ricardo Jardim-Goncalves, Arturo Molina, Ovidiu Noran, Hervé Panetto, Milan M. Zdravković

International Workshop on Information Systems in Distributed Environment (ISDE) 2013

ISDE 2013 PC Co-Chairs Message

Information Systems in Distributed Environments (ISDE) are rapidly becoming the norm in this era of globalization, thanks to advances in information and communication technologies. In distributed environments, business units collaborate across time zones, organizational boundaries, work cultures and geographical distances, leading to increasing diversification and growing complexity of cooperation among units. The main benefits expected from Distributed Software Development (DSD) are improved development-time efficiency, proximity to customers, and flexible access to greater and less costly resources. Although DSD is widely used, project managers and professionals face many challenges due to increased complexity, cultural differences and various technological issues. It is therefore crucial to understand current research and practice in these areas.

Alok Mishra, Jürgen Münch, Deepti Mishra

Architecture, Knowledge Management and Process in Distributed Information System

Distributed Software Development with Knowledge Experience Packages

In the software production process, a lot of knowledge is created but remains tacit, so it cannot be reused to improve the effectiveness and efficiency of these processes. This problem is amplified in the case of distributed production. In fact, distributed software development requires complex, context-specific knowledge regarding the particularities of different technologies, the potential of existing software, and the needs and expectations of users. This knowledge, gained during project execution, is usually tacit and is completely lost by the company when production is completed. Moreover, each time a new production unit is hired, despite the diversity of culture and capability among people, it is necessary to standardize the working skills and methods of the different teams if the company wants to maintain the quality level of its processes and products. In this context, we used the concept of Knowledge Experience Package (KEP), already specified in previous work, together with the tool built to support the KEP approach. In this work, we carried out an experiment in an industrial context in which we compared software development supported by KEPs with development achieved without them.

Pasquale Ardimento, Marta Cimitile, Giuseppe Visaggio
Resolving Platform Specific Models at Runtime Using an MDE-Based Trading Approach

Dynamic service composition provides versatility and flexibility features for those component-based software systems which need to self-adapt themselves at runtime. For this purpose, services must be located in repositories from which they will be selected to compose the final software architecture. Modern component-based software engineering and model-driven engineering techniques are being used in this field to design the repositories and the other elements (such as component specifications), and to implement the processes which manage them at runtime. In this article, we present an approach for the runtime generation of Platform Specific Models (PSMs) from abstract definitions contained in their corresponding Platform Independent Models (PIMs). The process of generating the PSM models is inspired by the selection processes of Commercial Off-The-Shelf (COTS) components, but incorporating a heuristic for ranking the architectural configurations. This trading process has been applied in the domain of component-based graphical user interfaces that need to be reconfigured at runtime.

Javier Criado, Luis Iribarne, Nicolás Padilla
Software Architecture in Distributed Software Development: A Review

This paper presents a literature review of distributed software development (DSD), also known as global software development (GSD), and software architecture. The main focus is to highlight current research, observations and practice directions in these areas. The results have been limited to peer-reviewed conference papers and journal articles. The analysis shows that substantial work has been done on software architecture and on global software development separately, while empirical studies at the interface of distributed/global software development and software architecture have so far received very little attention among researchers. This indicates the need for future research in these areas.

Alok Mishra, Deepti Mishra

Quality Management in Distributed Information System

Run-Time Root Cause Analysis in Adaptive Distributed Systems

In a distributed environment, several components collaborate to deliver complex functionality. Adaptation is an emerging trend in which a distributed system reconfigures itself through component addition, removal or update in order to cope with faults. Components are generally interdependent, so a fault propagates from one component to another. Existing root cause analysis techniques generally create a static fault dependency graph to identify the root fault. However, these dependencies keep changing with adaptations, which makes design-time fault dependencies invalid at run time. This paper describes the problem of deriving causal relationships of faults in adaptive distributed systems. It then presents a statechart-based solution that statically identifies the sequence of method executions to derive the causal relationships of faults at run time. The approach was evaluated and found to be highly scalable and time-efficient, and it can be used to reduce the Mean Time To Recover (MTTR) of a distributed system.
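The core idea of tracing a fault back through component dependencies can be sketched in a few lines. The component names and dependency edges below are invented, and the sketch uses a fixed dependency graph rather than the run-time statechart derivation the paper proposes:

```python
# Faults propagate along dependency edges: if A depends on B, a fault in B
# can surface in A. Given the currently faulty components, a root cause is
# a faulty component that no faulty dependency of its own can explain.
depends_on = {
    "web_ui": ["api"],
    "api": ["database", "cache"],
    "cache": [],
    "database": ["storage"],
    "storage": [],
}

def transitive_deps(comp, depends_on, seen=None):
    """Collect every component reachable from comp via dependency edges."""
    seen = set() if seen is None else seen
    for dep in depends_on.get(comp, []):
        if dep not in seen:
            seen.add(dep)
            transitive_deps(dep, depends_on, seen)
    return seen

def root_causes(depends_on, faulty):
    """Keep only faulty components none of whose transitive dependencies
    is itself faulty: those are the roots of the fault propagation."""
    return {
        comp for comp in faulty
        if not any(dep in faulty for dep in transitive_deps(comp, depends_on))
    }

faulty = {"web_ui", "api", "database", "storage"}
print(root_causes(depends_on, faulty))  # {'storage'}
```

In an adaptive system this graph would be recomputed after every reconfiguration, which is precisely why the paper rejects a design-time graph.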

Amit Raj, Stephen Barrett, Siobhan Clarke
Global Software Development and Quality Management: A Systematic Review

This paper presents a systematic literature review of global software development (GSD) and quality management aspects. The main focus is to highlight current research and practice directions in these areas. The results have been limited to peer-reviewed conference papers and journal articles published between 2000 and 2011. The analysis shows that most studies have addressed quality and process management, while verification and validation issues in GSD have received only limited attention among researchers. This indicates the need for future research (quantitative and qualitative) in these areas.

Deepti Mishra, Alok Mishra, Ricardo Colomo-Palacios, Cristina Casado-Lumbreras

Distributed Information Systems Applications

Building a Concept Solution for Upgrade Planning in the Automation Industry

Industrial automation systems are long-living systems controlling industrial processes such as power stations or pulp and paper production. These systems have strict availability requirements, since any downtime is costly for factories. In such circumstances, all upgrades require special care and planning, in a context of collaboration between the automation system's provider and its user, to minimize downtime in the user's critical processes. This paper discusses the problem of upgrade planning for such automation systems and presents a concept solution based on a case study. The work is part of broader research aiming at a better understanding of system upgrades in the case-study company's service sales. The aim is also to enhance solutions for identifying and analysing upgrades in collaboration between the case-study company's internal teams and its customers.

Jukka Kääriäinen, Susanna Teppola, Antti Välimäki
Inconsistency-Tolerant Business Rules in Distributed Information Systems

Business rules enhance the integrity of information systems. However, their maintenance does not scale up easily to distributed systems with concurrent transactions. To a large extent, that is due to two problematic exigencies: the postulates of total and isolated business rule satisfaction. For overcoming these problems, we outline a measure-based inconsistency-tolerant approach to business rules maintenance.
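A toy sketch of measure-based inconsistency tolerance follows. The rule, data and measure are invented for illustration: rather than demanding total rule satisfaction, an update is accepted as long as it does not increase the amount of violation already present.

```python
# Illustrative business rule: every order must reference a known customer.
def violations(orders, customers):
    """Inconsistency measure: the number of orders violating the rule."""
    return sum(1 for o in orders if o["customer"] not in customers)

def tolerant_update(orders, customers, new_order):
    """Accept the update only if it does not increase the measure,
    so pre-existing ('legacy') violations are tolerated but not added to."""
    before = violations(orders, customers)
    after = violations(orders + [new_order], customers)
    if after <= before:
        orders.append(new_order)
        return True
    return False

customers = {"c1", "c2"}
orders = [{"id": 1, "customer": "c1"},
          {"id": 2, "customer": "ghost"}]  # one legacy violation, tolerated
ok = tolerant_update(orders, customers, {"id": 3, "customer": "c2"})      # accepted
bad = tolerant_update(orders, customers, {"id": 4, "customer": "nobody"}) # rejected
print(ok, bad, len(orders))  # True False 3
```

Checking the measure delta per transaction, instead of demanding a globally consistent state, is what lets such a scheme operate over concurrent, distributed updates.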

Hendrik Decker, Francesc D. Muñoz-Escoí
Agent-Oriented Software Engineering of Distributed eHealth Systems

The development of distributed ehealth systems that work across organisations is increasingly becoming a common necessity for providing efficient services. This requires healthcare information to be accessible, under appropriate safeguards, for research or healthcare. Progress, however, relies on the interoperability of local healthcare software and is often hampered by ad hoc development methods that lead to closed systems with a multitude of protocols, terminologies and design approaches. The ehealth domain inherently involves autonomous organisations and individuals, e.g. patients and doctors, which makes agent-oriented software engineering (AOSE) a good approach for developing systems that are potentially more open yet retain local control and autonomy. The paper presents the use of AOSE to develop a particular distributed ehealth system, IDEA, and evaluates its suitability for developing such systems in general.

Adel Taweel, Emilia Garcia, Simon Miles, Michael Luck

Workshop on Methods, Evaluation, Tools and Applications for the Creation and Consumption of Structured Data for the e-Society (META4eS) 2013

Meta4eS 2013 PC Co-Chairs Message

The future eSociety - renamed “OnTheMoveSociety” in the context of Meta4eS - is an e-inclusive society based on the extensive use of digital technologies at all levels of interaction between its members. It is a society that evolves based on knowledge and that empowers individuals by creating virtual communities that benefit from social inclusion, access to information, enhanced interaction, participation and freedom of expression, among others.

Ioana Ciuciu, Anna Fensel

Business Process Models and Semantics for the e-Society

Towards the Integration of Ontologies with Service Choreographies

This paper discusses the integration of ontologies with service choreographies in view of recommending interest points to the modeler for model improvement. The concept is based on an ontology of recommendations (evaluated by metrics) attached to the elements of the model. The ontology and an associated knowledge base are used in order to extract correct recommendations (specified as textual annotations attached to the model) and present them to the modeler. Recommendations may result in model improvements. The recommendations rely on similarity measures between the captured modeler design intention and the knowledge stored in the ontology and knowledge bases.

Mario Cortes-Cornax, Ioana Ciuciu, Sophie Dupuy-Chessa, Dominique Rieu, Agnès Front
Compliance Check in Semantic Business Process Management

With a steady increase in the requirements placed on business processes, support for compliance checking is a field receiving growing attention in information systems research and practice. Compliance checking is vital for organizations to identify gaps, inconsistencies and incompleteness in processes, and it is sometimes mandatory because of legal and audit requirements. The paper gives an overview of our research and development activities in the field of compliance checking with the help of semantic business process management (SBPM). We propose a compliance-checking approach and solution, illustrated with a use case from the higher education domain.

András Gábor, Andrea Kő, Ildikó Szabó, Katalin Ternai, Krisztián Varga
Dynamic, Behavior-Based User Profiling Using Semantic Web Technologies in a Big Data Context

The success of shaping the e-society depends crucially on how well technology adapts to the needs of each individual user. A thorough understanding of one's personality, interests and social connections facilitates the integration of ICT solutions into one's everyday life. The MindMinings project aims to build an advanced user profile based on the automatic processing of a user's navigation traces on the Web. Given the various needs underpinning our goal (e.g. integration of heterogeneous sources and automatic content extraction), we have selected Semantic Web technologies for their capacity to deliver machine-processable information. Indeed, we have to deal with web-based information known to be highly heterogeneous. Using descriptive languages such as OWL to manage the information contained in Web documents allows automatic analysis, processing and exploitation of the related knowledge. Moreover, we use semantic technology in addition to machine learning techniques in order to build a very expressive user profile model, comprising not only isolated “drops” of information but inter-connected and machine-interpretable information. All developed methods are applied to a concrete industrial need: the analysis of user navigation on the Web to deduce patterns for content recommendation.

Anett Hoppe, Ana Roxin, Christophe Nicolle
A Virtual Organization Modeling Approach for Home Care Services

We follow a Virtual Organization (VO) approach for Requirements Engineering (RE) to define and describe the collaborative models needed in a French home care scenario. We use the Intentional level of abstraction for building the models to define the alliance, collaboration and objectives. In this level we identify the intra, inter and extra relationships between the organizations. The approach is illustrated in the context of a French regional project looking for innovative ideas for the care of fragile people within legal, medical, technical and volunteering concerns. Our goal is to facilitate iterative modeling taking into account all organizations’ points of view and manage complexity with a top-down refinement.

Luz-Maria Priego-Roche, Christine Verdier, Agnès Front, Dominique Rieu

Structured Data Consumption for the e-Society

Querying Brussels Spatiotemporal Linked Open Data

The “Open Semantic Cloud for Brussels” (OSCB) project aims at building a platform for linked open data for the Brussels region in Belgium, such that participants can easily publish their data, which can in turn be queried by end users using a web browser to access a SPARQL endpoint. If data are spatial and we want to show them on a map, we need to support this endpoint with an engine that can manage spatial data. For this we chose Strabon, an open source geospatial database management system that stores linked geospatial data expressed in the stRDF format (spatiotemporal RDF) and queries them using stSPARQL (spatiotemporal SPARQL), an extension to SPARQL 1.1. In this paper we show how the SPARQL endpoint is built and the kinds of queries it supports, also providing a wide variety of examples.

Kevin Chentout, Alejandro Vaisman
Towards Standardized Integration of Images in the Cloud of Linked Data

Currently, there are several ways of describing and referring to images in RDF. This ambiguity results in a proliferation of vocabularies for image descriptions, complicating the cross-community data integration of information related to digital images. In addition, there are no standardized guidelines on how to integrate RDF data into the image metadata. Therefore, the JPEG standardization committee has recently initiated some activities to streamline the integration of images in the cloud of Linked Data. One effort is the standardization of an ontology for describing digital images. JPEG aims at providing a technology that enables the uniform description of photos and videos with technical, administrative and semantic metadata compliant with the RDF specification and the principles of Linked Data. Secondly, specifications to integrate RDF data into JPEG images are elaborated. Finally, since descriptions often only apply to a certain part of an image, a last effort is to formalize the specification of regions of interest. In this paper, members of the JPEG committee provide a detailed overview of these ongoing activities.

Ruben Tous, Jaime Delgado, Frederik Temmermans, Bart Jansen, Peter Schelkens
Adding Spatial Support to R2RML Mappings

The “Open Semantic Cloud for Brussels” (OSCB) project aims at building a platform for linked open data for the Brussels region in Belgium, such that participants can easily publish their data. In OSCB, data providers deliver their data in the form of relational tables or XML documents. These data are mapped to RDF triples using the R2RML mapping language. Since OSCB data are spatiotemporal in nature, we needed to adapt R2RML to produce spatiotemporal linked open data, in order to build a spatially enabled SPARQL endpoint where the results of spatiotemporal SPARQL queries can be shown on a map. In this paper we show how we achieved this goal.

Kevin Chentout, Alejandro Vaisman
FRAGOLA: Fabulous RAnking of GastrOnomy LocAtions

Nowadays, large open datasets are frequently accessed to select, for example, restaurants that best meet gastronomy criteria and are closest to the user's current geo-spatial location. We have developed a skyline-based ranking approach named FOPA, which is able to efficiently rank resources that fulfil this type of multi-objective query. As a proof of concept, we developed FRAGOLA (Fabulous RAnking of GastrOnomy LocAtions), a tool that implements FOPA and ranks gastronomy locations based on multi-objective criteria. We will demonstrate FRAGOLA, and attendees will observe scenarios where FOPA outperforms existing skyline-based approaches by up to two orders of magnitude.
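The skyline (Pareto) filtering underlying approaches of this kind can be sketched as follows. The restaurant data and criteria are invented, and this naive quadratic version only illustrates the idea; it is not FOPA's optimized algorithm:

```python
# Each location is scored on criteria to be minimized: (distance_km, price_level).
# A location is in the skyline if no other location dominates it, i.e. is at
# least as good on every criterion and strictly better on at least one.
locations = {
    "trattoria": (0.5, 3),
    "bistro":    (1.0, 2),
    "diner":     (2.0, 1),
    "tourist_trap": (0.6, 3),  # dominated by trattoria: farther, same price
}

def dominates(a, b):
    """True if score vector a dominates b (all <=, at least one <)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(locations):
    """Return the names of all non-dominated locations."""
    return {
        name for name, score in locations.items()
        if not any(dominates(other, score)
                   for o, other in locations.items() if o != name)
    }

print(sorted(skyline(locations)))  # ['bistro', 'diner', 'trattoria']
```

Every skyline member represents a different trade-off between the objectives, which is why a ranking step (as in FOPA) is still needed afterwards to order the survivors.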

Ana Alvarado, Oriana Baldizán, Marlene Goncalves, María-Esther Vidal

Ontology Evolution and Evaluation

User Satisfaction of a Hybrid Ontology-Engineering Tool

In an effort to continuously improve a research prototype for collaborative ontology engineering, we report on the reapplication of a usability test within an ontology-engineering experiment involving 36 users. The tool offers additional functionality, and measures were taken to address the problems identified in a previous study. The evaluation criteria proposed in the study were developed by taking into account the people involved, the processes and their outcomes, focusing on the user experience in an approach that goes beyond usability: users were asked whether the tool helped them achieve their goals. We identify the problems users encountered while using the system and also investigate whether the measures tackled the problems observed in the first study. A set of recommendations is proposed to overcome these new problems and to improve the user experience with the system.

Christophe Debruyne, Ioana Ciuciu
Towards a Hierarchy in Domain Ontologies

This paper defines a language for modeling ontologies in business applications and application services. The language consists of the modeling constructs of Natural Language Modeling (NLM). This article shows how applying this modeling language enables us to model a hierarchy for a domain ontology.

Peter Bollen
Semantic Oriented Data Structuration for MABS Application to BIM
Short Paper

This paper presents a multiagent-based simulation approach to qualify the usage of buildings from the design phase. Our approach combines an ontology with an evolution process based on machine learning algorithms. The ontology relies on semantic data structures for the representation of environment components, agent knowledge, and all data generated during the simulation.

Thomas Durif, Christophe Nicolle, Nicolas Gaud, Stéphane Galland
Towards Bottom Up Semantic Services Definition

This paper explores a bottom-up approach to support service interoperability between distinct stakeholders. Bottom-up approaches are useful in ecosystems where reaching semantic agreement is difficult. Key here is the flexibility for distinct stakeholders to diverge and converge, by means of agreements in their service definitions, in an emergent, dynamic process that may eventually lead to a stable service network.

Cristian Vasquez
A Structural $\mathcal{SHOIN(D)}$ Ontology Model for Change Modelling

This paper presents a complete structural ontology model suited for change modelling on $\mathcal{SHOIN(D)}$ ontologies. The application of this model is illustrated throughout the paper through the description of an ontology example inspired by the UOBM ontology benchmark and its evolution.

Perrine Pittet, Christophe Cruz, Christophe Nicolle

Workshop on Fact-Oriented Modeling (ORM) 2013

ORM 2013 PC Co-Chairs Message

Following successful workshops in Cyprus (2005), France (2006), Portugal (2007), Mexico (2008), Portugal (2009), Greece (2010 and 2011), and Rome (2012), this is the ninth fact-oriented modeling workshop run in conjunction with the OTM conferences. Fact-oriented modeling is a conceptual approach for modeling and querying the semantics of business domains in terms of the underlying facts of interest, where all facts and rules may be verbalized in language readily understandable by users in those domains.

Terry Halpin

ORM Formal Foundations and Transforms

Towards a Core ORM2 Language (Research Note)

The introduction of a provably correct encoding of a fragment of ORM2 (called ORM2zero) into a decidable fragment of OWL2 opened the doors for the definition of dedicated reasoning technologies supporting the quality of schema design. In this paper we discuss how to extend ORM2zero in a maximal way while retaining the nice computational properties of ORM2zero.

Enrico Franconi, Alessandro Mosca
Reference Scheme Reduction on Subtypes in ORM

Object-Role Modeling (ORM) allows composite reference schemes for object types to be portrayed using either objectification (in the sense of situational nominalization) or coreference (as defined in ORM rather than linguistics). In practical modeling, cases can arise where a subtype of a compositely identified object type has a natural reference scheme that utilizes only some components of the supertype’s reference scheme. Using the supertype’s reference scheme to verbalize facts for the subtype then leads to redundancy or other irrelevance in the verbalization. Moreover, if such cases are input directly to the ORM’s standard relational mapping procedure (Rmap), this can lead to table schemes that are not fully normalized. The paper identifies ways in which such problems can arise, and proposes ways to avoid these problems, partly by extending earlier work on reference scheme reduction, role redirection, and disjunctive reference, illustrating the approach with some practical examples.

Andy Carver, Terry Halpin

Extensions to ORM

Recent Enhancements to ORM

Fact-oriented modeling approaches such as Object-Role Modeling (ORM) employ rich graphical notations for capturing business constraints, and validate their data models with domain experts by verbalizing the models in natural language and by populating the relevant fact types with concrete examples. This paper discusses several recent enhancements to ORM, including the following: further constraints on supertype link roles and their relevance to restricted mandatory role constraints; inclusive-or constraints on roles hosted by different types; refinements to the concept of independent object types; additional kinds of reference schemes and associated uniqueness constraints; and verbalization of further constraint cases involving subtyping, additional reference scheme patterns, uniqueness and frequency constraints involving unaries, and external uniqueness constraints involving n-ary fact types. The paper also includes some discussion of how these enhancements have been supported, or are soon to be supported, in the Natural ORM Architect (NORMA) tool.

Terry Halpin, Matthew Curland
Including Spatial/Temporal Objects in ORM

We suggest including spatial entities and corresponding spatial values as first-class citizens in fact-oriented models using ORM. A spatial entity is simply a piece of some space. A spatial value is seen as a possibly infinite set of positions in some coordinate system. This makes the conceptual model very independent of the implementation platform. On this basis, we suggest a notation for spatial values. Further, we investigate some rather useful spatial constraints: the no-overlap constraint and the no-overlap exclusion constraint, which turn out to be specializations of the uniqueness and exclusion constraints. Finally, we discuss some possible implementation platforms.

Gerhard Skagestein, Håvard Tveite

Demonstrations of Fact-Oriented Modeling Tools

NORMA Multi-page Relational View

The NORMA (Natural Object-Role Modeling Architect) tool has long supported a single-page view of the generated relational model. This view automatically shows one shape per table. Unfortunately, as the number of tables and foreign keys grows, this single-page approach becomes impractical. Therefore, for advanced users wishing to display readable relational models, it is necessary to move beyond this single-page approach. The NORMA multi-page relational view was created to address these concerns by allowing multiple relational diagrams in the same model. However, the multi-page environment has inherent issues that do not occur with a single-page approach. This paper discusses how we handle shape creation for tables and foreign keys with no corresponding display shape, how we display foreign keys when the referenced table is not on the diagram, and how column associations are displayed for foreign keys. We also discuss the display options used to balance explicit display of column information against the complexity of the connecting lines. The result is a flexible system for rapid population of targeted relational diagrams that scales seamlessly for large models.

Matthew Curland
Metis: The SDRule-L Modelling Tool

Semantic Decision Rule Language (SDRule-L), an extension to the Object-Role Modelling language (ORM), is designed for modelling semantic decision support rules. An SDRule-L model may contain static rules (e.g., data constraints) and dynamic rules (e.g., sequences of events). In this paper, we illustrate its supporting tool, called Metis, with which we can graphically design SDRule-L models, verbalize them and reason over them. Models can be stored and published in its markup language, SDRule-ML, which can be partly mapped into OWL2. Metis's embedded reasoning engine is used to check consistency.

Yan Tang Demey, Christophe Debruyne

Combining Data, Process and Service Modeling

Mapping BPMN Process Models to Data Models in ORM

Business processes define workflow dependencies inside an industry and/or organization. Business processes drive machines and people, and use business data to function properly. By systematically integrating data and processes, we can understand and assess complete business processes from beginning to end. Practice, however, often reveals that there is no systematic link between a business process and its associated business data. The aim of this paper is to tackle some of the problems encountered in deriving business data from process models. We show how to systematically map basic business process models expressed in Business Process Modeling Notation (BPMN) to data models specified in ORM. From the resulting ORM model, we can generate a complete (corporate) relational database containing the business data that is tailor-made to support the business process.

Herman Balsters
Elementary Transactions

Designing the data perspective of an information system has benefited greatly from modeling at the conceptual level. From such a model, logical data structures (ERM, Relational, DWH) can be generated automatically. In this paper, we describe how to generate the elementary building blocks of the process perspective from the conceptual data level. Our goal is to derive a complete set of elementary transactions from the elementary fact types and constraints at the conceptual level. Definitions of the required concepts and rules for their basic behavior are given.

Jan Pieter Zwart, Stijn Hoppenbrouwers
Comparative Analysis of SOA and Cloud Computing Architectures Using Fact Based Modeling

With the ever-changing, dynamic Information and Communications Technology environment and the new shared deployment options for computing, a paradigm shift is occurring that enables ubiquitous and convenient computing on a pay-as-you-go basis. Access on demand is becoming available to networks of scalable, elastic, self-serviceable, configurable physical and virtual resources. On a more narrowly focused IT and business front, there is a parallel shift towards designing information systems in terms of the services available at an interface. The Service Oriented Architecture (SOA) development style is based on the design of services and processes and the realization of interoperability and location transparency in context-specific implementations. This paper analyzes the Cloud Computing and SOA Reference Architectures being developed by ISO/IEC JTC1 SC38 (in collaboration with ITU-T SG13/WP6 for Cloud Computing), and offers a concept comparison using the Fact Based Modeling (FBM) methodology. FBM has allowed us to distill the concepts, relationships and business rules, thereby exposing the strengths and weaknesses of each, and identifying the gaps between the two.

Baba Piprani, Don Sheppard, Abbie Barbir

Conceptual Query and Verbalization Languages

A Simple Metamodel for Fact-Based Queries

Fact-based models are built by expressing elementary facts using natural verbalizations. By generalizing individual objects to object types, facts to fact types, and adding constraints, a schema for any domain can be constructed.

Fact-based schemas have many advantages, including being highly amenable to construction of natural verbalizations, since they were originally derived from such verbalizations.

However, it is not common practice to consider queries during modeling, so only limited attention has been paid to how to model them. Queries are not only useful for extracting data, but also to express complex business constraints. An effective meta-model for fact-based queries is presented, as an extension of a tiny subset of the meta-model of Object Role Modeling. Examples expressed in the Constellation Query Language show how to populate the query meta-model.

Clifford Heath
Verbalizing ORM Models in Malay and Mandarin

The rich graphical notations provided by fact-oriented modeling approaches such as Object-Role Modeling (ORM) for capturing business constraints assist modelers to visualize fine details of their data models. However, the data models themselves are best validated with domain experts by verbalizing the models in a controlled natural language, and by populating the relevant fact types with concrete examples. While a number of fact-based modeling tools provide extensive verbalization support in English, comparatively little work exists to provide fact-based model verbalization support for other languages, especially Asian languages. This paper describes our initial work on verbalizing ORM models in Bahasa Malaysia (Malay) and Mandarin. We discuss aspects of these languages that are not found in English, which require special treatment in order to render natural verbalization (e.g. noun classifiers, and the order in which sentence elements are placed), and describe our current implementation efforts, which involved creating both a prototype and an extension to the NORMA (Natural ORM Architect) tool.

Shin Huei Lim, Terry Halpin

Workshop on Semantics and Decision Making (SeDeS) 2013

SeDeS 2013 PC Co-Chairs Message

Decision support has gradually evolved, both in theoretical decision support studies and in practical assisting tools for decision makers, since the 1960s. Ontology Engineering (OE) brings new synergy to decision support. On the one hand, it will change (and is actually already changing) the decision support landscape, as it enables new breeds of decision models and decision support systems (DSS) to be developed. On the other hand, DSS can bring theories and applications that support OE, such as ontology integration and ontology matching.

Avi Wasser, Jan Vanthienen
Minimizing Risks of Decision Making by Inconsistency-Tolerant Integrity Management

In practice, knowledge-based reasoning for decision making must be inconsistency-tolerant since, for decision making, contradictory data are unavoidable. We present a measure-based concept of inconsistency-tolerant knowledge engineering for decision support. It enables the preservation of consistency across updates, as well as the computation of sound answers to queries in knowledge bases with violated integrity. Hence, our framework supports the consistency of decision making in the presence of contradictory data. By an extended example, we show how inconsistency-tolerant integrity maintenance can minimize risks in decision making that result from inconsistent knowledge.

Hendrik Decker
Modelling Ontology-Based Decision Rules for Computer-Aided Design

A challenge in computer-aided design is how to parameterize shape functions in order to model a desired shape. Such design phases often include both systematic design and user-oriented design. In this paper, we focus on the latter case by bringing ontology-based decision rules to a computer-aided design system. Domain-specific constraints and operative rules are modelled in an artefact called a semantic decision table.

Ioana Ciuciu, Yan Tang Demey
Decision Support for Operational and Financial Risk Management - The ProcessGene Case Study

This work suggests a generic framework for risk related decision making from a business process management viewpoint. The framework is based on the methodology embedded in the ProcessGene Risk Management software suite. The suggested method aims to assist risk managers in making risk related decisions along the entire lifecycles of risk management, governance and compliance. This decision making is based on knowledge that is encapsulated within existing business process repositories. The method is demonstrated using a real-life process repository from a manufacturing industry.

Maya Lincoln, Avi Wasser
Selection of Business Process Alternatives Based on Operational Similarity for Multi-subsidiary Organizations

This work suggests a method for machine-assisted support for multi-subsidiary organizations in selecting business process alternatives, based on operational similarity. Operational similarity between processes can be derived from process repositories using a linguistic analysis of process descriptors. The suggested method aims to assist operation managers in multi-subsidiary organizations in identifying similar processes that can substitute for processes that cannot be carried out within a certain subsidiary. This decision making is based on knowledge that is encapsulated within existing business process repositories. The method is demonstrated using a real-life process repository from the paper manufacturing industry.

Avi Wasser, Maya Lincoln

Workshop on Social Media Semantics (SMS) 2013

SMS 2013 PC Co-Chairs Message

The SMS 2013 workshop on Social Media Semantics was held in the context of the OTM ("OnTheMove") federated conferences, covering different aspects of distributed information systems, in September 2013 in Graz.

The topic of the workshop is semantics in Social Media. The Social Web has become the first and main medium for getting and spreading information. Every day, news is reported instantly, and social media has become a major source for broadcasters, news reporters and political analysts, as well as a place of interaction for everyday people. For a full utilization of this medium, information must be gathered, analyzed and semantically understood. In this workshop we ask the question: how can Semantic Web technologies be used to provide the means for interested people to draw conclusions, assess situations and preserve their findings for future use?

Dimitris Spiliotopoulos, Thomas Risse, Nina Tahmasebi
On the Identification and Annotation of Emotional Properties of Verbs

Adequate and reliable lexical resources are essential for effective sentiment analysis and opinion mining. This paper proposes a methodology for the emotional assessment and annotation of words. The process is based on the Self Assessment Manikin test, and is coupled with two psychometric measurements for identifying possible bias due to the annotator’s psychological condition and personality: the EPQ scale and the SCL-90-R scale. A web based tool was developed to support the process. The methodology was validated through a pilot study in which 10 participants were asked to assess the emotional state elicited by each of 75 verbs that were used as stimuli. Results are compared with SentiWordNet’s emotional scoring on respective verbs, and primarily show logical continuity and consistency.

Nikolaos Papatheodorou, Pepi Stavropoulou, Dimitrios Tsonos, Georgios Kouroupetroglou, Dimitris Spiliotopoulos, Charalambos Papageorgiou
Assessing the Coverage of Data Collection Campaigns on Twitter: A Case Study

Online social networks provide a unique opportunity to access and analyze the reactions of people as real-world events unfold. The quality of any analysis task, however, depends on the appropriateness and quality of the collected data. Hence, given the spontaneous nature of user-generated content, as well as the high speed and large volume of data, it is important to carefully define a data-collection campaign about a topic or an event, in order to maximize its coverage (recall). Motivated by the development of a social-network data management platform, in this work we evaluate the coverage of data collection campaigns on Twitter. Using an adaptive language model, we estimate the coverage of a campaign with respect to the total number of relevant tweets. Our findings support the development of adaptive methods to account for unexpected real-world developments, and hence, to increase the recall of the data collection processes.

Vassilis Plachouras, Yannis Stavrakas, Athanasios Andreou
Structuring the Blogosphere on News from Traditional Media

News and social media are emerging as a dominant source of information for numerous applications. However, their vast unstructured content presents challenges to the efficient extraction of such information. In this paper, we present the SYNC3 system, which aims to intelligently structure content from both traditional news media and the blogosphere. To achieve this goal, SYNC3 incorporates innovative algorithms that first model news media content statistically, based on fine clustering of articles into so-called "news events". Such models are then adapted and applied to the blogosphere domain, allowing its content to be mapped to the traditional news domain. In this paper an unsupervised approach to domain adaptation is presented, which exploits external knowledge sources in order to port a classification model into a new thematic domain. Our approach extracts a new feature set from documents of the target domain, and tries to align the new features to the original ones by exploiting text relatedness from external knowledge sources, such as WordNet. The approach has been evaluated on the task of document classification, involving the classification of newsgroup postings into 20 news groups.

Georgios Petasis
Specifying a Semantic Wiki Ontology through a Collaborative Reconceptualisation Process

This paper describes an action-research approach to the specification of an ontology to be applied in the information organisation of a community of forest planning experts. Like many others, a community of forest planning experts does not see its technical domain in unison; rather, it voices several points of view that need to be shared and understood. This research started by addressing the practical problem of achieving an effective information structure and organisation for a semantic wiki platform. This was supported by a method and platform for the collaborative specification of ontologies: conceptME. Simultaneously, an empirical study was carried out aiming at a better understanding of how a technical community pragmatically develops conceptual representations of a domain. The results of this research show the benefits of collaboration in the development of conceptual models for knowledge organisation and information retrieval.

António Lucas Soares, Cristovão Sousa, Carla Pereira

Workshop on SOcial and MObile COmputing for Collaborative Environments (SOMOCO) 2013

SOMOCO 2013 PC Co-Chairs Message

Social Computing considers the relationships between the evolution of Information and Communication Technologies (ICTs) and the pervasiveness of new devices and embedded sensors, which enable wide access by people and effective use of new services.

Fernando Ferri, Patrizia Grifoni, Arianna D’Ulizia, Maria Chiara Caschera, Irina Kondratova
Analyzing Linked Data Quality with LiQuate

In the last years, the number of datasets in the Linking Open Data (LOD) cloud, and the applications that rely on links between these datasets to discover patterns or potential new associations, have exploded. However, because of data source heterogeneity, published data may suffer from redundancy or inconsistencies, or may be incomplete; thus, results generated by linked-data-based applications may be imprecise or unreliable. We illustrate LiQuate (Linked Data Quality Assessment), a tool that combines Bayesian Networks and rule-based systems to analyze the quality of data and links in the LOD cloud.

Edna Ruckhaus, Oriana Baldizán, María-Esther Vidal
A Prufer Sequence Based Approach to Measure Structural Similarity of XML Documents

XML is a W3C standard for the exchange of semi-structured data. For many applications it is necessary to extract information from semi-structured data, which is a complex task. In this paper we address the problem of computing the structural similarity of XML documents, which plays a crucial role in the clustering process. Previous works on the path-based approach fail to capture the sibling relationship among nodes, and also ignore the similarity when the nodes in the paths to be matched are not in the same order but still convey the same semantics. Another weakness of this approach, in the case of a partial path match, is that the level information is not taken into account when the nodes to be compared appear at different hierarchical levels. To address these issues, we describe a method based on the Prufer sequence for measuring the structural similarity of XML documents. A benefit of the Prufer-sequence-based representation is that it preserves the ancestor-descendant and sibling relations. XML trees are encoded as Prufer sequences, which establishes a one-to-one mapping between an XML tree and its sequence. Instead of extracting all paths, only common nodes are extracted based on the Prufer sequence code. We have devised an algorithm to compute similarity by exploring all relations among the common nodes, namely parent-child, ancestor-descendant and sibling. The experimental results show that the proposed approach is effective.

Ramanathan Periakaruppan, Rethinaswamy Nadarajan

Social Networking and Social Media

Recommendations Given from Socially-Connected People

This paper presents how relationships among members of a social network can be used to explicitly specify the relevant features of a friendsourcing recommendation algorithm. One important contribution is to show how to conceptualize previous evaluations of items made by socially-connected users, and the different features involved in this kind of algorithm, into a set of criteria for similarity between users in a social network. The paper presents how these specified criteria are used by the proposed friendsourcing recommendation algorithm, and shows how the recommendation algorithm is integrated into a real recommender system to be used in a healthcare social network for the medical service of a university. Moreover, the work shows preliminary results which indicate that the information contained in social networks, processed with the proposed algorithm, is relevant for the generation of personalized recommendations.

Daniel González, Regina Motz, Libertad Tansini
A Framework to Promote and Develop a Sustainable Tourism by Using Social Media

The paper provides a framework involving different Social Media (such as social networks, wikis, podcasting, blogs and Really Simple Syndication) able to promote and develop sustainable tourism in a shared and participatory approach. In more detail, the paper starts from an analysis of the tools used in the old and the new economy, highlighting the technological tourism trend. Then the paper illustrates how Social Media can develop sustainable tourism involving both tourists and local communities. Finally, future scenarios of virtual tourism are given.

Tiziana Guzzo, Alessia D’Andrea, Fernando Ferri, Patrizia Grifoni
Measuring Betweenness Centrality in Social Internetworking Scenarios

The importance of the betweenness centrality measure in (on-line) social networks is well known, as well as its possible applications to various domains. However, the classical notion of betweenness centrality is not able to capture the centrality of nodes w.r.t. paths crossing different social networks. In other words, it is not able to detect those nodes of a multi-social-network scenario (called Social Internetworking Scenario) which play a central role in inter-social-network information flows. In this paper, we propose a new measure of betweenness centrality suitable for Social Internetworking Scenarios, also applicable to the case of different communities of the same social network. The new measure has been tested in a number of synthetic networks, highlighting the significance and effectiveness of our proposal.

Francesco Buccafurri, Gianluca Lax, Serena Nicolazzo, Antonino Nocera, Domenico Ursino

Methods, Models and Applications in Web and Mobile Computing

Efficient Solution of the Correlation Clustering Problem: An Application to Structural Balance

One challenge for social network researchers is to evaluate balance in a social network. The degree of balance in a social group can be used as a tool to study whether and how this group evolves to a possible balanced state. The solution of clustering problems defined on signed graphs can be used as a criterion to measure the degree of balance in social networks. By considering the original definition of the structural balance, the optimal solution of the Correlation Clustering (CC) Problem arises as one possible measure. In this work, we contribute to the efficient solution of the CC problem by developing sequential and parallel GRASP metaheuristics. Then, by using our GRASP algorithms, we solve the problem of measuring the structural balance of large social networks.

Lúcia Drummond, Rosa Figueiredo, Yuri Frota, Mário Levorato
Towards Assessment of Open Educational Resources and Open Courseware Based on a Socio-constructivist Quality Model

In this paper we introduce a rubric for assessing the quality of open educational resources and open courseware based on our socio-constructivist quality model (QORE), which includes 70 criteria grouped into four categories related to content, instructional design, technology, and courseware evaluation. Quality is assessed from an educational point of view, i.e. how useful such resources are for the various actors involved in educational processes, taking into account their goals, objectives, abilities, etc. QORE's focus is on the resources' potential to act as truly open educational content available online that has genuine educational value in this context. Several challenges of using this rubric for the evaluation of such educational resources are discussed as well.

Monica Vladoiu, Zoran Constantinescu
Multimodal Interaction in Gaming

Gaming environments are applications that have great potential to increase people's engagement in a participatory and collaborative way. Players interact with games under various situations, where the content, the form, and the modalities are manipulated to fit the player's behaviours. This paper provides a multimodal environment for gaming by using a grammar-based approach to support the interaction process in the application scenario of the scope card game, instantiating the grammar with the elements and the rules of the game. Moreover, the paper focuses on the correct interpretation of the player's input during the game through the use of an HMM-based approach.

Maria Chiara Caschera, Arianna D’Ulizia, Fernando Ferri, Patrizia Grifoni

Cooperative Information Systems (CoopIS) 2013 Posters

CoopIS 2013 PC Co-Chairs Message

Cooperative Information Systems (CIS) enable, support, and facilitate cooperation between people, organizations, and information systems. CIS provide enterprises and user communities with flexible, scalable and intelligent services to work together in large-scale networking environments. The CIS paradigm integrates several technologies: distributed systems technologies (such as middleware, cloud computing), coordination technologies (such as business process management) and integration technologies (such as service oriented computing, semantic web).

Johann Eder, Zohra Bellahsene, Rania Y. Khalaf
Repairing Event Logs Using Timed Process Models

Process mining aims to infer meaningful insights from process-related data and has attracted the attention of practitioners, tool vendors, and researchers in recent years. Traditionally, event logs are assumed to describe the as-is situation. But this is not necessarily the case in environments where logging may be compromised due to manual logging. For example, hospital staff may need to manually enter information regarding a patient's treatment. As a result, events or timestamps may be missing or incorrect.

In this work, we make use of process knowledge captured in process models, and provide a method to repair missing events in the logs. This way, we facilitate analysis of incomplete logs. We realize the repair by combining stochastic Petri nets, alignments, and Bayesian networks.

Andreas Rogge-Solti, Ronny S. Mans, Wil M. P. van der Aalst, Mathias Weske
AnonyFacebook - Liking Facebook Posts Anonymously

In several countries the simple act of liking (on Facebook) an anti-government article or video can be (and has already been) used to pursue and detain activists. Given such a scenario, it is of great relevance to allow anyone to anonymously "like" any post.

In this paper we present anonyFacebook, a system that allows Facebook users to "like" a post (e.g., news, photo, video) without revealing their identity (even to the social network administrators). Naturally, such anonymous "likes" count toward the total number of "likes". Anonymous "likes" are ensured by means of cryptographic techniques such as homomorphic encryption and shared threshold key pairs.

Pedro Alves, Paulo Ferreira
E-Contract Enactment Using Meta Execution Workflow

E-contract fulfillment faces many challenges because enactment is cross-organizational and involves interdependencies among its various elements, namely parties, activities, clauses, exceptions, payments and commitments. An e-contract needs inter-organizational workflow services for monitoring its enactment. In this paper, we introduce the concept of e-contract element-based workflow views in order to bridge the gap between the different aspects of an e-contract and the services provided by a meta-workflow system. The enactment of the e-contract is carried out using the meta execution workflow. It enables coordination among the workflow views and the workflow system for successful fulfillment.

Pabitra Mohapatra, Pisipati Radha Krishna, Kamalakar Karlapalem
Difference-Preserving Process Merge

Providing merging techniques for business processes fosters the management and maintenance of (large) process model repositories. Contrary to existing approaches that focus on preserving behavior of all participating process models, this paper presents a merging technique that aims at preserving the difference between the participating process models by exploiting the existence of a common parent process, e.g., a reference or standard process model.

Kristof Böhmer, Stefanie Rinderle-Ma
Towards Personalized Search for Tweets

Powerful search capabilities are fundamentally important for micro-blog-based information systems such as Twitter. While recently there has been some work aimed at enhancing the scalability of micro-blog search, very few existing techniques incorporate personalization into their search and ranking processes. This paper argues that since Twitter is a social network (SN)-based micro-blog system, it is essential to personalize search results by taking into account the social relationships among various users. In this paper, we outline a scalable and personalized tweet search framework that takes into account the search parameters, the distances of the follower relationships, and the temporal aspects of the tweets when ranking the search results.

Akshay Choche, Lakshmish Ramaswamy
Linked Data Based Expert Search and Collaboration for Mashup

Web mashup is becoming more and more popular for both general and enterprise purposes as a way to implement applications that leverage third-party components. However, developing a mashup requires specialized knowledge about Web APIs, their technologies and the way to combine them in a meaningful way. To address this problem, we describe in this paper an approach for searching for experts in the context of enterprise mashup development and, in particular, we describe how to implement typical expert search patterns. The approach is based on the integration of knowledge both internal and external to the enterprise, represented as linked data.

Devis Bianchini, Valeria De Antonellis, Michele Melchiori

Ontologies, DataBases, and Applications of Semantics (ODBASE) 2013 Posters

ODBASE 2013 PC Co-Chairs Message

We are delighted to present the proceedings of the 12th International Conference on Ontologies, DataBases, and Applications of Semantics (ODBASE), which was held in Graz (Austria) on September 10-12, 2013. The ODBASE Conference series provides a forum for researchers and practitioners on the use of ontologies and data semantics in novel applications, and continues to draw a highly diverse body of researchers and practitioners. ODBASE is part of On the Move to Meaningful Internet Systems (OnTheMove), which co-locates with two other conferences: DOA-Trusted Cloud (International Conference on Secure Virtual Infrastructures) and CoopIS (International Conference on Cooperative Information Systems).

Pieter De Leenheer, Dejing Dou
Imposing a Semantic Schema for the Detection of Potential Mistakes in Knowledge Resources

Nowadays, there is a pressing need for very accurate, up-to-date and diversity-aware knowledge resources. As their maintenance is very expensive, we argue that the only affordable way to address this is by complementing automatic with manual checks. This paper presents an approach, based on the notion of a semantic schema, which aims to minimize human intervention, as it allows the automatic identification of potentially faulty parts of a knowledge resource which need manual checks. Our evaluation showed promising results.

Vincenzo Maltese
Towards Efficient Stream Reasoning

We present a stream reasoning system, QUARKS, which has features such as knowledge packets, application-managed windows, and incremental queries. A combination of rules and continuous queries, along with application optimization, has been used to address high performance requirements. Experimental results show that our proposed methodology is effective.

Debnath Mukherjee, Snehasis Banerjee, Prateep Misra
From Theoretical Framework to Generic Semantic Measures Library

Semantic Measures (SMs) are of critical importance in multiple treatments relying on ontologies. However, the improvement and use of SMs are currently hampered by the lack of a dedicated theoretical framework and of an extensive generic software solution. To meet these needs, this paper introduces a unified theoretical framework of graph-based SMs, from which we developed the open-source Semantic Measures Library and toolkit, a solution that paves the way for the design, computation and analysis of SMs. More information is available at the dedicated website: http://www.semantic-measures-library.org.

Sébastien Harispe, Stefan Janaqi, Sylvie Ranwez, Jacky Montmain
Categorization of Modeling Language Concepts: Graded or Discrete?

We investigate the category structure of categories common to conceptual modeling languages (i.e., the types used by languages such as actor, process, goal, or restriction) to study whether they more closely approximate a discrete or graded category. We find that most categories exhibit more of a graded structure, with experienced modelers displaying this even more strongly than the other participants. We discuss the consequences of these results for (conceptual) modeling in general.

Dirk van der Linden
Backmatter
Metadata
Title
On the Move to Meaningful Internet Systems: OTM 2013 Workshops
Edited by
Yan Tang Demey
Hervé Panetto
Copyright year
2013
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-41033-8
Print ISBN
978-3-642-41032-1
DOI
https://doi.org/10.1007/978-3-642-41033-8