
About this Book

This book constitutes the thoroughly refereed proceedings of six international workshops held in Tallinn, Estonia, in conjunction with the 30th International Conference on Advanced Information Systems Engineering, CAiSE 2018, in June 2018.

These workshops were:

– The 5th Workshop on Advances in Services DEsign based on the Notion of Capability (ASDENCA)

– The 1st Workshop on Business Data Analytics: Techniques and Applications (BDA)

– The 1st Workshop on Blockchains for Inter-Organizational Collaboration (BIOC)

– The 6th Workshop on Cognitive Aspects of Information Systems Engineering (COGNISE)

– The 2nd Workshop on Enterprise Modeling

– The 1st Workshop on Flexible Advanced Information Systems (FAiSE)

Two more workshops decided to produce their own, independent proceedings.

The 22 full papers presented here were carefully reviewed and selected from a total of 49 submissions.

Table of Contents

Frontmatter

ASDENCA – Advances in Services Design Based on the Notion of Capability

Frontmatter

Validation of Capability Modeling Concepts: A Dialogical Approach

Involving potential users in the early stages of the elaboration of development methods is needed for successful method adoption in practice. This paper reports on activities to introduce and assess the Capability Driven Development (CDD) methodology with a group of industry representatives. This was performed in an interactive workshop, and the main evaluation objectives were to assess the relevance of the CDD concepts and their recognizability, as well as to identify potential use cases for CDD application. A dialogical approach was used to convey the CDD methodology to the participants and to encourage discussion. The main findings are that the participants easily recognized the modeling constructs for capability design. They found adjustments particularly useful for the purpose of identifying capability steering actions. The use cases described by the participants were later formalized as capability models.

Jānis Grabis, Janis Stirna, Lauma Jokste

Towards Improving Adaptability of Capability Driven Development Methodology in Complex Environment

The complex and unpredictable environment of today and tomorrow, and the complex adaptive systems operating in it, call for information system designs and methodologies that incorporate adaptability. Adaptability as a non-functional requirement is portrayed and investigated from a broad multidisciplinary perspective that influences how dynamic business-IT alignment can be accomplished. The Capability Driven Development (CDD) methodology has supported delivering dynamic capabilities by providing a context-aware, self-adaptive platform in the CaaS project implementations, which serve as our case study. Alongside the mechanisms and components that already enable adaptability, there is room for further evolutionary and deliberate change towards a methodology truly suited to dynamic reconfiguration of capabilities in organizations and business ecosystems that operate under complexity and uncertainty. Our analysis and evaluation of the adaptability of the CDD methodology along three dimensions (complexity of the external and internal environment, managerial profiling, and artifact-integrated components) conclude with starting points towards achieving higher adaptability of the CDD methodology in complex environments.

Renata Petrevska Nechkoska, Geert Poels, Gjorgji Manceski

Using Open Data to Support Organizational Capabilities in Dynamic Business Contexts

In essence, Open Data (OD) is information available in a machine-readable format and without restrictions on the permissions for using or distributing it. Open Data may include textual artifacts, or non-textual ones such as images, maps, and scientific formulas. The data can be published and maintained by different entities, both public and private. The data are often federated, meaning that various data sources are aggregated in data sets at a single “online” location. Despite its power to distribute free knowledge, the OD initiative faces some important challenges related to its growth. In this paper, we consider one of them, namely the business and technical concerns of OD clients that would enable them to utilize Open Data in their enterprise information systems and thus benefit from improvements to their services and products in continuous and sustainable ways. Formally, we describe these concerns by means of high-level requirements and guidelines for the development and run-time monitoring of IT-supported business capabilities, which should be able to consume Open Data and to adjust when the data are updated following a situational change. We illustrate our theoretical proposal by applying it to a service concerning regional road maintenance in Latvia.

Jelena Zdravkovic, Janis Kampars, Janis Stirna

Capability Management in the Cloud: Model and Environment

Capabilities represent key abilities of an enterprise, and they encompass the knowledge and resources needed to realize these abilities. They are developed and delivered in various modes, including in-house and as-a-service delivery. The as-a-service delivery mode is provided in the cloud environment. The cloud-based approach allows offering capabilities possessed by the service provider to a large number of potential consumers, supports quick deployment of the capability delivery environment, and enables information sharing among the users. The paper describes a cloud-based capability management model, which supports multi-tenant and private modes. The architecture and technology of the cloud-based capability development and delivery environment are elaborated. The pattern repository shared among capability users is a key component enabling information sharing. Additionally, the paper shows how the cloud-based capability development and delivery environment can be used to build cloud-native capability delivery applications.

Jānis Grabis, Jānis Kampars

BDA – Business Data Analytics: Techniques and Applications

Frontmatter

Using BPM Frameworks for Identifying Customer Feedback About Process Performance

Every organization has business processes; however, in numerous organizations execution logs of these processes are not available. Consequently, such organizations cannot exploit the potential of execution logs for analyzing the performance of their processes. As a first step towards supporting these organizations, in this paper we argue that customer feedback is a valuable source of information that can provide important insights about process performance. A key challenge to this approach, however, is that the feedback includes a significant number of comments that are not related to process performance. Utilizing the complete feedback without omitting the irrelevant comments may therefore generate misleading results. To that end, we first generated a customer feedback corpus of 3356 comments. Secondly, we used two well-established BPM frameworks, the Devil’s Quadrangle and the Business Process Redesign Implementation framework, to manually classify the comments as relevant or irrelevant to process performance. Finally, we used five supervised learning techniques to evaluate the effectiveness of the two frameworks in automatically identifying performance-relevant comments. The results show that the Devil’s Quadrangle is a more suitable framework than the Business Process Redesign Implementation framework.

Sanam Ahmad, Syed Irtaza Muzaffar, Khurram Shahzad, Kamran Malik

Increasing Trust in (Big) Data Analytics

Trust is a key concern in big data analytics (BDA). Explaining “black-box” models and demonstrating the transferability of models and their robustness to data changes with respect to quality or content can help improve confidence in BDA. To this end, we propose metrics for measuring robustness with respect to input noise. We also provide empirical evidence by showcasing how to compute and interpret these metrics using multiple datasets and classifiers. Additionally, we discuss the state of the art of various areas in machine learning, such as explaining “black-box” models and transfer learning, with respect to model validation. We show how methods from these areas can be adjusted to support classical validity measures in science such as content validity.

Johannes Schneider, Joshua Peter Handali, Jan vom Brocke
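
The robustness metrics mentioned in this abstract can be illustrated with a small sketch. The following is not the authors' actual metric or code, but one plausible, hypothetical reading in Python: perturb each input with Gaussian noise and measure how often a classifier's prediction stays unchanged.

```python
import random

def predict(x, threshold=0.5):
    # Toy classifier: label 1 if the mean feature value exceeds the threshold.
    return 1 if sum(x) / len(x) > threshold else 0

def robustness(samples, noise_sigma, trials=100, seed=42):
    """Fraction of noisy predictions that agree with the clean prediction.

    A hypothetical noise-robustness metric (not the authors' definition):
    perturb each input with Gaussian noise and count how often the
    predicted label is unchanged.
    """
    rng = random.Random(seed)
    agree = total = 0
    for x in samples:
        clean = predict(x)
        for _ in range(trials):
            noisy = [v + rng.gauss(0, noise_sigma) for v in x]
            agree += predict(noisy) == clean
            total += 1
    return agree / total

samples = [[0.1, 0.2, 0.1], [0.9, 0.8, 0.7]]
print(robustness(samples, noise_sigma=0.01))  # small noise: agreement stays near 1.0
print(robustness(samples, noise_sigma=1.0))   # large noise: agreement drops
```

A score near 1.0 for realistic noise levels supports trusting the model's predictions under input perturbation; a steep drop flags fragility.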

Building Payment Classification Models from Rules and Crowdsourced Labels: A Case Study

The ability to classify customer-to-business payments enables retail financial institutions to better understand their customers’ expenditure patterns and to customize their offerings accordingly. However, payment classification is a difficult problem because of the large and evolving set of businesses and the fact that each business may offer multiple types of products, e.g. a business may sell both food and electronics. Two major approaches to payment classification are rule-based classification and machine learning-based classification on transactions labeled by the customers themselves (a form of crowdsourcing). The rule-based approach is not scalable, as it requires rules to be maintained for every business and type of transaction. The crowdsourcing approach leads to inconsistencies and is difficult to bootstrap, since it requires a large number of customers to manually label their transactions for an extended period of time. This paper presents a case study at a financial institution in which a hybrid approach is employed. A set of rules is used to bootstrap a financial planner that allows customers to view their transactions classified with respect to 66 categories, and to add labels to unclassified transactions or to re-label transactions. The crowdsourced labels, together with the initial rule set, are then used to train a machine learning model. We evaluated our model on a real anonymised dataset provided by the financial institution, consisting of wire transfers and card payments. In particular, for the wire transfer dataset, the hybrid approach increased the coverage of the rule-based system from 76.4% to 87.4% while replicating the crowdsourced labels with a mean AUC of 0.92, despite inconsistencies between crowdsourced labels.

Artem Mateush, Rajesh Sharma, Marlon Dumas, Veronika Plotnikova, Ivan Slobozhan, Jaan Übi
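
The bootstrap-then-learn idea in this abstract can be sketched compactly. Everything below is illustrative, not the paper's system: the keyword rules, category names, and the toy word-vote "model" standing in for the actual machine learning step are all invented.

```python
from collections import Counter, defaultdict

# Hand-written rules bootstrap labels (keywords and categories are
# illustrative, not the institution's actual 66-category scheme).
RULES = {"grocer": "food", "electro": "electronics", "rent": "housing"}

def rule_label(description):
    for keyword, category in RULES.items():
        if keyword in description.lower():
            return category
    return None  # unclassified: left for the customer to label

def train_keyword_model(labeled):
    """Toy stand-in for the ML step: learn which words vote for which category."""
    votes = defaultdict(Counter)
    for description, category in labeled:
        for word in description.lower().split():
            votes[word][category] += 1
    return votes

def predict(votes, description):
    tally = Counter()
    for word in description.lower().split():
        tally.update(votes.get(word, Counter()))
    return tally.most_common(1)[0][0] if tally else None

# Bootstrap: rules label what they can; crowdsourced labels fill the rest.
transactions = ["GROCER MART 42", "ELECTROWORLD TV", "corner bakery"]
labeled = [(t, rule_label(t)) for t in transactions if rule_label(t)]
labeled += [("corner bakery", "food")]  # a crowdsourced customer label
model = train_keyword_model(labeled)
print(predict(model, "bakery downtown"))  # → food
```

The hybrid pays off exactly where the rules fall silent: the model generalizes the crowdsourced "bakery" label to transactions no rule covers, which mirrors the coverage gain reported in the paper.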

BIOC – Blockchains for Inter-Organizational Collaboration

Frontmatter

Combining Artifact-Driven Monitoring with Blockchain: Analysis and Solutions

The adoption of blockchain to enable trusted monitoring of multi-party business processes has recently been gaining a lot of attention, as the absence of a central authority increases the efficiency and effectiveness of the delivery of monitoring data. At the same time, artifact-driven monitoring has been proposed to create a flexible monitoring platform for multi-party business processes involving an exchange of goods (e.g., in the logistics domain), where the information delivery does not require a central authority but lacks a sufficient level of trust. The goal of this paper is to analyze the dependencies between these two areas of interest and to propose two possible monitoring platforms that exploit blockchain to achieve a trusted artifact-driven monitoring solution.

Giovanni Meroni, Pierluigi Plebani

Ensuring Resource Trust and Integrity in Web Browsers Using Blockchain Technology

Current web technology allows the use of cryptographic primitives as part of server-provided JavaScript. This may result in security problems with web-based services. We provide an example of an attack on the WhisperKey service. We present a solution based on human code review and on CVE (Common Vulnerabilities and Exposures) databases. In our approach, existing code audits and known vulnerabilities are tied to the JavaScript file by a tamper-proof blockchain approach and are signaled to the user by a browser extension. The contribution explains our concept and its workflow; it may be extended to all situations with modular, mobile code. Finally, we propose an amendment to the W3C subresource recommendation.

Clemens H. Cap, Benjamin Leiding
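
The core mechanism — tying an audit verdict to the exact bytes of a script — rests on content hashing. A minimal sketch, assuming a plain dictionary as a stand-in for the tamper-proof on-chain ledger the paper envisions (the verdict strings and function names are invented):

```python
import hashlib

# Hypothetical on-chain record: file hash -> audit verdict. In the paper's
# scheme this mapping would live on a tamper-proof blockchain; a dict
# stands in for it here.
AUDIT_LEDGER = {}

def register_audit(script_bytes, verdict):
    digest = hashlib.sha256(script_bytes).hexdigest()
    AUDIT_LEDGER[digest] = verdict

def check_script(script_bytes):
    """What a browser extension could do before executing server-sent JS."""
    digest = hashlib.sha256(script_bytes).hexdigest()
    return AUDIT_LEDGER.get(digest, "unknown: no audit recorded for this file")

audited = b"function encrypt(msg){ /* reviewed code */ }"
register_audit(audited, "passed human review, no known CVEs")
print(check_script(audited))                        # the recorded verdict
print(check_script(b"tampered = true;" + audited))  # hash mismatch -> unknown
```

Because any change to the file changes its SHA-256 digest, a server silently swapping in modified JavaScript cannot inherit the audit verdict of the reviewed version.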

Document Management System Based on a Private Blockchain for the Support of the Judicial Embargoes Process in Colombia

In recent years, the conglomeration of financial and governmental entities responsible for the judicial embargoes process in Colombia has met with serious problems implementing an efficient system that does not incur major cost overruns. Given the large number of participants involved, development of a centralized document management system was always deemed unsuitable, so the entire process of sending and receiving embargo documents is still carried out today in physical form, by postal mail. This article presents the development of a pilot application that instead enables embargo documents to be published and distributed within a document management system that guarantees the confidentiality, availability, and reliability of all information registered in the blockchain. In developing this solution, the very nature of blockchain was found to facilitate not only the availability and distribution of the documents, but also the process of monitoring and controlling changes to them. As a result, each participant in the network always obtains an accepted version of revised documents, thus reducing costs and facilitating greater collaboration among the participating entities.

Julian Solarte-Rivera, Andrés Vidal-Zemanate, Carlos Cobos, José Alejandro Chamorro-Lopez, Tomas Velasco

Towards a Design Space for Blockchain-Based System Reengineering

We discuss our ongoing effort in designing a methodology for blockchain-based system reengineering. In particular, we focus in this paper on defining the design space, i.e., the set of options available to designers when applying blockchain to reengineer an existing system. In doing so, we use a practice-driven approach, in which this design space is constructed bottom-up from an analysis of existing blockchain use cases and hands-on experience in real-world design case studies. Two case studies are presented: using blockchain to reengineer the meat trade supply chain in Mongolia and blockchain-based management of ERP post-implementation modifications.

Marco Comuzzi, Erdenekhuu Unurjargal, Chiehyeon Lim

Towards Collaborative and Reproducible Scientific Experiments on Blockchain

Business process management research has opened numerous opportunities for synergies with blockchains in different domains. Blockchains have been identified as a means of preventing illegal runtime adaptation of decentralized choreographies that involve untrusting parties. In the eScience domain, however, there is a need to support a different type of collaboration, where adaptation is an essential part of that collaboration. Scientists demand support for trial-and-error experiment modeling in collaboration with other scientists, and at the same time they require reproducible experiments and results. The first aspect has already been addressed using adaptable scientific choreographies. To enable trust among collaborating scientists, in this position paper we identify potential approaches for combining adaptable scientific choreographies with blockchain platforms, discuss their advantages, and point out future research questions.

Dimka Karastoyanova, Ludwig Stage

COGNISE – Cognitive Aspects of Information Systems Engineering

Frontmatter

The Origin and Evolution of Syntax Errors in Simple Sequence Flow Models in BPMN

How do syntax errors emerge? What is the earliest moment that potential syntax errors can be detected? Which evolution do syntax errors go through during modeling? A provisional answer to these questions is formulated in this paper based on an investigation of a dataset containing the operational details of 126 modeling sessions. First, a list is composed of the different potential syntax errors. Second, a classification framework is built to categorize the errors according to their certainty and severity during modeling (i.e., in partial or complete models). Third, the origin and evolution of all syntax errors in the dataset are identified. This data is then used to collect a number of observations, which form a basis for future research.

Joshua De Bock, Jan Claes

Mining Developers’ Workflows from IDE Usage

An increased understanding of how developers approach the development of software and what individual challenges they face has substantial potential to better support the process of programming. In this paper, we adapt Rabbit Eclipse, an existing Eclipse plugin, to generate event logs from IDE usage, enabling process mining of developers' workflows. Moreover, we describe the results of an exploratory study in which the event logs of six developers using Eclipse together with Rabbit Eclipse were analyzed using process mining. Our results demonstrate the potential of process mining to better understand how developers approach a given programming task.

Constantina Ioannou, Andrea Burattin, Barbara Weber
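
A first step in mining workflows from such event logs is computing the directly-follows relation that many process discovery algorithms build on. A minimal sketch — the event names and traces below are invented, not data from the study:

```python
from collections import Counter

def directly_follows(event_log):
    """Count how often one IDE action directly follows another, over all traces."""
    df = Counter()
    for trace in event_log:
        for a, b in zip(trace, trace[1:]):
            df[(a, b)] += 1
    return df

# Hypothetical traces of IDE events, one list per developer session.
log = [
    ["open_file", "edit", "run_tests", "edit", "run_tests", "commit"],
    ["open_file", "edit", "run_tests", "commit"],
]
for (a, b), n in sorted(directly_follows(log).items()):
    print(f"{a} -> {b}: {n}")
```

The resulting counts form the directly-follows graph; a loop such as edit -> run_tests -> edit already hints at a test-and-fix cycle in the developer's workflow.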

Designing for Information Quality in the Era of Repurposable Crowdsourced User-Generated Content

Conventional wisdom holds that expert contributors provide higher quality user-generated content (UGC) than novices. Using the cognitive construct of selective attention, we argue that this may not be the case in some crowdsourcing UGC applications. We argue that crowdsourcing systems that seek participation mainly from contributors who are experienced or have high levels of proficiency in the crowdsourcing task will gather less diverse and therefore less repurposable data. We discuss the importance of the information diversity dimension of information quality for the use and repurposing of UGC and provide a theoretical basis for our position, with the goal of stimulating empirical research.

Shawn Ogunseye, Jeffrey Parsons

Test First, Code Later: Educating for Test Driven Development

Teaching Case

As software engineering (SE) and information systems (IS) projects become more and more collaborative in nature in practice, project-based courses have become an integral part of IS and SE curricula. One major challenge in this type of course is students' tendency to write test cases for their projects at a very late stage, often neglecting code coverage. This paper presents a teaching case of a Test-Driven Development (TDD) workshop that was conducted during an SE course intended for senior undergraduate IS students. The students were asked to write test cases according to TDD principles, and then to develop code meeting test cases received from their peers. Students' perceptions of TDD were found to be quite positive. This experience indicates that instructing SE courses according to TDD principles, where test cases are written at the beginning of the project, may have a positive effect on students' code development skills and performance in general, and on their understanding of TDD in particular. These findings are informative for both education researchers and instructors who are interested in embedding TDD in IS or SE education.

Naomi Unkelos-Shpigel, Irit Hadar

Workshop on Enterprise Modeling

Frontmatter

The “What” Facet of the Zachman Framework – A Linked Data-Driven Interpretation

The recommended interpretation of the “What” facet in the Zachman Framework is that it serves as a data-centric viewpoint on the enterprise, capturing data requirements across several layers of abstraction – from high-level business concepts down to implemented data entities. In enterprise modelling, these have traditionally been approached through well-established practices and modelling techniques – i.e., Entity-Relationship models, UML class models, and other popular types of data models. In the current context of digital transformation and agile enterprises relying on distributed information systems, certain technological specifics are lost when employing traditional methods acting on a high level of abstraction. For example, the Linked Data paradigm advocates specific data distribution, publishing, and retrieval techniques that would be useful if assimilated on a modelling level – in what could be characterised as technology-specific modelling methods (mirroring the field of domain-specific languages, but from a technological perspective). This paper proposes an agile modelling language that provides a diagrammatic and, at the same time, machine-readable integration of several of the Zachman Framework facets. In this language, the “What” facet covers concepts met in a Linked Enterprise Data environment – e.g., graph servers, graph databases, RESTful HTTP requests. These have been conceptualised in the proposed language and implemented in a way that allows the generation of a particular kind of code – process-driven orchestration of PHP-based SPARQL client requests.

Alisa Harkai, Mihai Cinpoeru, Robert Andrei Buchmann

An Application Design for Reference Enterprise Architecture Models

An increasing number of regulations forces financial institutes to implement holistic and efficient regulatory compliance management (RCM). Since most institutes primarily implement isolated solutions in a deadline-triggered manner, reference enterprise architectures (R-EA) help them save costs and increase the quality of their RCM approaches, because they reveal the implications regulation has on their business, information, and IT architecture. The application of such an R-EA to a specific institute is a context-dependent task and requires an intensive knowledge transfer between the R-EA constructor and its user. However, the majority of research activities focus on R-EA construction, while contributions regarding its application are scarce. Thus, this work presents an application design for an R-EA in the context of RCM, which systematically documents in what context the R-EA can be applied and what benefits it offers to its user. Using design science research (DSR), we contribute to research and practice by suggesting a framework for R-EA application and applying it in the RCM context.

Felix Timm

Towards an Agile and Ontology-Aided Modeling Environment for DSML Adaptation

The advent of digitalization exposes enterprises to an ongoing transformation, with the challenge of quickly capturing the relevant aspects of changes. This brings the demand to create or adapt domain-specific modeling languages (DSMLs) efficiently and in a timely manner, which, however, is a complex and time-consuming engineering task. This is not just due to the high expertise required in both knowledge engineering and the targeted domain; it is also due to the sequential approach that still characterizes the accommodation of new requirements in modeling language engineering. In this paper we present a DSML adaptation approach in which agility is fostered by merging engineering phases in a single modeling environment. This is supported by ontology concepts, which are tightly coupled with DSML constructs. Hence, a modeling environment is being developed that enables a modeling language to be adapted on the fly. An initial set of operators is presented for the rapid and efficient adaptation of both the syntax and semantics of modeling languages. The approach allows modeling languages to be quickly released for usage.

Emanuele Laurenzi, Knut Hinkelmann, Stefano Izzo, Ulrich Reimer, Alta van der Merwe

Towards a Risk-Aware Business Process Modelling Tool Using the ADOxx Platform

Business process modelling is a key element in the management of organizations. It allows building an analytical representation of the ‘as-is’ processes in an organization and comparing them with the ‘to-be’ processes in order to improve their efficiency. However, although risk is an element that can affect business processes negatively, it is still managed independently. A necessary link is missing between business process models and risk models. To better manage risk related to business processes, it should be integrated and evaluated dynamically within the business process models. Currently, there are various meta-models for business process modelling. Nevertheless, there are few meta-models for risk modelling, and even fewer that integrate concepts related to both risks and business processes. Based on this need and these observations, we propose in this work a risk-aware business process modelling tool using the ADOxx meta-modelling platform.

Rafika Thabet, Elyes Lamine, Amine Boufaied, Ouajdi Korbaa, Hervé Pingaud

FAiSE – Flexible Advanced Information Systems

Frontmatter

A Reference Framework for Advanced Flexible Information Systems

The nature of information systems is currently going through a major transition. In the past, information systems managing ‘physical processes’ and systems managing ‘administrative processes’ would usually be separated in two different ‘worlds’. Now, we see that these worlds need to be tightly coupled or even integrated to deal with developments like the transformation of supply chains to demand chains, just-in-time logistics, servitization, and mass-customization of products and services. This causes confusion, as positioning systems and the approaches underlying these systems with respect to each other is not easy. Improper positioning may in turn lead to blind spots in system functionality – resulting in the inability to properly support the entire spectrum of business functionality – or to replication of functionality, usually resulting in inconsistency of functionality and data. To address this issue, this paper presents a reference framework for advanced flexible information systems, in which existing systems and approaches can be positioned, analyzed, and compared. The framework is based on a concept of multi-layer, bimodal flexibility. We apply this framework to a small but representative set of research and development efforts for advanced flexible information systems.

Paul Grefen, Rik Eshuis, Oktay Turetken, Irene Vanderfeesten

Integrating IoT Devices into Business Processes

The Internet of Things (IoT) has arrived in everyday life, controlling and measuring everything from assembly lines, through shipping containers, to household appliances. IoT devices are thus often part of larger and more complex business processes, which might change their course based on events from these devices. However, when developing IoT applications, the process perspective is often neglected and the coordination of devices is realized in an ad-hoc way using custom scripts. In this paper we propose to employ process models to define the process layer of IoT applications and to enact them through a process engine. Our approach thus bridges the gap between physical IoT devices and business processes. The presented implementation shows that the two can be combined without in-depth programming expertise or extensive configuration, and without restricting or strongly coupling the components.

Christian Friedow, Maximilian Völker, Marcin Hewelt
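
The enactment idea — device events steering a process through its model — can be sketched with a plain transition table standing in for the process model and a loop standing in for the engine. The states, events, and logistics scenario below are invented for illustration; the paper's actual engine and notation are not shown here.

```python
# Hypothetical process model as a transition table:
# (current state, incoming IoT event) -> next state.
PROCESS = {
    ("waiting", "container_sealed"): "in_transit",
    ("in_transit", "temperature_alarm"): "inspection",
    ("in_transit", "arrived"): "delivered",
    ("inspection", "cleared"): "in_transit",
}

def enact(events, state="waiting"):
    """Minimal 'engine': advance the process as device events stream in."""
    trail = [state]
    for event in events:
        # Events the model does not expect in this state are ignored.
        state = PROCESS.get((state, event), state)
        trail.append(state)
    return trail

events = ["container_sealed", "temperature_alarm", "cleared", "arrived"]
print(enact(events))
# waiting -> in_transit -> inspection -> in_transit -> delivered
```

Keeping the model as data, separate from the engine loop, is what lets the process layer change course on device events without custom per-device scripts — the gap the paper sets out to close.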

Backmatter
