
2019 | Book

Information Systems Engineering in Responsible Information Systems

CAiSE Forum 2019, Rome, Italy, June 3–7, 2019, Proceedings


About this book

This book constitutes the thoroughly refereed proceedings of the CAiSE Forum 2019 held in Rome, Italy, as part of the 31st International Conference on Advanced Information Systems Engineering, CAiSE 2019, in June 2019.

The CAiSE Forum - one of the traditional tracks of the CAiSE conference - aims to present emerging new topics and controversial positions, as well as demonstrations of innovative systems, tools and applications related to information systems engineering. This year’s theme was “Responsible Information Systems”.

The 19 full papers and 3 short papers presented in this volume were carefully reviewed and selected from 14 direct submissions (of which 7 were accepted as full papers), plus 15 papers transferred from the CAiSE main conference (which yielded another 12 full and 3 short papers).

Table of Contents

Frontmatter
UBBA: Unity Based BPMN Animator
Abstract
In recent years, BPMN has become the most prominent notation for representing business processes, thanks to its wide use in academic and industrial contexts. Although BPMN is very intuitive, its way of representing activities with static flow charts may be effective only for BPM experts. Stakeholders who are not familiar with the BPMN notation could misread the behavior of the business process. To this end, BPMN animation tools can help model comprehension. However, they are mainly based on 2D diagrams; only a few works investigate the use of a 3D world as an environment to more closely portray the reality of the business process. In this paper, we propose our tool UBBA, which creates a custom 3D virtual world from an input .bpmn file. Besides this three-dimensional view of the diagram, we also integrate into UBBA the semantics of the BPMN elements in order to enable the animation.
Basit Mubeen Abdul, Flavio Corradini, Barbara Re, Lorenzo Rossi, Francesco Tiezzi
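Illustrative note: the tool itself is not reproduced here, but the input step UBBA starts from, reading the elements of a .bpmn file, can be sketched in a few lines. The snippet below is a minimal, hypothetical example (UBBA is built on Unity, not Python): it parses a BPMN 2.0 XML file with the standard library and lists the tasks, gateways, and sequence flows that a 3D animator would need.

```python
import xml.etree.ElementTree as ET

# BPMN 2.0 model namespace defined by the OMG standard
BPMN_NS = {"bpmn": "http://www.omg.org/spec/BPMN/20100524/MODEL"}

def read_bpmn_elements(path):
    """Extract tasks, gateways, and sequence flows from a .bpmn file."""
    root = ET.parse(path).getroot()
    process = root.find("bpmn:process", BPMN_NS)
    tasks = {t.get("id"): t.get("name", "") for t in process.findall("bpmn:task", BPMN_NS)}
    gateways = [g.get("id") for tag in ("exclusiveGateway", "parallelGateway")
                for g in process.findall(f"bpmn:{tag}", BPMN_NS)]
    flows = [(f.get("sourceRef"), f.get("targetRef"))
             for f in process.findall("bpmn:sequenceFlow", BPMN_NS)]
    return tasks, gateways, flows

if __name__ == "__main__":
    # "order-process.bpmn" is a hypothetical input file
    tasks, gateways, flows = read_bpmn_elements("order-process.bpmn")
    print(f"{len(tasks)} tasks, {len(gateways)} gateways, {len(flows)} sequence flows")
```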
Achieving GDPR Compliance of BPMN Process Models
Abstract
In an increasingly digital world, where the processing and exchange of personal data are key parts of everyday enterprise business processes (BPs), the right to data privacy is regulated and actively enforced in the European Union (EU) through the recently introduced General Data Protection Regulation (GDPR), whose aim is to protect EU citizens from privacy breaches. In this direction, GDPR is strongly influencing the way organizations must approach data privacy, forcing them to rethink and upgrade their BPs in order to become GDPR compliant. For many organizations, this can be a daunting task, since little has been done so far to easily identify privacy issues in BPs. To tackle this challenge, in this paper, we provide an analysis of the main privacy constraints in GDPR and propose a set of design patterns for capturing and integrating such constraints in BP models. Using BPMN (Business Process Modeling Notation) as the modeling notation, our approach achieves full transparency of privacy constraints in BPs, making it possible to ensure their compliance with GDPR.
Simone Agostinelli, Fabrizio Maria Maggi, Andrea Marrella, Francesco Sapio
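Illustrative note: the following toy sketch (an assumed, simplified encoding, not the authors' BPMN design patterns) shows the kind of check such patterns enable, namely verifying that every activity storing personal data is preceded on the control flow by a consent-collection activity.

```python
from collections import defaultdict

def predecessor_closure(flows, node):
    """All activities from which `node` is reachable, following sequence flows backwards."""
    rev = defaultdict(set)
    for src, tgt in flows:
        rev[tgt].add(src)
    seen, stack = set(), [node]
    while stack:
        for pred in rev[stack.pop()]:
            if pred not in seen:
                seen.add(pred)
                stack.append(pred)
    return seen

# Hypothetical process model: sequence flows plus two privacy-relevant annotations.
flows = [("start", "collect_consent"), ("collect_consent", "register_customer"),
         ("register_customer", "store_personal_data"), ("store_personal_data", "end")]
consent_activities = {"collect_consent"}
storage_activities = {"store_personal_data"}

for activity in storage_activities:
    if consent_activities & predecessor_closure(flows, activity):
        print(f"'{activity}' is preceded by a consent activity")
    else:
        print(f"Possible GDPR issue: '{activity}' is not preceded by any consent activity")
```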
How Could Systems Analysis Use the Idea of “Responsible Information System”?
Abstract
“Responsible Information Systems,” the theme of CAiSE 2019, is an intriguing idea that has not been explored in the readily available literature. After defining the term information system (IS), this paper applies an initial five-part framework to many examples of problematic IS to explore what the idea of a responsible information system (RIS) might mean and how it might be useful. That initial exploration leads to a focus on how the concept of responsibility applies to IS in a way that is useful for systems analysis and design (SA&D). This paper addresses that question by using a new set of ideas related to facets of work system capabilities. The application of those ideas to an EMR case study implies that they could be used to identify ways in which an IS might be more responsible. Overall, this paper illustrates that focusing on responsibilities related to facets of capabilities is more valuable than trying to characterize information systems as responsible or not.
Steven Alter
Discovering Customer Journeys from Evidence: A Genetic Approach Inspired by Process Mining
Abstract
Displaying the main behaviors of customers on a customer journey map (CJM) helps service providers to put themselves in their customers’ shoes. Inspired by the process mining discipline, we address the challenging problem of automatically building CJMs from event logs. In this paper, we introduce the CJMs discovery task and propose a genetic approach to solve it. We explain how our approach differs from traditional process mining techniques and evaluate it with state-of-the-art techniques for summarizing sequences of categorical data.
Gaël Bernard, Periklis Andritsos
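Illustrative note: the authors' genetic algorithm is not reproduced here; the sketch below only illustrates the general idea under assumed fitness and mutation choices: evolve a small set of representative journeys so that the total edit distance from every observed journey to its nearest representative is minimized.

```python
import random

def edit_distance(a, b):
    """Levenshtein distance between two journeys (sequences of touchpoints)."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

def fitness(representatives, journeys):
    """Lower is better: total distance from each journey to its closest representative."""
    return sum(min(edit_distance(j, r) for r in representatives) for j in journeys)

def discover_cjm(journeys, k=2, pop_size=20, generations=50, seed=42):
    """Genetic search for k representative journeys summarizing the observed ones."""
    random.seed(seed)
    population = [random.sample(journeys, k) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda reps: fitness(reps, journeys))
        survivors = population[: pop_size // 2]                    # selection
        children = []
        for reps in survivors:
            child = list(reps)
            child[random.randrange(k)] = random.choice(journeys)   # mutation
            children.append(child)
        population = survivors + children
    return min(population, key=lambda reps: fitness(reps, journeys))

journeys = [list("ABC"), list("ABD"), list("XYZ"), list("XYW"), list("ABC")]
print(discover_cjm(journeys))  # two representatives, one per behavioral group
```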
Keeping Data Inter-related in a Blockchain
Abstract
Blockchains are gaining substantial recognition as an alternative to traditional data storage systems due to their tamper-resistant and decentralized storage, independence from any centralized authority, and low entry barriers for new network participants. Presently, blockchains allow users to store transactions and data sets; however, they do not provide an easy way of creating and maintaining relationships between data entities. In this paper, we demonstrate a solution that helps software developers maintain relationships between inter-related data entities and datasets in a blockchain. Our solution runs over Ethereum’s Go implementation. This is the first step towards a database-management-like middleware system for blockchains.
Phani Chitti, Ruzanna Chitchyan
Finding Non-compliances with Declarative Process Constraints Through Semantic Technologies
Abstract
Business process compliance checking enables organisations to assess whether their processes fulfil a given set of constraints, such as regulations, laws, or guidelines. Whilst many process analysts still rely on ad-hoc, often handcrafted per-case checks, a variety of constraint languages and approaches have been developed in recent years to provide automated compliance checking. A salient example is Declare, a well-established declarative process specification language based on temporal logics. Declare specifies the behaviour of processes through temporal rules that constrain the execution of tasks. So far, however, automated compliance checking approaches typically report compliance only at the aggregate level, using binary evaluations of constraints on execution traces. Consequently, their results lack granular information on violations and their context, which hampers auditability of process data for analytic and forensic purposes. To address this challenge, we propose a novel approach that leverages semantic technologies for compliance checking. Our approach proceeds in two stages. First, we translate Declare templates into statements in SHACL, a graph-based constraint language. Then, we evaluate the resulting constraints on the graph-based, semantic representation of process execution logs. We demonstrate the feasibility of our approach by testing its implementation on real-world event logs. Finally, we discuss its implications and future research directions.
Claudio Di Ciccio, Fajar J. Ekaputra, Alessio Cecconi, Andreas Ekelhart, Elmar Kiesling
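Illustrative note: the Declare-to-SHACL translation is beyond a short snippet, but the fine-grained feedback the approach targets can be illustrated with an assumed check of the Declare template Response(a, b) that reports the positions of violating events in each trace instead of a single true/false verdict.

```python
def response_violations(trace, a, b):
    """Declare Response(a, b): every occurrence of a must eventually be followed by b.
    Returns the positions of the occurrences of a that violate the constraint."""
    return [i for i, event in enumerate(trace)
            if event == a and b not in trace[i + 1:]]

# Hypothetical event log: two traces of activity labels.
log = {
    "trace_1": ["register", "check", "pay", "ship"],
    "trace_2": ["register", "check", "register", "cancel"],
}

for trace_id, trace in log.items():
    violations = response_violations(trace, a="register", b="pay")
    if violations:
        print(f"{trace_id}: Response(register, pay) violated at positions {violations}")
    else:
        print(f"{trace_id}: compliant with Response(register, pay)")
```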
The SenSoMod-Modeler – A Model-Driven Architecture Approach for Mobile Context-Aware Business Applications
Abstract
The ubiquity and low prices of mobile devices like smartphones and tablets, as well as the availability of radio networks, give companies the opportunity to reorganize and optimize their business processes. These mobile devices can help users execute their process steps by showing instructions or by augmenting reality. Moreover, they can improve the efficiency and effectiveness of business processes by adapting the business process execution. This can be achieved by evaluating the user’s context via the many sensor data from a smart device and adapting the business process to the current context. The data, collected not only from internal sensors but also via networks from other sources, can be aggregated and interpreted to evaluate the context. To exploit the advantages of context recognition for business processes, a simple way to model this data collection and aggregation is needed. This would enable a more structured way to implement supportive mobile (context-aware) applications. Today, there is no modeling language that supports the modeling of data collection and aggregation into context and offers code generation for mobile applications via a suitable tool. Therefore, this paper presents a domain-specific modeling language for context and a model-driven architecture (MDA) based approach for mobile context-aware apps. The modeling language and the MDA approach have been implemented in an Eclipse-based tool.
Julian Dörndorfer, Florian Hopfensperger, Christian Seel
Blurring Boundaries
Towards the Collective Team Grokking of Product Requirements
Abstract
Software development has become increasingly software ‘product’ development, without an authoritative ‘customer’ stakeholder that many requirements engineering processes assume exists in some form. Many progressive software product companies today are empowering cross-functional product teams to ‘own’ their product – to collectively understand the product context, the true product needs, and manage its on-going evolution – rather than develop to a provided specification.
Some teams do this better than others and neither established requirements elicitation and validation processes nor conventional team leadership practices explain the reasons for these observable differences. This research examines cross-functional product teams and identifies factors that support or inhibit the team’s ability to collectively create and nurture a shared mental model that accurately represents the external product domain and its realities. The research also examines how teams use that collective understanding to shape development plans, internal and external communications, new team member onboarding, etc.
We are engaged with several software product companies, using a constructivist Grounded Theory method to pursue the research question.
Early results are emerging in the form of organisational factors within and surrounding the teams. One emerging observation relates to the degree to which functional distinctions are treated as demarcations or as blurred boundaries. The other observation concerns the impact that an expectation of mobility has on an individual’s sense of being part of the collective team versus solely being a functional expert. This also becomes a factor in the first observation.
The research is in progress, but early observations are consistent with a basic element of empathy: a certain blurring of boundaries is necessary for a period of time in order to better understand the other’s context. Future research will examine whether the observed organisational factors are pre-conditions for the team being able to understand the context of the product requirements collectively and deeply.
Rob Fuller
Towards an Effective and Efficient Management of Genome Data: An Information Systems Engineering Perspective
Abstract
The Genome Data Management domain is particularly complex in terms of volume, heterogeneity and dispersion; therefore, Information Systems Engineering (ISE) techniques are strictly required. We work with the Valencian Institute of Agrarian Research (IVIA) to improve its genomic analysis processes. To address this challenge, we present in this paper our Model-driven Development (MDD), conceptual-modeling-based experience. The selection of the most appropriate technology is an additional relevant aspect. NoSQL-based solutions were the technology that best fit the needs of the studied domain (the IVIA Research Centre using its Information System in a real-world industrial environment) and were therefore used. The contributions of the paper are twofold: to show how ISE in practice provides a better solution using conceptual models as the basic software artefacts, and to reinforce the idea that adequate technology selection must be the result of a practical ISE-based exercise.
Alberto García S., José Fabián Reyes Román, Juan Carlos Casamayor, Oscar Pastor
A Data Streams Processing Platform for Matching Information Demand and Data Supply
Abstract
Data-driven applications are adapted according to their execution context, and a variety of live data is available to evaluate this contextual information. The BaSeCaaS platform described in this demo paper provides data streaming and adaptation services to data-driven applications. The main features of the platform are separation of information requirements from data supply, model-driven configuration of data streaming services, and a horizontally scalable infrastructure. The paper describes the conceptual foundations of the platform as well as the design of data stream processing solutions, where the matching between information demand and data supply takes place. Light-weight open-source technologies are used to implement the platform. Application of the platform is demonstrated using a winter road maintenance case. The case is characterized by a variety of data sources and the need for quick reaction to changes in context.
Jānis Grabis, Jānis Kampars, Krišjānis Pinka, Jānis Pekša
Co-location Specification for IoT-Aware Collaborative Business Processes
Abstract
Technologies from the Internet of Things (IoT) create new possibilities to directly connect physical processes in the ‘real’ world to digital business processes in the administrative world. Objects manipulated in the real world (the ‘things’ of IoT) can directly provide data to digital processes, and these processes can directly influence the behavior of the objects. An increasing body of work exists on specifying and executing the interface between the physical and the digital worlds for individual objects. But many real-life business scenarios require that the handling of multiple physical objects is synchronized by digital processes. An example is the cross-docking of sea containers at ports: here we have containers, ships, cranes and trucks that are all ‘things’ in the IoT sense. Cross-docking processes only work when these ‘things’ are properly co-located in time and space. Therefore, we propose an approach to specify this co-location in multi-actor, IoT-aware business process models, based on the concept of spheres. We discuss consistency checking between co-location spheres and illustrate our approach with container cross-docking processes.
Paul Grefen, Nadja Brouns, Heiko Ludwig, Estefania Serral
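Illustrative note: the sphere notation itself is not shown here; the sketch below only captures, with assumed data, what a co-location requirement boils down to at run-time: all 'things' involved in a process step must be within a given radius of each other during a shared time window.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Observation:
    thing: str      # e.g., container, crane, truck
    x: float        # position in metres (local terminal coordinates)
    y: float
    t_from: float   # availability window in minutes
    t_to: float

def co_located(observations, radius):
    """True if all things are pairwise within `radius` and their availability
    windows share a non-empty time interval."""
    t_start = max(o.t_from for o in observations)
    t_end = min(o.t_to for o in observations)
    if t_start >= t_end:
        return False
    return all(hypot(a.x - b.x, a.y - b.y) <= radius
               for i, a in enumerate(observations) for b in observations[i + 1:])

# Hypothetical cross-docking step at a container terminal.
dock = [
    Observation("container_17", 10.0, 5.0, t_from=30, t_to=90),
    Observation("crane_2",      12.0, 7.0, t_from=0,  t_to=60),
    Observation("truck_A",       9.0, 4.0, t_from=45, t_to=120),
]
print(co_located(dock, radius=15.0))  # True: shared window 45-60, all within 15 m
```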
Towards Risk-Driven Security Requirements Management in Agile Software Development
Abstract
The focus on user stories in agile means non-functional requirements, such as security, are not always explicit. This makes it hard for the development team to implement the required functionality in a reliable, secure way. Security checklists can help but they do not consider the application’s context and are not part of the product backlog.
In this paper we explore whether these issues can be addressed by a framework which uses a risk assessment process, a mapping of threats to security features, and a repository of operationalized security features to populate the product backlog with prioritized security requirements. The approach highlights the relevance of each security feature to product owners while ensuring the knowledge and time required to implement security requirements is made available to developers. We applied and evaluated the framework at a Dutch medium-sized software development company with promising results.
Dan Ionita, Coco van der Velden, Henk-Jan Klein Ikkink, Eelko Neven, Maya Daneva, Michael Kuipers
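Illustrative note: the framework's repository and risk assessment are richer than any snippet; the sketch below, with invented threats and features, only shows the plumbing the abstract describes: risk scores attached to threats, a threat-to-feature mapping, and a product backlog populated with security stories ordered by risk.

```python
# Hypothetical risk assessment: threat -> (likelihood, impact), each on a 1-5 scale.
risks = {
    "SQL injection":        (4, 5),
    "Session hijacking":    (3, 4),
    "Insecure file upload": (2, 3),
}

# Hypothetical mapping of threats to operationalized security features (user stories).
features = {
    "SQL injection":        "Use parameterized queries for all database access",
    "Session hijacking":    "Set Secure/HttpOnly cookie flags and rotate session IDs on login",
    "Insecure file upload": "Validate file type and size on the server side",
}

def security_backlog(risks, features):
    """Return security requirements as backlog items, highest risk first."""
    items = [(likelihood * impact, threat, features[threat])
             for threat, (likelihood, impact) in risks.items()]
    return sorted(items, reverse=True)

for score, threat, story in security_backlog(risks, features):
    print(f"[risk {score:2d}] {threat}: {story}")
```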
Detection and Resolution of Data-Flow Differences in Business Process Models
Abstract
Business process models have an important role in enterprises and organizations, as they are used for insight, specification or configuration of business processes. After its initial creation, a process model is very often refined by different business modelers and software architects in distributed environments, in order to reflect changed requirements or changed business rules. At some point, the different resulting process model versions need to be merged into an integrated version. In order to enable comparison and merging, an approach that comprises difference representation as well as a discovery method for differences is needed. Regarding control-flow, such approaches already exist. As the specification of data-flow is also important, an approach for dealing with data-flow differences is needed as well. In this paper, we propose a model for the representation of data-flow differences as well as a method able to discover, visualize, and resolve data-flow differences.
Ivan Jovanovikj, Enes Yigitbas, Christian Gerth, Stefan Sauer, Gregor Engels
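Illustrative note: using an assumed, simplified encoding of data associations (not the authors' difference model), the sketch below computes per-activity data-flow differences between two versions of a process model as set differences over the data objects each activity reads and writes.

```python
# Two versions of a process model: activity -> (data objects read, data objects written).
version_a = {
    "Check order": ({"order"}, {"order status"}),
    "Ship goods":  ({"order", "address"}, {"shipment"}),
}
version_b = {
    "Check order": ({"order", "customer record"}, {"order status"}),
    "Ship goods":  ({"order"}, {"shipment", "invoice"}),
}

def data_flow_differences(v1, v2):
    """List added and removed read/write accesses per activity."""
    diffs = []
    for activity in sorted(set(v1) | set(v2)):
        reads1, writes1 = v1.get(activity, (set(), set()))
        reads2, writes2 = v2.get(activity, (set(), set()))
        for kind, old, new in (("read", reads1, reads2), ("write", writes1, writes2)):
            diffs += [f"{activity}: {kind} of '{obj}' added" for obj in new - old]
            diffs += [f"{activity}: {kind} of '{obj}' removed" for obj in old - new]
    return diffs

print("\n".join(data_flow_differences(version_a, version_b)))
```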
Modeling Reverse Logistics Networks: A Case Study for E-Waste Management Policy
Abstract
Reverse Logistics (RL) groups the activities involved in the return flows of products at the end of their economic life cycle. Enterprises and policy makers all over the world are currently researching, designing and putting in place strategies to recover and recycle products and raw materials, both for the benefit of the environment and to increase profits. However, the management of return flows is complex and unpredictable because consumer behavior introduces uncertainties in timing, quantity, and quality of the end-of-life products. To proactively cope with these concerns, we propose a metamodel that serves as a foundation for a domain specific modeling language (DSML) to understand RL processes and apply analysis techniques. This DSML can also be used to examine aspects such as RL strategies, capacity of the facilities, and incentives (e.g., sanctions and tax reliefs introduced by regulators). The core element of this approach is an extensible metamodel which can be used for analyzing specific applications of RL such as E-waste management.
Paola Lara, Mario Sánchez, Andrea Herrera, Karol Valdivieso, Jorge Villalobos
Using Conceptual Modeling to Support Machine Learning
Abstract
With the transformation of our society into a “digital world,” machine learning has emerged as an essential approach to extracting useful information from large collections of data. However, challenges remain for using machine learning effectively. We propose that some of these can be overcome using conceptual modeling. We examine a popular cross-industry standard process for data mining, commonly known as CRISP-DM, and show the potential usefulness of conceptual modeling at each stage of this process. The results are illustrated through an application to a management system for drug monitoring. Doing so demonstrates that conceptual modeling can advance machine learning by: (1) supporting the application of machine learning within organizations; (2) improving the usability of machine learning as decision tools; and (3) optimizing the performance of machine learning algorithms. Based on the CRISP-DM framework, we propose six research directions that should be explored to understand how conceptual modeling can support and extend machine learning.
Roman Lukyanenko, Arturo Castellanos, Jeffrey Parsons, Monica Chiarini Tremblay, Veda C. Storey
Analyzing User Behavior in Search Process Models
Abstract
Search processes constitute one type of Customer Journey Processes (CJP) as they reflect search (interaction) of customers with an information system or web platform. Understanding the search behavior of customers can yield invaluable insights for, e.g., providing a better search service offer. This work takes a first step towards the analysis of search behavior along paths in the search process models. The paths are identified based on an existing structural process model metric. A novel data-oriented metric based on the number of retrieved search results per search activity is proposed. This metric enables the identification of search patterns along the paths. The metric-based search behavior analysis is prototypically implemented and evaluated based on a real-world data set from the tourism domain.
Marian Lux, Stefanie Rinderle-Ma
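Illustrative note: to make the proposed data-oriented metric concrete, here is a small assumed example (not the authors' implementation): for each search activity observed along the paths, it aggregates the number of retrieved results, which is the signal used to spot patterns such as over-constrained queries that return nothing.

```python
from statistics import mean

# Hypothetical search sessions: each step is (search activity, number of retrieved results).
sessions = [
    [("keyword search", 420), ("filter by region", 37), ("filter by price", 0)],
    [("keyword search", 95),  ("filter by region", 12)],
]

def results_per_activity(sessions):
    """Average number of retrieved results per search activity over all observed paths."""
    per_activity = {}
    for session in sessions:
        for activity, n_results in session:
            per_activity.setdefault(activity, []).append(n_results)
    return {activity: mean(values) for activity, values in per_activity.items()}

for activity, avg in results_per_activity(sessions).items():
    print(f"{activity}: {avg:.1f} results on average")
```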
User-Centered and Privacy-Driven Process Mining System Design for IoT
Abstract
Process mining uses event data recorded by information systems to reveal the actual execution of business processes in organizations. By doing this, event logs can expose sensitive information that may be attributed back to individuals (e.g., reveal information on the performance of individual employees). Due to the GDPR, organizations are obliged to consider privacy throughout the complete development process, which also applies to the design of process mining systems. The aim of this paper is to develop a privacy-preserving system design for process mining. The user-centered view on the system design makes it possible to track who does what, when, why, where and how with personal data. The approach is demonstrated on an IoT manufacturing use case.
Judith Michael, Agnes Koschmider, Felix Mannhardt, Nathalie Baracaldo, Bernhard Rumpe
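Illustrative note: one basic building block of such a privacy-driven design, pseudonymizing the resource attribute before any analysis, can be sketched as follows; this is an assumed, simplified example, and the paper's system design covers considerably more than this single step.

```python
import hashlib

def pseudonymize(event_log, attribute="resource", salt="rotate-this-salt"):
    """Replace a personal attribute with a salted hash so events stay linkable per
    person for mining purposes without exposing the person's identity."""
    result = []
    for event in event_log:
        event = dict(event)
        digest = hashlib.sha256((salt + event[attribute]).encode()).hexdigest()[:10]
        event[attribute] = f"res_{digest}"
        result.append(event)
    return result

# Hypothetical IoT manufacturing event log.
log = [
    {"case": "C1", "activity": "assemble part", "resource": "alice", "timestamp": "2019-06-03T08:12"},
    {"case": "C1", "activity": "quality check", "resource": "bob",   "timestamp": "2019-06-03T09:40"},
]
for event in pseudonymize(log):
    print(event)
```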
ProcessCity
Visualizing Business Processes as City Metaphor
Abstract
Many organizations are focusing on digital transformation. To be effective, organizations need to streamline their own business processes in parallel with adopting digital technology in their businesses. Therefore, one of the decisive factors for successful digital transformation is BPM (Business Process Management). Based on data gathered from the information systems supporting their business processes, organizations should monitor the business processes on a regular basis, and then update them frequently in order to cope with changes in the business environment. In this paper, we propose ProcessCity, a 3D visualization tool, to support the comprehension of complex and large-scale business processes. By analyzing data from the information systems related to business processes, ProcessCity visualizes business processes using a city metaphor.
Shinobu Saito
Enhancing Big Data Warehousing for Efficient, Integrated and Advanced Analytics
Visionary Paper
Abstract
The existing capacity to collect, store, process and analyze huge amounts of data that is rapidly generated, i.e., Big Data, is characterized by fast technological developments and by a limited set of conceptual advances that guide researchers and practitioners in the implementation of Big Data systems. New data stores or processing tools frequently appear, proposing new (and usually more efficient) ways to store and query data (like SQL-on-Hadoop). Although very relevant, the lack of common methodological guidelines or practices has motivated the implementation of Big Data systems based on use-case driven approaches. This is also the case for one of the most valuable organizational data assets, the Data Warehouse, which needs to be rethought in the way it is designed, modeled, implemented, managed and monitored. This paper addresses some of the research challenges in Big Data Warehousing systems, proposing a vision that looks into: (i) the integration of new business processes and data sources; (ii) the proper way to achieve this integration; (iii) the management of these complex data systems and the enhancement of their performance; (iv) the automation of some of their analytical capabilities with Complex Event Processing and Machine Learning; and, (v) the flexible and highly customizable visualization of their data, providing an advanced decision-making support environment.
Maribel Yasmina Santos, Carlos Costa, João Galvão, Carina Andrade, Oscar Pastor, Ana Cristina Marcén
Business Process Compliance Despite Change: Towards Proposals for a Business Process Adaptation
Abstract
Business Process Compliance (BPC) denotes the execution of business processes in accordance with applicable compliance requirements. BPC can be satisfied through compliance processes that are integrated into the business process. In addition, compliance requirements place demands on IT components that are sometimes necessary to execute business or compliance processes. Various factors, such as outsourcing or business process reengineering, can lead to a change of processes or IT components and thus to a violation of BPC. Consequently, our goal is to provide proposals for a business process adaptation to further ensure BPC. Following the design science research methodology, we developed two artifacts to reach our goal. First, we developed a meta-model that represents the interrelations between alternative compliance process patterns and compliance processes that satisfy the same compliance requirement. Second, we developed a method to automatically put forward proposals for a business process adaptation through the integration of alternative compliance processes to further ensure BPC.
Tobias Seyffarth, Stephan Kuehnel, Stefan Sackmann
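Illustrative note: a highly simplified sketch of the adaptation idea, with assumed data structures rather than the authors' meta-model: if a compliance process depends on an IT component that is removed, propose an alternative compliance process that satisfies the same requirement without that component.

```python
# Hypothetical compliance processes per requirement, with the IT components they depend on.
compliance_processes = {
    "four-eyes principle on payments": [
        {"name": "digital signature workflow", "requires": {"signature service"}},
        {"name": "manual countersignature",    "requires": set()},
    ],
}

def adaptation_proposals(requirement, removed_component):
    """Alternative compliance processes that still work after removing an IT component."""
    return [p["name"] for p in compliance_processes[requirement]
            if removed_component not in p["requires"]]

print(adaptation_proposals("four-eyes principle on payments",
                           removed_component="signature service"))
# -> ['manual countersignature']
```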
Detecting and Identifying Data Drifts in Process Event Streams Based on Process Histories
Abstract
Volatile environments force companies to adapt their processes, leading to so-called concept drifts at run-time. Concept drifts do not only affect the control flow, but also the process data. An example is manufacturing processes, where a multitude of machining parameters are necessary to drive the production and might be subject to change due to, e.g., machine errors. Detecting such data drifts immediately can help to trigger exception handling in time and to avoid a gradual deterioration of the process execution quality. This paper provides online algorithms for concept drift detection in process data employing the concept of process histories. The feasibility of the algorithms is shown based on a prototypical implementation and the analysis of a real-world data set from the manufacturing domain.
Florian Stertz, Stefanie Rinderle-Ma
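Illustrative note: a bare-bones online check of a single numeric process attribute (an assumed example; the paper's algorithms work on process histories over full event streams) compares a sliding reference window with the most recent values and flags a drift when the recent mean moves away from the baseline.

```python
import random
from collections import deque
from statistics import mean, stdev

class DataDriftDetector:
    """Flag a drift when the mean of the recent values moves more than `threshold`
    standard deviations away from the mean of the reference window."""
    def __init__(self, window=50, threshold=3.0):
        self.reference = deque(maxlen=window)
        self.recent = deque(maxlen=window // 5)
        self.threshold = threshold

    def observe(self, value):
        self.recent.append(value)
        drift = False
        if len(self.reference) == self.reference.maxlen and len(self.recent) == self.recent.maxlen:
            sigma = stdev(self.reference) or 1e-9
            drift = abs(mean(self.recent) - mean(self.reference)) > self.threshold * sigma
        if not drift:
            self.reference.append(value)  # adapt the baseline only while behaviour is stable
        return drift

# Hypothetical stream of a machining parameter that drifts after event 200.
random.seed(1)
stream = [random.gauss(100, 2) for _ in range(200)] + [random.gauss(130, 2) for _ in range(20)]
detector = DataDriftDetector()
for i, spindle_speed in enumerate(stream):
    if detector.observe(spindle_speed):
        print(f"data drift detected at event {i}")
        break
```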
How Complex Does Compliance Get?
Abstract
Metrics have been applied in software engineering to manage the complexity of program code. This paper explores a new application area of the classic software engineering metrics: determining the complexity of compliance rules in business processes. Despite the critical voices noting their rather weak theoretical foundation, metrics provide effective measures for getting an overview of the concepts that may drive the complexity of a program. Their scope, scalability, and perceived ease of use do not dispel these doubts, but provide ample reason to believe that there is more to complexity analysis than numbers, and that a better methodological approach can help to reveal their true potential. Utilizing this potential would be of great importance, not only for establishing effective and efficient compliance management, but also for providing innovative solutions to digitalization trends and increasing data stacks. While some extant work has shown the applicability of software metrics for analyzing the complexity of process models, metrics have not been applied so far to manage the complexity of compliance rules. The approach presented in this paper provides an integrated view on the complexity of compliance rules that are modeled with conceptually different compliance languages. To this end, we review and discuss the literature on software metrics to derive the definitions needed to compute the complexity of compliance rules, and to refurbish the methodological foundation of software engineering metrics.
Andrea Zasada
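Illustrative note: as a taster of what such metrics look like when applied to compliance rules, the snippet below computes two elementary, Halstead-style counts (distinct operators and distinct operands) for rules written in a small, assumed LTL-like syntax; the paper's integrated view covers richer metrics and several compliance languages.

```python
import re

# Assumed LTL-style operator vocabulary, for illustration only.
OPERATORS = {"G", "F", "X", "U", "->", "&", "|", "!"}

def halstead_counts(rule):
    """Return (distinct operators, distinct operands, total tokens) of a compliance rule."""
    tokens = re.findall(r"->|\w+|[&|!()]", rule)
    operators = {t for t in tokens if t in OPERATORS}
    operands = {t for t in tokens if t not in OPERATORS and t not in "()"}
    return len(operators), len(operands), len(tokens)

rules = [
    "G(pay -> F(archive_invoice))",
    "G(!(approve & pay_out))",
]
for rule in rules:
    n1, n2, size = halstead_counts(rule)
    print(f"{rule}: {n1} distinct operators, {n2} distinct operands, {size} tokens")
```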
Backmatter
Metadata
Title
Information Systems Engineering in Responsible Information Systems
Editors
Dr. Cinzia Cappiello
Marcela Ruiz
Copyright Year
2019
Electronic ISBN
978-3-030-21297-1
Print ISBN
978-3-030-21296-4
DOI
https://doi.org/10.1007/978-3-030-21297-1
