
2019 | Book

Business Information Systems Workshops

BIS 2018 International Workshops, Berlin, Germany, July 18–20, 2018, Revised Papers


About this book

This book constitutes revised papers from the seven workshops and one accompanying event which took place at the 21st International Conference on Business Information Systems, BIS 2018, held in Berlin, Germany, in July 2018. Across all workshops, 58 out of 122 submitted papers were accepted.

The workshops included in this volume are:
AKTB 2018 - 10th Workshop on Applications of Knowledge-Based Technologies in Business
BITA 2018 - 9th Workshop on Business and IT Alignment
BSCT 2018 - 1st Workshop on Blockchain and Smart Contract Technologies
IDEA 2018 - 4th International Workshop on Digital Enterprise Engineering and Architecture
IDEATE 2018 - 3rd Workshop on Big Data and Business Analytics Ecosystems
SciBOWater 2018 - Scientific Challenges & Business Opportunities in Water Management
QOD 2018 - 1st Workshop on Quality of Open Data
In addition, one keynote speech in full-paper length and contributions from the Doctoral Consortium are included.

Table of Contents

Frontmatter

Keynote Speech

Frontmatter
Business Information Systems for the Cost/Energy Management of Water Distribution Networks: A Critical Appraisal of Alternative Optimization Strategies

The objective of this paper is to show how smart water networks enable new strategies for the energy cost management of the network, more precisely Pump Scheduling Optimization. This problem is traditionally solved using mathematical programming and, more recently, nature-inspired metaheuristics. The schedules obtained by these methods are typically not robust with respect to either random variations in the water demand or the non-linear features of the model. The authors consider three alternative optimization strategies: (i) global optimization of black-box functions, based on a Gaussian model and the use of the hydraulic simulator (EPANET) to evaluate the objective function; (ii) Multi Stage Stochastic Programming, which models the stochastic evolution of the water demand through a scenario analysis to solve an equivalent large-scale linear program; and finally (iii) Approximate Dynamic Programming, also known as Reinforcement Learning. With reference to real-life experimentation, the last two strategies offer more modeling flexibility, are demand responsive and typically result in more robust solutions (i.e. pump schedules) than mathematical programming. More specifically, Approximate Dynamic Programming works under minimal modelling assumptions and can effectively leverage on-line data availability for robust on-line Pump Scheduling Optimization.

Antonio Candelieri, Bruno G. Galuzzi, Ilaria Giordani, Riccardo Perego, Francesco Archetti

AKTB Workshop

Frontmatter
Enhancing Clinical Decision Support Through Information Processing Capabilities and Strategic IT Alignment

Hospitals invest heavily in Healthcare IT to improve the efficiency of hospital operations, improve practitioner performance and enhance patient care. The literature suggests that decision support systems may contribute to these benefits in clinical practice. By building upon the resource-based view of the firm (RBV), we claim that hospitals that invest in a so-called information processing capacity (IPC)—the ability to gather complete patient data and information and enhance clinical processes—will substantially enhance their clinical decision support capability (CDSC). After controlling for common method bias, we use Partial Least Squares SEM to analyze our primary claim. Following the resource and capability-based view of the firm, we test our hypotheses on a cross-sectional data sample of 720 European hospitals. We find that there is a positive association between a hospital’s IPC and clinical decision support capability (CDSC). IT alignment moderates this relationship. All included control variables showed nonsignificant results. Extant research has not been able to identify those IT-enabled capabilities that strengthen CDSC in hospital practice. This study contributes to this particular gap in the literature and advances our understanding of how to efficaciously deploy CDSC in clinical practice.

Rogier van de Wetering
Ontology-Based Fragmented Company Knowledge Integration: Possible Approaches

Companies have multiple business processes, some of which are supported by knowledge described via ontologies. However, due to their nature, the processes use different knowledge notations, which causes the problem of integrating such fragmented heterogeneous knowledge. The paper investigates the problem of developing a single multi-domain ontology for integrating company knowledge, taking into account differences between the terminologies and formalisms used in various business processes. Different options for designing ontologies covering multiple domains are considered. Three of them: (i) ontology localization/multilingual ontologies, (ii) granular ontologies, and (iii) ontologies with temporal logics, are considered in detail and analyzed.

Alexander Smirnov, Nikolay Shilov
Enhancing Teamwork Behavior of Services

Nowadays, many large software systems developed for business are mainly built from services, leveraging the benefits of interoperability. However, the development of new technologies such as Cloud Computing, the Internet of Things and Cyber Physical Systems creates additional concerns that need to be integrated into existing approaches to modeling services, their interaction and cooperation. In this research, we propose an approach for web services to cooperate automatically using role modeling, enhancing service interoperability through novel service teamwork roles. Teamwork’s contribution to organizational performance has attracted attention from various research groups across several disciplines. In this direction, we contribute by determining the dominant teamwork roles that prevail during service group cooperation, linking them with fifteen major teamwork factors recognized in agent-based teamwork, and indicating their primary teamwork behavior. A Monte Carlo simulation presents results on how teamwork roles could affect and benefit service cooperation.

Paraskevi Tsoutsa, Panos Fitsilis, Omiros Ragos
Business Rule Optimisation: Problem Definition, Proof-of-Concept and Application Areas

Business rules have been applied to a wide range of manufacturing and services organisations. Decisions around quality control, customer acceptance, and warranty claims are typical applications in day-to-day operation. They all have two things in common: there are multiple assessment criteria such as profit, revenue, and customer satisfaction, and the quality of the decisions made has an impact on the performance and sustainability of the organisation. This paper presents a solution to the novel problem of optimising the structure and parameters of automated business rules where there is the possibility to refer some decisions to a human expert. The difference here is that although the business rules are deterministic and repeatable, human decisions are generally neither. This research problem is multi-disciplinary, and the solution comprises elements of business process management, mathematical optimisation, simulation, machine learning, probability, and psychology. The paper describes a potential solution, presents some initial results when applied to a problem in the financial services sector, and identifies further areas of application.

Alan Dormer
Profiling User Colour Preferences with BFI-44 Personality Traits

Nowadays, a lot of attention is paid to personalisation of services and content presented to a user. Personalisation is based on user profiles built on top of diverse data: logs, texts, pictures, etc. The goal of the paper is to analyse the connection between a user’s personality type and his or her colour preferences, to enable personalisation. To reach this goal, correlations between the outcomes of the BFI-44 Personality Traits questionnaire and colour preferences inspired by Plutchik’s Wheel of Emotions were analysed for individual users. 144 respondents were surveyed with a questionnaire to enable the analysis. The results were analysed using linear models for different personality traits. The outcomes, together with their quality assessment, are presented in the paper.
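The "linear models for different personality traits" mentioned in the abstract can be sketched as a simple ordinary-least-squares fit of a colour preference on a trait score. The sketch below uses synthetic stand-in data (not the study's survey responses); the trait scale and effect size are assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data (NOT the study's survey): one BFI-44 trait
# score per respondent and a preference rating for one colour.
n = 144                                  # same sample size as the survey
trait = rng.normal(3.0, 0.7, size=n)     # e.g. an extraversion score
colour_pref = 0.8 * trait + rng.normal(0, 0.5, size=n)  # assumed relation

# Ordinary least squares: colour_pref ~ intercept + slope * trait
X = np.column_stack([np.ones(n), trait])
coef, *_ = np.linalg.lstsq(X, colour_pref, rcond=None)
intercept, slope = coef
```

With a fitted slope per (trait, colour) pair, the quality of each model can then be assessed, e.g. via residual variance or R², as the abstract's "quality assessment" suggests.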

Magdalena Wieloch, Katarzyna Kabzińska, Dominik Filipiak, Agata Filipowska
The Benefits of Modeling Software-Related Exceptional Paths of Business Processes

A business process is a collection of activities leading to increased customer satisfaction. With organizations depending on software reliability, this satisfaction might be dramatically reduced by a growing number of failures that harm the business process related to the client. In this paper, we identify the reasons for organizations to model their business processes including exceptional paths. We begin with a summary of existing research on the benefits of business process modeling. Then we analyze the current state of techniques and tools for mapping exceptional paths in processes. By carrying out a case study, we verify what conditions should be met for an organization to benefit from visualizing the process paths taken as a result of software failure occurrence.

Krzysztof Gruszczyński, Bartosz Perkowski
Process Mining of Periodic Rating Scale Survey Data Using Analytic Hierarchy Process

The main purpose of our research is to propose an original algorithm to evaluate the dynamic behavior of processes from data collected through periodically repeated surveys based on Likert scale questions. This approach uses AHP (Analytic Hierarchy Process) for assessing the factors influencing the process behavior. Our idea is to use the aggregated periodic rating scale data as alternative inputs for AHP evaluation. The practical usefulness of the proposed process quality evaluation technique was proved by examining service quality changes at a particular Polish rehabilitation hospital over the time frame from 2008 to 2017.
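AHP derives priority weights for such influencing factors from a pairwise comparison matrix. A minimal sketch of that step, using the common geometric-mean (row products) approximation and illustrative comparison values rather than the paper's data:

```python
import numpy as np

def ahp_priorities(pairwise):
    """Derive AHP priority weights from a pairwise comparison matrix
    using the geometric-mean approximation: the i-th weight is the
    geometric mean of row i, normalized so the weights sum to 1."""
    m = np.asarray(pairwise, dtype=float)
    geo = m.prod(axis=1) ** (1.0 / m.shape[0])  # row geometric means
    return geo / geo.sum()                      # normalize to sum to 1

# Three factors compared pairwise on Saaty's 1-9 scale
# (illustrative values: factor 1 moderately dominates the others).
matrix = [
    [1,     3,     5],
    [1 / 3, 1,     3],
    [1 / 5, 1 / 3, 1],
]
weights = ahp_priorities(matrix)  # e.g. roughly [0.64, 0.26, 0.10]
```

In the paper's setting, the aggregated periodic Likert-scale data would play the role of the alternatives being ranked against such factor weights.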

Dalia Kriksciuniene, Virgilijus Sakalauskas, Roman Lewandowski

BITA Workshop

Frontmatter
Determinants to Benefit from Enterprise Architecture Management – A Research Model

A successful digital transformation in enterprises requires surpassing infrastructural flexibility within firms and high IT competency to accomplish changing business requirements. Digital enterprises are challenged to combine business and IT to gain from existing technological achievements. Previous studies showed that there are certain factors influencing the benefit of Enterprise Architecture Management (EAM). However, some further influencing factors arising from the digital transformation have not been taken into consideration yet. An alternative research approach investigates more factors and helps to gain deeper insight into impact factors. This paper presents a first approach to investigating additional factors and their impact on EAM. The approach is based on a profound literature review used to build a new empirical research model. In addition, the indicators were examined in a case of industrial digital transformation. It is shown that the factors aggregated into the determinants IT Landscapes, internal as well as external Business Environments, and the level of EAM Establishment have a substantial impact on the benefit of EAM in enterprises.

Ralf-Christian Härting, Christopher Reichstein, Kurt Sandkuhl
Business and Information Technology Alignment Measurement - A Recent Literature Review

Since technology has been involved in the business context, Business and Information Technology Alignment (BITA) has been one of the main concerns of IT and Business executives and directors due to its importance to overall company performance, especially today in the age of digital transformation. Several models and frameworks have been developed for BITA implementation and for measuring their level of success, each one with a different approach to this desired state. The BITA measurement is one of the main decision-making tools in the strategic domain of companies. In general, the classical-internal alignment is the most measured domain and the external environment evolution alignment is the least measured. This literature review aims to characterize and analyze current research on BITA measurement with a comprehensive view of the works published over the last 15 years to identify potential gaps and future areas of research in the field.

Leonardo Muñoz, Oscar Avila
Using Business Process Modelling to Improve Student Recruitment in UK Higher Education

We consider how the student recruitment process might be improved to optimize performance with particular reference to the clearing process. A Design Science Research (DSR) methodology was used which entails learning through artefact production and data was collected from interviews, observation and document analysis. The logic of the clearing process was modelled using a process-oriented modelling technique. An ‘As Is’ clearing process model was created to analyze the process, and a ‘To Be’ clearing process model developed. The improved model has been verified by domain experts and promises to enhance the clearing process in terms of cost saving and resource utilization.

Oluwatoyin Fakorede, Philip Davies, David Newell
Stakeholder-Oriented and Enterprise Architecture Driven Cloud Service Selection

In the last decade the number of cloud services has grown significantly. However, it is still a challenge for enterprises to describe functional requirements in a user-friendly way in order to select cloud services. This study introduces an approach to increase the practical relevance of business IT alignment research. We integrate insights from organizational buying behaviour into enterprise architecture modeling. The result consists of a concept and web-based platform. It enables business users without expert knowledge in modeling to exploit the potential of enterprise architecture for cloud service selection.

Sabrina Kurjakovic, Knut Hinkelmann
Business-IT Alignment Improvement in Co-creation Value Networks: Design of a Reference Model-Based Support

Prior research has not adequately addressed business-IT alignment (BITA) improvement, especially in the business network situation of a co-creation value network (VN). In a VN setting, IT is regarded as a major facilitator of actors’ collaboration to realize their joint objectives, i.e. to deliver a seamless customer experience by providing mass-customized integrated solutions. To use IT effectively, a sufficient degree of BITA for the key capabilities of a VN is required. Furthermore, BITA as a moving target should be improved continuously over time. In this paper, BITA improvement in a VN setting is studied. We focus on BITA improvement for the key capabilities of a VN and design support for it. To this end, we adopt a dynamic capability perspective due to its ability to explain how organizations can improve their operational capabilities and processes to adjust to a changing environment. We design a reference model-based approach that enhances the ‘business process management’ dynamic capability of a VN by enabling co-development of business processes with their supporting IT-based systems. This co-development facilitates BITA improvement. This paper presents the research process of the design of our support. As a proof of concept, the results for one of the key capabilities of a VN (i.e., customer understanding) are presented and discussed.

Samaneh Bagheri, Rob Kusters, Jos Trienekens, Paul W. P. J. Grefen
Ontology Development Strategies in Industrial Contexts

Knowledge-based systems are used extensively to support the functioning of enterprises. Such systems need to reflect an aligned business-IT view and create a shared understanding of the domain. Ontologies are used as part of many knowledge-based systems. The industrial context affects the process of ontology engineering in terms of business requirements and technical constraints. This paper presents a study of four industrial cases that included ontology development. The study resulted in the identification of seven factors that were used to compare the industrial cases. The most influential factors were found to be reuse of ontologies/models, the stakeholder groups involved, and the level of applicability of the ontology. Finally, four recommendations were formulated for projects intended to create shared understanding in an enterprise.

Vladimir Tarasov, Ulf Seigerroth, Kurt Sandkuhl

BSCT Workshop

Frontmatter
Blockchain Backed DNSSEC

The traditional Domain Name System (DNS) does not include any security features, making it vulnerable to a variety of attacks discovered in the 1990s. The Domain Name System Security Extensions (DNSSEC) attempted to address these concerns and extended the DNS protocol to add origin authentication and message integrity whilst remaining backwards compatible. Yet despite the fact that issues with DNS have been well known since the late 90s, there has been very little adoption of DNSSEC. This paper proposes a new system using blockchain technology. Our system aims to provide the same security benefits as DNSSEC whilst addressing the concerns that led to its slow adoption.

Scarlett Gourley, Hitesh Tewari
The Proposal of a Blockchain-Based Architecture for Transparent Certificate Handling

Diplomas have high importance in society since they serve as official proofs of education. Therefore, it is not surprising that forgeries of such documents have become commonplace. Thus, employers ordinarily have diplomas manually verified by the issuer. Blockchain creates opportunities to overcome these obstacles, as it has revolutionized the way in which people interact with each other. Based on this, a holistic solution that includes issuance and verification of diplomas can be realized. This paper presents a proposal of a blockchain-based system for managing diplomas called UZHBC (University of ZuricH BlockChain).

Jerinas Gresch, Bruno Rodrigues, Eder Scheid, Salil S. Kanhere, Burkhard Stiller
Blockchain-Based Distributed Marketplace

Developments in Blockchain technology have enabled the creation of smart contracts; i.e., self-executing code that is stored and executed on the Blockchain. This has led to the creation of distributed, decentralised applications, along with frameworks for developing and deploying them easily. This paper describes a proof-of-concept system that implements a distributed online marketplace using the Ethereum framework, where buyers and sellers can engage in e-commerce transactions without the need of a large central entity coordinating the process. The performance of the system was measured in terms of cost of use through the concept of ‘gas usage’. It was determined that such costs are significantly less than that of Amazon and eBay for high volume users. The findings generally support the ability to use Ethereum to create a distributed on-chain market, however, there are still areas that require further research and development.
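"Gas usage" translates into a monetary cost via Ethereum's standard fee formula: the fee equals gas used multiplied by the gas price. A minimal sketch with illustrative numbers (not figures from the paper):

```python
# Ethereum's fee formula: fee (ETH) = gas_used * gas_price,
# where gas price is usually quoted in gwei (1 ETH = 1e9 gwei).
GWEI_PER_ETH = 1_000_000_000

def tx_fee_eth(gas_used: int, gas_price_gwei: float) -> float:
    """Convert a transaction's gas usage into a fee in ETH."""
    return gas_used * gas_price_gwei / GWEI_PER_ETH

# Illustrative numbers (not from the paper): a marketplace listing
# transaction consuming 120,000 gas at a gas price of 20 gwei.
fee = tx_fee_eth(120_000, 20)   # = 0.0024 ETH
```

Comparisons like the paper's against Amazon or eBay commissions follow by converting such per-transaction fees to fiat at the prevailing ETH exchange rate.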

Oliver R. Kabi, Virginia N. L. Franqueira
BlockChain Based Certificate Verification Platform

This research is still in progress and focuses on using the blockchain for the verification of the authenticity of issued certificates. In this first stage of the research we present a prototype that allows the registration of academic institutions and their respective institutes/faculties, the registration of student cohorts and the issuing of certificate awards. The issued certificates are registered on the blockchain so that any third party who needs to verify the authenticity of a certificate can do so, independently of the academic institution, even in the event that such an institution has closed. The next phase of this research aims to extend the prototype to the registration of medical records on the blockchain, with a focus on the privacy of sensitive data and on allowing the owner of the information to control user access to the documents. The final phase of the research will involve gathering user and corporate feedback on the proposed prototypes.

Axel Curmi, Frankie Inguanez
Blockchain-Based Management of Shared Energy Assets Using a Smart Contract Ecosystem

Energy markets are facing challenges regarding a changing energy generation and consumption structure, as well as the coordination of an increasing number of assets, devices and stakeholders. We address these challenges by introducing a blockchain-based smart contract ecosystem as our contribution to extant research. Apart from blockchain-specific benefits (e.g. data integrity and smart contract execution), the ecosystem fosters energy-blockchain research through the creation of digital assets. Doing so, we address research gaps identified by previous authors. From our work, we can derive economic implications regarding the foundation of local energy markets, the incentivization of grid-stabilizing behavior and the settlement of collective action problems.

Manuel Utz, Simon Albrecht, Thorsten Zoerner, Jens Strüker
Research of Ethereum Mining Hash Rate Dependency on GPU Hardware Settings

Cryptocurrency mining with GPUs for profit involves finding the best trade-off between mining power and energy consumption. In this research we explore the influence of different GPU settings, namely memory clock and core voltage, on mining performance measured as hash rate. Additionally, we look for opportunities to lower power consumption. An experiment using the combined power of five commonly available graphics cards showed improved hash rates with increased memory clock frequencies, and lower power consumption with decreased core voltages. However, these dependencies are not linear, and other factors, like different memory chips on otherwise similar graphics card models, may give contradicting results.

Paulius Danielius, Tomas Savenas, Saulius Masteika
Practical Deployability of Permissioned Blockchains

Ever since the evolution of cryptocurrencies, there has been profound interest in employing the underlying Blockchain technology for enterprise applications. Enterprises are keen on embracing the advantages of Blockchain in applications ranging from FinTech, Supply chain, IoT, Identity Management, Notary, Insurance and to many other domains. Blockchain is often spoken of as the third disruption after computers and the internet, and is being studied for application in several domains. A blockchain, as used in most cryptocurrencies, does not require any authorization for participants to join or leave the system, and hence is referred to as a permission-less blockchain. However, enterprise applications cannot operate in such models. Enterprise applications operate in a regulated, permissioned blockchain setting. This paper provides an industry focused insight into the practicality and feasibility of permissioned blockchains in real-world applications. In particular, we consolidate some non-trivial challenges that should be addressed in making the permissioned blockchain practically deployable in enterprises.

Nitesh Emmadi, R. Vigneswaran, Srujana Kanchanapalli, Lakshmipadmaja Maddali, Harika Narumanchi
Decentralized Energy Networks Based on Blockchain: Background, Overview and Concept Discussion

This paper provides a snapshot of the globally ongoing decentralization of (business) relations in the energy sector. This tendency can be observed in other domains as well and is accompanied by new digital technological developments. Blockchain technology is assigned disruptive potential when it comes to realizing these decentralization ideas. The hype about Blockchain is mainly company-driven, without a solid academic basis yet. The authors are currently involved in several research efforts to utilize distributed energy resources like photovoltaic systems, batteries and electric cars for the setup of energy communities and marketplaces. The paper, therefore, presents detailed investigations of the background and motivations for decentralization and the building of (local) energy communities and (peer-to-peer) marketplaces for sustainable utilization of renewable energies. An overview of recent related Blockchain-based works is presented, and the current state and feasibility of the envisioned decentralized solutions are discussed. In this way, the work aims to contribute a research-based decision foundation for upcoming Blockchain-based decentralization efforts.

Mario Pichler, Marcus Meisel, Andrija Goranovic, Kurt Leonhartsberger, Georg Lettner, Georgios Chasparis, Heribert Vallant, Stefan Marksteiner, Hemma Bieser
Enabling Data Markets Using Smart Contracts and Multi-party Computation

With the emergence of data markets, data have become an asset that is used as part of transactions. Current data markets rely on trusted third parties to manage the data, creating single points of failure with possibly disastrous consequences on data privacy and security. The lack of technical solutions to enforce strong privacy and security guarantees leaves the data markets’ stakeholders (e.g., buyers and sellers of data) vulnerable when they transact data. Smart Contracts and Multi-Party Computation represent examples of emerging technologies that have the potential to guarantee the desired levels of data privacy and security. In this paper, we propose an architecture for data markets based on Smart Contracts and Multi-Party Computation and present a proof of concept prototype developed to demonstrate the feasibility of the proposed architecture.
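The Multi-Party Computation side of such an architecture can be illustrated with additive secret sharing, one of the basic MPC building blocks. This sketch (a hypothetical three-party setup, not the authors' prototype) shows how parties can jointly sum two sellers' values without any single party ever seeing either input:

```python
import secrets

P = 2**61 - 1  # public prime modulus; all arithmetic is done mod P

def share(value: int, n: int = 3) -> list[int]:
    """Split a secret into n additive shares: the shares sum to the
    secret mod P, but any n-1 of them reveal nothing about it."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Recombine all shares to recover the secret."""
    return sum(shares) % P

# Two data sellers each distribute shares of their value to three
# compute parties; each party adds its two shares locally, so the sum
# is computed without any party learning either individual input.
a_shares = share(40)
b_shares = share(2)
sum_shares = [(x + y) % P for x, y in zip(a_shares, b_shares)]
total = reconstruct(sum_shares)   # 42
```

In a data-market setting, a Smart Contract could then record commitments to the shared inputs and pay out once the compute parties publish the reconstructed result.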

Dumitru Roman, Kien Vu
Opportunities, Challenges, and Future Extensions for Smart-Contract Design Patterns

Blockchains enable the trustless establishment of long-term consensus. The primary paradigm for extending this capability to generalized use cases is smart contracts. Smart contracts have the advantages of trustlessness, immutability, transparency, censorship-resistance, and DDoS resistance, but suffer from immutability, chain-boundedness, high cost of storage and execution, and poor parallelizability. While the advantages of smart contracts create many opportunities, their unique properties impose important constraints. A suite of design patterns are therefore proposed as one methodology for addressing these constraints while taking full advantage of the opportunities that smart contracts provide.

Carl R. Worley, Anthony Skjellum
A Multichain Architecture for Distributed Supply Chain Design in Industry 4.0

The Fourth Industrial Revolution is centered around a self-organized production. Cyber-physical systems can be used to collect and share data for tracking, automation and control as well as evaluation and documentation. This has a wide-reaching impact on business models. Instead of a linear approach, concentrating on a single company, the focus needs to be on the complete production ecosystem represented by a dynamic and self-organized network, including the supply chain. The interconnectedness of the cyber-physical systems in these networks is rooted in the Internet of Things, but also shared business processes. Performance, stability, security as well as data integrity and access control hereby represent a major contributing factor, calling for a new concept of information processing systems.

The blockchain is a distributed ledger combining cryptographic and game-theoretic concepts, which enable immutable transactions and automatic consensus of the parties involved about its state. Blockchains have evolved as a fast-developing technology, promising increased efficiency and security in many scenarios, especially use cases that primarily rely on all kinds of transactions.

The paper follows a design science approach, examining the implications of blockchain and the industrial Internet of Things in Industry 4.0 (I40) on supply chain management. I40’s implications on supply chains are discussed and connected with favorable characteristics of blockchain technology. Based on this analysis, requirements for a decentralized enterprise information processing system are derived, resulting in a reference model for distributed supply chains of I40.

Kai Fabian Schulz, Daniel Freund
Invisible BlockChain and Plasticity of Money – Adam Smith Meets Darwin to Buy Crypto Currency

This paper will attempt to compare evolution in nature (based on Darwin’s theory of evolution by natural selection), with similar processes happening in money, especially in the current iteration of money (cryptocurrencies), and the underlying blockchain technology. This paper will review the history and evolution of money up to this point, shortly explain Darwin’s theory of evolution by natural selection, compare the similarities between evolution by natural selection in nature with evolution in blockchain technology and cryptocurrencies, and will examine the potential paths for the blockchain technology and cryptocurrencies to evolve in the future.

Zeeshan-ul-hassan Usmani, Andre Waddell, Rytis Bieliauskas
Blockchain-Based Internet Voting: Systems’ Compliance with International Standards

Blockchain has emerged as the technology claiming to change the way services are delivered nowadays, from banking to public administrations. The field of elections, and specifically internet voting, is not an exception. As a distributed audit layer, blockchain technology is expected to provide more transparency to such services, while preventing a single agent from tampering with electoral electronic data. Elections require a challenging combination of privacy and integrity. However, a system that stored ballots in the clear on a public blockchain, while transparent, would not comply with secret suffrage, one key principle of democratic elections. For this reason, the motivation of this paper is to assess the potential of blockchain technology in internet voting. To this end we analyse several blockchain-based internet voting systems and their degree of compliance against a set of commonly accepted properties of internet voting, namely those of the Council of Europe. This has allowed us to identify a set of common features and challenges regarding how blockchain can contribute to the conduct of e-enabled elections.

Jordi Cucurull, Adrià Rodríguez-Pérez, Tamara Finogina, Jordi Puiggalí
Chaining Property to Blocks – On the Economic Efficiency of Blockchain-Based Property Enforcement

Within the last two years, much has been written about blockchain and distributed ledger technologies. However, few actual use cases related to real-world phenomena have been proffered in that literature. Most applications remain entirely in the virtual world, or their descriptions remain at a very abstract and speculative level. In this paper we study one possible application of the powers of blockchain technology to real-world problems. In particular, we study the economic feasibility, effectiveness and efficiency of blockchain-based registries for property of chattel and the technical enforcement of the rights listed in such registries. For the example of smartphones, we show that registering them and their owners in a blockchain may achieve one of the most desirable results of registration: theft becomes less attractive. An additional advantage, which we also briefly touch on, is that the use of smartphones as collateral without possession may become possible. We study under what conditions the benefits from registration are feasible in a blockchain-based distributed ledger and why they are not implemented under less complex technologies such as registries owned and administered by producers of smartphones.

Janina da Costa Cruz, Aenne Sophie Schröder, Georg von Wangenheim
On the Future of Markets Driven by Blockchain

When an effort is made to understand how the blockchain may influence the evolution of economies, a distinction between marketing promises and neutral assessments is required. This work aims to review the prospective development of three key economic principles under the assumed constitution of a blockchain future. It is analyzed how the concepts of intermediation, economic transparency and economic automation exhibit characteristics under possibly disruptive impact. Their present situation is laid out and extrapolated to gain a reasoned insight into economic development driven by blockchain technology as currently promised. An argument is made that the consumers’ view of innovative services will be shaped by nescience of the actual data structure in use.

Mario Cichonczyk
Smart Contract-Based Role Management on the Blockchain

Role-based access management is essential in today's business applications. The need for such access control is indisputable; implementing it in a centralized way, on the other hand, is not ideal. An improvement could be a decentralized, smart-contract-based approach. This paper examines whether corporate applications can use distributed-ledger-based authorization systems to benefit from the positive properties of blockchain technology without losing the possibilities and strengths of existing central authorization techniques. The benefit of a prototype with a decentralized approach is to serve as a basis for future decentralized company developments. This paper deals with the implementation and validation of a blockchain-based access control solution for decentralized applications. The feasibility of this on-chain solution for role-based access control (RBAC) is verified through a proof of concept using a suitable distributed ledger platform. The implementation of the authorization system aims to fulfill the evaluation requirements and does not claim to be used as a corporate service.

Cornelius Ihle, Omar Sanchez
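
The on-chain RBAC logic the abstract above describes can be pictured with a minimal Python sketch. All names here (`RoleRegistry`, `grant_role`, `require_role`, the address strings) are illustrative assumptions, not the paper's actual smart-contract API; a real deployment would express the same checks in a contract language on the chosen ledger platform.

```python
class RoleRegistry:
    """Minimal sketch of smart-contract-style role-based access control.

    The contract owner grants and revokes roles; any caller is checked
    against a required role before a protected action may proceed.
    """

    def __init__(self, owner):
        self.owner = owner
        self.roles = {}  # address -> set of role names

    def grant_role(self, caller, address, role):
        # On-chain, `caller` would be the verified transaction sender.
        if caller != self.owner:
            raise PermissionError("only the owner may grant roles")
        self.roles.setdefault(address, set()).add(role)

    def revoke_role(self, caller, address, role):
        if caller != self.owner:
            raise PermissionError("only the owner may revoke roles")
        self.roles.get(address, set()).discard(role)

    def require_role(self, caller, role):
        # Guard clause a protected contract function would call first.
        if role not in self.roles.get(caller, set()):
            raise PermissionError(f"{caller} lacks role {role}")
        return True
```

The point of the decentralized variant is that this role table lives on the ledger, so every grant and revocation is auditable rather than hidden inside a central directory service.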
Industrial Socio-Cyberphysical System’s Consumables Tokenization for Smart Contracts in Blockchain

As a result of the development of the Industry 4.0 concept, the physical characteristics of production and the production process have been transferred to digital form, a process also known as production digitalization. Digitalization allows creating so-called digital twins of production components, among which are smart objects, including machines (production robots), people, software services, processed materials and manufactured products. Digital twins display not only the current state of a component but also predict its actions according to the current situation by creating behavior patterns. The interaction of digital twins is carried out through a cloud Internet of Things platform using an ontological representation to ensure interoperability. Interaction via a cloud IoT platform causes problems with providing trust in the distribution of consumables between components of the industrial IoT system. One of the recent solutions to such problems is the use of blockchain technology. In this paper, an example of the integration of industrial IoT and blockchain technology is presented. Each component of the system is represented by a corresponding digital twin, capable of interacting on the fly with ontologies in the IoT and blockchain network. To solve the problem of control and exchange of consumable resources in IIoT, this paper proposes a classification and principles of resource tokenization depending on their class. The tokenization has been verified by creating smart contracts for consumables allocation on the Hyperledger Fabric platform.

Nikolay Teslya
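
The core of consumables tokenization as summarized above is representing a class of resource as a transferable token whose balance moves between digital twins. The following Python sketch illustrates that idea under assumed names (`ConsumableToken`, `transfer`); it is not the paper's Hyperledger Fabric chaincode, where the same validation would run inside an endorsed transaction.

```python
class ConsumableToken:
    """Fungible token standing in for one class of consumable resource
    (e.g. a raw material) shared between industrial IoT components."""

    def __init__(self, resource_name, total_supply, issuer):
        self.resource_name = resource_name
        self.balances = {issuer: total_supply}

    def transfer(self, sender, recipient, amount):
        # A smart contract validates the allocation before committing it,
        # which is what provides trust between otherwise independent twins.
        if amount <= 0:
            raise ValueError("transfer amount must be positive")
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient consumable balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount
```

Non-fungible consumables (e.g. a specific tool) would instead carry a unique identifier per token, which is the distinction the paper's classification turns on.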
SmartExchange: Decentralised Trustless Cryptocurrency Exchange

Trading cryptocurrency on current digital exchange platforms is a trust-based process, where the parties involved in the exchange have to fully trust the service provider. As has been proven several times, this can lead to funds being stolen, either by malicious service providers that simply disappear or through hacks that these platforms might suffer. In this work, we propose and develop a decentralised exchange solution based on smart contracts running on the Ethereum network that is open, verifiable, and does not require trust. The platform enables two parties to trade different currencies, limited to Ethereum and Bitcoin in the current state of the system. A smart contract, deployed on the Ethereum blockchain, functions as an escrow, which holds a user's funds until a verified transaction has been made by the other party. To make the smart contract able to detect a Bitcoin transfer, we implement our solution utilising an oracle. We define the system architecture and implement a working platform, which we test in a model scenario, successfully exchanging Bitcoin and Ether on the blockchain test networks. We conclude the paper by identifying possible challenges and threats to such a system.

Filip Adamik, Sokol Kosta
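
The escrow flow described above can be sketched as a small state machine: one party deposits Ether, an oracle attests the counterparty's Bitcoin payment, and only then are the funds released. This Python sketch uses illustrative names (`EscrowContract`, the state strings) and is a simplified model of the idea, not the SmartExchange contract itself.

```python
class EscrowContract:
    """Sketch of a trustless exchange escrow: holds one party's Ether
    until an oracle confirms the counterparty's Bitcoin payment."""

    def __init__(self, seller, buyer, amount_eth):
        self.seller, self.buyer = seller, buyer
        self.amount_eth = amount_eth
        self.state = "AWAITING_DEPOSIT"

    def deposit(self, sender, amount):
        # The Ether seller must fund the escrow in full.
        if sender != self.seller or amount != self.amount_eth:
            raise ValueError("deposit must come from the seller in full")
        self.state = "AWAITING_BTC_PAYMENT"

    def confirm_btc_payment(self, oracle_verified):
        # In the paper's design an oracle reports whether the Bitcoin
        # transaction was observed, since Ethereum contracts cannot
        # read the Bitcoin blockchain directly.
        if self.state != "AWAITING_BTC_PAYMENT":
            raise RuntimeError("no deposit is held in escrow")
        if not oracle_verified:
            raise RuntimeError("oracle has not verified the payment")
        self.state = "RELEASED"
        return (self.buyer, self.amount_eth)  # payout to the buyer
```

The trust assumption thus shifts from the exchange operator to the oracle, which is exactly the kind of residual threat the paper's concluding discussion addresses.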
A Public, Blockchain-Based Distributed Smart-Contract Platform Enabling Mobile Lite Wallets Using a Proof-of-Stake Consensus Algorithm

Blockchain-enabled smart contracts that employ proof-of-stake validation for transactions, promise significant performance advantages compared to proof-of-work solutions. For broad industry adoption, other important requirements must be met in addition. For example, stable backwards-compatible smart-contract systems must automate cross-organizational information-logistics orchestration with lite mobile wallets that support the unspent transaction output (UTXO) protocol and simple payment verification (SPV) techniques. The currently leading smart-contract solution Ethereum, uses computationally expensive proof-of-work validation, is expected to hard-fork multiple times in the future and requires downloading the entire blockchain. Consequently, Ethereum smart contracts have limited utility for large industry applications. This paper fills the gap in the state of the art by presenting the Qtum smart-contract framework that allows for managing transaction headers in lite mobile wallets in addition with using a proof-of-stake (PoS) consensus algorithm.

Alex Norta, Patrick Dai, Neil Mahi, Jordan Earls
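
The SPV technique mentioned above is what lets a lite wallet check that a transaction is in a block while holding only block headers: it verifies a Merkle branch against the header's Merkle root. The sketch below shows that check in Python with Bitcoin-style double SHA-256; it is a generic illustration of SPV, not Qtum's implementation, and the odd-node duplication rule follows Bitcoin's convention.

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Double SHA-256, as used for Bitcoin-style transaction hashing."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaves):
    """Merkle root over transaction hashes; an odd trailing node is
    paired with itself, as in Bitcoin."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def verify_inclusion(tx_hash, proof, root):
    """SPV check: fold the sibling hashes of a Merkle branch into the
    transaction hash and compare against the block header's root.
    `proof` is a list of (sibling_hash, sibling_is_left) pairs."""
    h = tx_hash
    for sibling, is_left in proof:
        h = sha256d(sibling + h) if is_left else sha256d(h + sibling)
    return h == root
```

A wallet holding N transactions' worth of headers thus verifies a payment with a proof of only log2(N) hashes, which is why SPV makes mobile lite wallets feasible.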
Risk Engineering and Blockchain: Anticipating and Mitigating Risks

Complex systems require an integrated approach to risks. In this paper, we describe risk engineering, a methodology to incorporate risks at the planning and design stage for complex systems, and introduce some of its components. We examine, at a high level, how risk engineering can help improve the risk picture for blockchain technologies and their applications and outline challenges and benefits of this approach.

Michael Huth, Claire Vishik, Riccardo Masucci

IDEA Workshop

Frontmatter
IT Infrastructure Capability and Health Information Exchange: The Moderating Role of Electronic Medical Records’ Reach

This research investigates the hypothesized relationship between a hospital's IT infrastructure capability and the degree to which the hospital can exchange health information. Enhanced information exchange within and between hospitals is currently considered critical for modern hospital operations in the big data era. Building on the resource-based view of the firm, we position the deployment and usage of IT capabilities as a unique, valuable, appropriable and difficult-to-imitate resource for hospitals; following this view, the study argues that IT is a strategic source of value for hospitals. Guided by our research model, we test two related hypotheses using partial least squares (PLS)-based structural equation modeling (SEM) on a large-scale cross-sectional dataset of 1155 European hospitals. Results show that IT infrastructure capability is a crucial antecedent of health information exchange. Finally, we found that the degree to which hospitals deploy hospital-wide systems that electronically maintain and share health data and information, i.e., Electronic Medical Records, influences the strength of this particular relationship. These findings suggest that although IT investments in hospitals continue to grow, IT plans and strategies to enable health information exchange will require ongoing attention. Hence, our research provides valuable insights into how IT can be targeted and exploited to support capabilities in clinical practice. Specifically, we demonstrate the conditions under which hospitals can leverage their IT resources to enhance levels of patient and health information exchange within the hospital and beyond its boundaries.

Rogier van de Wetering
The IT Department as a Service Broker: A Qualitative Research

The accelerated increase in new IT suppliers and services is allowing organizations to easily access specialized outsourced services. Consequently, IT departments are developing fewer IT services themselves and relying more on external providers to satisfy the needs of their customers. This new context requires a change in the IT function, which should move from its traditional role of service builder and operator to a new role of service integrator and broker. However, in most cases IT managers do not know which capabilities and which IT expert roles and skills are required in the IT area to implement this new role. To address this gap, we propose a management model built in two steps: (i) a review of the literature that helps us to identify and analyze current contributions in the IT service brokering area, and (ii) a qualitative study that includes a focus group and a survey of IT professionals to validate the findings made in the literature review. The presented model aims to establish the basis for a complete approach to help IT organizations adopt the IT service broker role.

Linda Rodriguez, Oscar Avila
Foster Strategic Orientation in the Digital Age
A Methodic Approach for Guiding SME to a Digital Transformation

The goal of this paper is to demonstrate the need for a structured framework and instrument to support SMEs in adjusting their organizational strategy to environmental changes in the digital age. Since SMEs in particular face many challenges in transforming their business, a structured and easy-to-use approach that takes into account the challenges and opportunities of an SME, such as limited resources or high flexibility, offers high potential for success. A framework-based method named the Transformation Compass extends previous instruments by including all relevant business aspects of an enterprise, such as marketing, processes, change management and innovation, and by allowing a "thinking outside the box" approach in which the impacts of the digital age are seen as digital enablers. Combining the different aspects of an enterprise based on basic principles of business management, the instrument is divided into four main building blocks: Customer Centricity, Business Models, Operational Excellence and Organizational Excellence. Hence, the Transformation Compass serves as a starting point for strategic discussion and possible reorientation of the company.

Manuela Graf, Marco Peter, Stella Gatziu-Grivas
TEA - A Technology Evaluation and Adoption Influence Framework for Small and Medium Sized Enterprises

Emerging technologies compel small and medium-sized enterprises (SMEs) to advance their digital transformation. However, a conclusive and applicable overview of influencing factors for the evaluation and adoption of new technologies, on a sensitizing level, is nonexistent. Previous work has focused on adoption frameworks at an implementation level, disregarding the interconnectedness with evaluation and an appropriate application for SMEs. To empower SMEs to develop a transformation strategy considering these influencing factors, the Technology Evaluation and Adoption Influence (TEA) Framework has been designed. It covers nine influencing factors operating from the external and internal company environment. To determine the factors, 56 insurance brokers in Switzerland, South Africa and Turkey were interviewed and existing frameworks were analyzed. The design process went through three iterations involving experts for verification and testing. In a field test with an expert user, the framework proved its conclusiveness, applicability and significance for SMEs.

Dominic Spalinger, Stella Gatziu Grivas, Andre de la Harpe

iDEATE Workshop

Frontmatter
Prescriptive Analytics: A Survey of Approaches and Methods

Data analytics has gathered a lot of attention during the last years. Although descriptive and predictive analytics have become well-established areas, prescriptive analytics has only started to emerge, at an increasing rate. In this paper, we present a literature review on prescriptive analytics, frame the prescriptive analytics lifecycle and identify the existing research challenges on this topic. To the best of our knowledge, this is the first literature review on prescriptive analytics. Until now, prescriptive analytics applications have usually been developed in an ad-hoc way, with limited capabilities of adaptation to the dynamic and complex nature of today's enterprises. Moreover, they are only loosely integrated with predictive analytics, which prevents exploiting the full potential of big data.

Katerina Lepenioti, Alexandros Bousdekis, Dimitris Apostolou, Gregoris Mentzas
Challenges from Data-Driven Predictive Maintenance in Brownfield Industrial Settings

In recent years many companies have made substantial investments in the digitization of production and started collecting a lot of data. However, the big question is how to make sense of all these data and create competitive advantage. In this regard, maintenance is an ever-urgent topic and seems to be a low-hanging fruit for realizing benefits from analyzing the large amounts of sensor data now available. This is, however, very challenging in typical industrial environments, where we find a mixture of old and new production infrastructure, called a brownfield environment. In this work-in-progress paper we investigate this context and identify challenges for the introduction of big data approaches for predictive maintenance. For this purpose, we conducted a case study with a world-renowned electronic components company. We found that making sense of sensor data and finding the right level of detail for the analysis is very challenging. We developed a feedback app to incorporate the employees' domain knowledge in the sense-making process.

Georgios Koutroulis, Stefan Thalmann
Big Data is Power: Business Value from a Process Oriented Analytics Capability

Big data analytics (BDA) has the potential to provide firms with competitive benefits. Despite this massive potential, the conditions and required complementary resources and capabilities through which firms can gain business value are by no means clear. Firms cannot ignore the influx of data, mostly unstructured, and will need to invest in BDA increasingly. In doing so, they will have to address, e.g., new specialist competencies and privacy and regulatory issues, as well as other structural and cost considerations. Past research contributions argued for the development of idiosyncratic and difficult-to-imitate firm capabilities. This study builds upon resource synchronization theories and examines the process of obtaining business value from BDA. We use data from 27 case studies across different types of industries. Through coding analyses of interview transcripts, we identify the contingent resources that drive, moderate and condition the value of a BDA capability throughout different phases of adoption. Our results contribute to a better understanding of the importance of BDA resources and of the process and working mechanisms through which to leverage them toward business value. We conclude that our synthesized configurational model for BDA capabilities is a useful basis for future research.

Rogier van de Wetering, Patrick Mikalef, John Krogstie

SciBOWater Workshop

Frontmatter
A Multiple-Layer Clustering Method for Real-Time Decision Support in a Water Distribution System

Machine learning provides a foundation for a new paradigm where the facilities of computing extend to the level of cognitive abilities in the form of decision support systems. In the area of water distribution systems, there is an increased demand for data processing capabilities as smart meters are installed, providing large amounts of data. In this paper, a multiple-layer data processing method is defined for prioritizing pipe replacements in a water distribution system. The identified patterns provide relevant information for calculating the associated priorities as part of a real-time decision support system. A modular architecture provides insights at different levels and can be extended to form a network of networks. The proposed clustering method is compared to a single clustering of aggregated data in terms of overall accuracy.

Alexandru Predescu, Cătălin Negru, Mariana Mocanu, Ciprian Lupu, Antonio Candelieri
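
The layered idea in the abstract above, clustering within each module first and then clustering the resulting centroids at a higher level, can be sketched with plain k-means. This is a generic illustration under assumed names (`kmeans`, `two_layer`, the grouping into "metered zones"), not the paper's actual method or features.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(points):
    """Component-wise mean of a non-empty list of vectors."""
    n = len(points)
    return tuple(sum(c) / n for c in zip(*points))

def kmeans(points, k, iters=25, seed=0):
    """Plain k-means over tuples of floats; returns (centroids, clusters)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: dist2(p, centroids[i]))
            clusters[idx].append(p)
        centroids = [mean(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

def two_layer(groups, k1, k2):
    """First layer: cluster each group (e.g. one metered zone) on its own.
    Second layer: cluster the first-layer centroids for a network-level
    view, mirroring the modular 'network of networks' architecture."""
    first_layer = [kmeans(g, k1)[0] for g in groups]
    merged = [c for centroids in first_layer for c in centroids]
    return kmeans(merged, min(k2, len(merged)))[0]
```

The comparison the paper reports, layered clustering versus a single clustering of the aggregated data, corresponds here to `two_layer(groups, ...)` versus `kmeans(sum(groups, []), ...)`.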
Automated Updating of Land Cover Maps Used in Hydrological Modelling

Urbanization and rapid population growth are common developments in many catchments and flood plains, often leading to increased flood risks. In hydrological models of regions with urban sprawl, the parameters representing changes in land cover need to be updated frequently to obtain better estimations of the discharge (runoff). This article presents an automated method for incorporating updated model parameters (SCS curve numbers) derived from new land cover maps and using them in a hydrological model. The presented work is developed for one part of the Kifisios catchment in Greece, which is a pilot area of the Horizon 2020 SCENT research project ( https://scent-project.eu/ ), focused on producing updated land use/land cover maps using crowdsourced data provided by citizens, combined with data from remote sensing and drone images. The method uses a newly available land cover map, which, together with other data, is automatically geo-processed in ArcGIS to provide updated SCS curve numbers. This is achieved using the Python programming language and the ArcPy libraries of ArcGIS. The updating of the pre-developed HEC-HMS hydrological model with new SCS curve numbers is implemented in MATLAB, through a specialized API for changing inputs to the HEC-HMS model. The whole process is executed via a GUI developed in MATLAB, which also allows running HEC-HMS automatically and presenting the updated results as discharge hydrographs.

Muhammad Haris Ali, Thaine H. Assumpção, Ioana Popescu, Andreja Jonoski
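
The SCS curve number update at the heart of the pipeline above reduces, per subcatchment, to the standard area-weighted composite: CN = Σ(Aᵢ · CNᵢ) / ΣAᵢ over the land-cover classes. The sketch below shows that computation in plain Python; the function name and the lookup values are illustrative (in practice curve numbers also depend on the hydrologic soil group), and the paper's geo-processing in ArcPy produces the per-class areas.

```python
def composite_curve_number(areas_by_landcover, cn_lookup):
    """Area-weighted SCS curve number for a subcatchment.

    areas_by_landcover: dict mapping land-cover class -> area (e.g. km^2),
        as produced by geo-processing a new land cover map.
    cn_lookup: dict mapping land-cover class -> SCS curve number.
    """
    total_area = sum(areas_by_landcover.values())
    weighted = sum(area * cn_lookup[land_cover]
                   for land_cover, area in areas_by_landcover.items())
    return weighted / total_area
```

When an updated map shows urban area replacing forest, the composite CN rises, and feeding the new value into the HEC-HMS model yields higher simulated runoff peaks, which is the effect the automated updating is meant to capture.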
High-Performance Computing Applied in Project UBEST

UBEST aims at improving the global understanding of the present and future biogeochemical buffering capacity of estuaries through the development of Observatories: computational web portals that integrate field observations and real-time MPI (Message Passing Interface) numerical simulations. HPC (High-Performance Computing) is applied in the Observatories both to serve on-the-fly frontend user requests for multiple spatial analyses and to speed up the backend's forecast hydrodynamic and ecological simulations based on unstructured grids. Backend simulations are performed using the open-source SCHISM (Semi-implicit Cross-scale Hydroscience Integrated System Model). The Python programming language is used in this project to automate the MPI simulations, and the web portal is built in Django.

Ricardo Martins, João Rogeiro, Marta Rodrigues, André B. Fortunato, Anabela Oliveira, Alberto Azevedo
A Comparative Study on Decision Support Approaches Under Uncertainty

This paper presents a comparative study between two approaches for decision support, applied to the reference problem of energy retrofit and indoor environmental quality in buildings. The first approach is based on multi-criteria analysis methodology and the second on Bayes' mathematical theory. The aim is to examine ways of constructing decision support systems that can produce credible decisions under uncertainty. For this purpose, criteria relevant to the given problem are examined, while attempting to choose those which need less measured or audited information to perform the calculations.

Panagiotis Christias
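
The Bayesian route to decisions under uncertainty mentioned above follows a standard two-step pattern: update beliefs over hypotheses with Bayes' rule, then pick the action with the highest expected utility under the posterior. The sketch below illustrates that pattern; the hypothesis names, likelihoods and payoffs are invented for illustration and are not taken from the paper.

```python
def posterior(prior, likelihood):
    """Bayes' rule over a discrete hypothesis space.

    prior: dict hypothesis -> P(h)
    likelihood: dict hypothesis -> P(observed evidence | h)
    """
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnormalized.values())  # normalizing constant P(evidence)
    return {h: p / z for h, p in unnormalized.items()}

def best_action(post, utilities):
    """Choose the action maximizing expected utility under the posterior.

    utilities: dict action -> dict hypothesis -> payoff
    """
    def expected_utility(action):
        return sum(post[h] * utilities[action][h] for h in post)
    return max(utilities, key=expected_utility)
```

The appeal for the paper's setting is that a single noisy audit measurement can shift the posterior, and hence the retrofit recommendation, without requiring the full set of measured criteria a multi-criteria method might demand.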
Plastic Grabber: Underwater Autonomous Vehicle Simulation for Plastic Objects Retrieval Using Genetic Programming

We propose a path planning solution using genetic programming for an autonomous underwater vehicle. Developed in the ROS simulator, the vehicle is able to roam an environment, identify a plastic object, such as a bottle, grab it and retrieve it to the home base. This involves the use of a multi-objective fitness function as well as reinforcement learning, both required for the genetic programming to assess the model's behaviour. The fitness function includes not only the objective of grabbing the object but also the efficient use of stored energy. Sensors used by the robot include a depth-image camera, claw and range sensors, all simulated in ROS.

Gabrielė Kasparavičiūtė, Stig Anton Nielsen, Dhruv Boruah, Peter Nordin, Alexandru Dancu
Adaptation of Irrigation Systems to Current Climate Changes

Irrigation is a hydro-ameliorative measure which involves controlled water management, in addition to natural water, to ensure and increase crop yield and harvest quality. The idea of artificially wetting agricultural crops to guarantee good produce has existed since antiquity. Civilizations in arid areas of the globe had to adapt to climate conditions to ensure their existence by developing irrigation systems that gave them greater control over farming practices. Over time, due to the increase in greenhouse gas emissions, several climatic changes have taken place: temperatures have risen, precipitation patterns have changed, glaciers have melted, and sea and ocean levels have increased. To ensure its existence, the contemporary population needs irrigation systems adapted to the current environmental conditions. To this end, Beia and the Polytechnic University of Bucharest have developed a decision support system for an irrigation system that considers parameters such as air and soil humidity and temperature, plant evapotranspiration, precipitation intensity, wind direction and speed, and relative pressure, to ensure the efficient use of water and energy resources in agriculture. The decision support system aims to produce a start-up command for the irrigation pumps for a certain amount of time, based on the information received from the transducers. This command is passed to the farmer in the form of an irrigation report to support his decision, but the decision to use the proposed arrangement remains exclusively with the farmer.

George Suciu, Teodora Ușurelu, Cristina M. Bălăceanu, Muneeb Anwar
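
A decision rule of the kind the abstract above describes, turning transducer readings into a pump start-up command with a duration, can be sketched as a few threshold checks. All thresholds, names and the proportional-duration rule below are illustrative assumptions, not the parameters of the Beia/UPB system, and as in the paper the output is only a recommendation for the farmer.

```python
def irrigation_command(soil_moisture_pct, rain_forecast_mm, wind_speed_ms,
                       moisture_target_pct=30.0, max_wind_ms=8.0):
    """Recommend whether to start the pumps and for how long.

    Skips irrigation when forecast rain will cover the deficit or when
    wind would scatter the water; otherwise pumps proportionally to the
    soil-moisture deficit (here: 3 minutes per percentage point).
    """
    if rain_forecast_mm > 5.0:       # expected rain covers the deficit
        return {"start_pump": False, "minutes": 0}
    if wind_speed_ms > max_wind_ms:  # too windy: water would drift
        return {"start_pump": False, "minutes": 0}
    deficit = moisture_target_pct - soil_moisture_pct
    if deficit <= 0:                 # soil is already wet enough
        return {"start_pump": False, "minutes": 0}
    return {"start_pump": True, "minutes": round(3 * deficit)}
```

In the deployed system such a rule would also weigh evapotranspiration, temperature and pressure, but the structure, sensor inputs in, timed pump command out, stays the same.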

QOD Workshop

Frontmatter
ADEQUATe: A Community-Driven Approach to Improve Open Data Quality

This paper introduces the ADEQUATe project, a platform to improve the quality of open data in a community-driven fashion. First, the context of the project is discussed: the issue of open data quality, its relevance in Austria, and how ADEQUATe attempts to tackle these matters. Then the main components of the project are introduced, outlining how they support its goals: Portal Watch, managing the monitoring, quality assessment and enhancement of data; the ADEQUATe Knowledge Base, providing the backbone to the search and semantic enrichment components; the faceted Search functionality; Dataset profiles, presenting an enriched overview of individual datasets to users; ADEQUATe's GitLab instance, providing the community dimension to the portal; and Odalic, a tool for semantic interpretation of tabular data. The paper concludes with an outlook on the benefits of the project: easier data discovery, increased insight into data evolution, community engagement leading to contributions from a wider part of the population, and increased transparency and democratization, as well as positive feedback loops with data maintainers, public administration and the private sector.

Lőrinc Thurnay, Thomas J. Lampoltshammer, Sebastian Neumaier, Tomáš Knap
Situation-Dependent Data Quality Analysis for Geospatial Data Using Semantic Technologies

In this paper we present a new way to evaluate geospatial data quality using Semantic technologies. In contrast to non-semantic approaches to evaluating data quality, Semantic technologies allow us to model situations in which geospatial data may be used and to apply customized geospatial data quality models using reasoning algorithms on a broad scale. We explain how to model data quality using common vocabularies of ontologies in various contexts, apply data quality results using reasoning in a real-world application case with OpenStreetMap as our data source, and highlight our findings on the example of disaster management planning for rescue forces. We contribute to the Semantic Web community and the OpenStreetMap community by proposing a semantic framework to combine use-case-dependent data quality assignments, which can be used as reasoning rules and as data quality assurance tools for both communities respectively.

Timo Homburg, Frank Boochs
Indicating Studies’ Quality Based on Open Data in Digital Libraries

Researchers publish papers to report their research results and thus contribute to a steadily growing corpus of knowledge. To avoid unintentionally repeating research and studies, researchers need to be aware of the existing corpus. For this purpose, they crawl digital libraries and conduct systematic literature reviews to summarize existing knowledge. However, such approaches face several issues: not all documents are available to every researcher, results may not be found due to ranking algorithms, and it requires time and effort to manually assess the quality of a document. In this paper, we provide an overview of the publicly available information of different digital libraries in computer science. Based on these results, we derive a taxonomy to describe the connections between this information and discuss its suitability for quality assessments. Overall, we observe that bibliographic data and simple citation counts are available in almost all libraries, with some of them providing rather unique information. Some of this information may be used to improve automated quality assessment, but with limitations.

Yusra Shakeel, Jacob Krüger, Gunter Saake, Thomas Leich
Syntactical Heuristics for the Open Data Quality Assessment and Their Applications

Open Government Data are valuable initiatives in favour of transparency, accountability, and openness. The expectation is to increase participation by engaging citizens, non-profit organisations, and companies in reusing Open Data (OD). A potential barrier to the exploitation of OD and the engagement of the target audience is the low quality of available datasets [3, 14, 16]. Non-technical consumers are often unaware that data could have quality issues, taking for granted that datasets can be used immediately without any further manipulation. In reality, in order to reuse data, for instance to create visualisations, they need to perform data cleaning, which requires time, resources, and proper skills. This reduces the chance of involving citizens. This paper tackles the quality barrier of raw tabular datasets (i.e. CSV), a popular format (three stars in Tim Berners-Lee's rating scheme) for governmental Open Data. The objective is to increase awareness and provide support in data cleaning operations both to PAs, to produce better-quality Open Data, and to non-technical data consumers, to reuse datasets. DataChecker is an open-source, modular JavaScript library shared with the community and available on GitHub; it takes a tabular dataset as input and generates a machine-readable report based on data type inferencing (a data profiling technique). Based on this report, the Social Platform for Open Data (SPOD) provides data cleaning suggestions to both PAs and end users.

Donato Pirozzi, Vittorio Scarano
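
Data type inferencing, the profiling technique the abstract above attributes to DataChecker, amounts to testing which type every non-empty value in a column satisfies. The sketch below shows the idea in Python rather than the library's actual JavaScript API; function names and the returned report shape are assumptions for illustration.

```python
def infer_type(values):
    """Infer a column's type from its string values: every non-empty
    value must parse as an integer (then a float) for the column to be
    typed as such; otherwise it falls back to 'string'."""
    def parses_as(cast, value):
        try:
            cast(value)
            return True
        except ValueError:
            return False

    non_empty = [v for v in values if v.strip() != ""]
    if not non_empty:
        return "empty"
    if all(parses_as(int, v) for v in non_empty):
        return "integer"
    if all(parses_as(float, v) for v in non_empty):
        return "float"
    return "string"

def profile_csv(rows, header):
    """Machine-readable per-column type report for a parsed CSV table."""
    return {column: infer_type([row[i] for row in rows])
            for i, column in enumerate(header)}
```

A cleaning suggestion then follows directly from the report: a column inferred as "string" where most values parse as numbers likely contains a few malformed cells worth flagging to the data publisher.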
Access Control and Quality Attributes of Open Data: Applications and Techniques

Open Datasets provide one of the most popular ways to acquire insight and information about individuals, organizations and multiple streams of knowledge. Exploring Open Datasets by applying comprehensive and rigorous techniques for data processing can provide the ground for innovation and value for everyone if the data are handled in a legal and controlled way. In our study, we propose an argumentation and abductive reasoning approach for data processing which is based on the data quality background. Explicitly, we draw on the literature of data management and quality for the attributes of the data, and we extend this background through the development of our techniques. Our aim is to provide herein a brief overview of the data quality aspects, as well as indicative applications and examples of our approach. Our overall objective is to bring serious intent and propose a structured way for access control and processing of open data with a focus on the data quality aspects.

Erisa Karafili, Konstantina Spanaki, Emil C. Lupu

Doctoral Consortium

Frontmatter
Measures for Quality Assessment of Articles and Infoboxes in Multilingual Wikipedia

One of the most popular collaborative knowledge bases on the Internet is Wikipedia. Articles of this free encyclopaedia are created and edited by users from different countries in about 300 languages. Depending on the topic and language version, the quality of information there may vary. This study presents and classifies measures that can be extracted from Wikipedia articles for the purpose of automatic quality assessment in different languages. Based on a state-of-the-art analysis and our own experiments, specific measures for various aspects of quality have been defined. Additionally, measures for assessing the quality of data contained in the structural parts of Wikipedia articles, the infoboxes, are defined. This study also describes extraction methods for the various sources of measures that can be used in quality assessment.

Włodzimierz Lewoniewski
Supply Chain Modelling Using Data Science

The paper describes research results in the field of supply chain modelling. The supply chain topology model, which will be the base for further analysis, is modelled as a network where nodes represent entities and the business processes between them are represented as edges. A convenience store network with a franchising business model was chosen for the model evaluation. The analysis uncovers conflicting goals between the franchisees and the franchise holder that must be taken into account. To conduct the research, data mining techniques, especially process mining (process design, process improvement, process analysis), will be used. A first insight into the information needs is described.

Szczepan Górtowski
Behavioral Biometrics in Mobile Banking and Payment Applications

This paper presents an overview of the possible use of behavioral biometrics methods in mobile banking and payment applications. As mobile applications have become more common, more and more users conduct payments using their smartphones. While requiring secure services, customers often do not lock their devices and expose them to potential misuse and theft. Banks and financial institutions apply multiple anti-fraud and authentication systems, but to ensure the required usability they must develop new ways to authenticate their users and authorize transactions. An answer to this problem comes with a family of behavioral biometric methods which can be utilized to secure those applications without hindering usability. The goal of this paper is to describe potential areas in which behavioral biometrics can be used to make mobile payments more secure, increase usability and prevent fraud.

Piotr Kałużny
Modelling of Risk and Reliability of Maritime Transport Services

Maritime transport nowadays plays an important role in the global economy. In 2017 around 80% of trade was carried by sea; therefore there is a need for constant monitoring of transport processes from the point of view of their reliability and punctuality. In order to provide up-to-date information about reliability and punctuality, a lot of data from different maritime sources needs to be collected and analysed. The paper presents concepts of two methods that, based on an analysis of large amounts of maritime data, provide information that may be used to support various entities from the maritime domain in decision-making. The first method concerns a short-term assessment of the reliability of a maritime transport service, while the second dynamically predicts the punctuality of a ship. The presented methods are part of a PhD research project. The aim of the article is to provide an overview of this research, starting from the motivation, objectives and thesis, through a presentation of the methods, up to a description of the main results of the methods' evaluation.

Milena Stróżyna
Mixed Methods Approach as Requirements Analysis of a Method for Process Harmonization in Design Science Research

Process harmonization (PH) in the post-merger integration phase is essential for successful mergers & acquisitions (M&A). Especially in the service sector, where know-how is bundled in processes, PH is an essential success factor. In order to execute PH systematically and holistically, a method for PH is desirable. The objective of the research project is the development of a corresponding artefact within the framework of the Design Science Research approach. To identify requirements for the artefact, a Mixed Methods (MiMe) approach consisting of expert interviews and a questionnaire survey was used for the requirements analysis, the results of which were already published in individual contributions. This article aims to critically evaluate the applied MiMe research and thus broaden its application in IS research. The result shows that the obtained meta-inferences hold high validity and that the requirements gained from the MiMe research are accordingly an essential prerequisite for the development of an artefact for PH.

Irene Schönreiter
Predicting Customer Churn in Electronic Banking

The following paper is an outline of the author's current research on churn prediction in electronic banking. The research is based on real anonymised data of 4 million clients from one of the biggest Polish banks. Access to real data at such a scale is a substantial strength of the study, as many researchers use only a small data sample from a short period. Even though the current research is still preliminary and ongoing, unlimited access to these data provides a great environment for further work. The study is strongly connected to real business goals and trends in the banking industry, as the author is also a practitioner. The described research focuses on methods for predicting customers who are likely to leave electronic banking. It contributes in particular to a further classification of electronic churn and a broader definition of customer churn in general. The recommended solutions should contribute to an increase in the number of digital customers in the bank.

Marcin Szmydt
Assessing Process Suitability for AI-Based Automation. Research Idea and Design

Recent advancements in Big Data and Machine Learning (ML) have triggered the progressive adoption of Artificial Intelligence (AI) in enterprise domains to address growing business process complexity. What is still largely missing in traditional Business Process Management (BPM) approaches are formal frameworks and guidelines for decision-making support when applying AI in a company. The proposed research aims to extend existing BPM frameworks and guidelines with novel methods, thereby increasing the understanding of business processes in view of recent technology developments in Big Data, AI and ML.

Aleksandra Revina
Backmatter
Metadata
Title
Business Information Systems Workshops
Editors
Witold Abramowicz
Dr. Adrian Paschke
Copyright Year
2019
Electronic ISBN
978-3-030-04849-5
Print ISBN
978-3-030-04848-8
DOI
https://doi.org/10.1007/978-3-030-04849-5