
2019 | Book

Business Information Systems Workshops

BIS 2019 International Workshops, Seville, Spain, June 26–28, 2019, Revised Papers


About this book

This book constitutes revised papers from the nine workshops and one accompanying event which took place at the 22nd International Conference on Business Information Systems, BIS 2019, held in Seville, Spain, in June 2019. There were a total of 139 submissions to all workshops, of which 57 papers were accepted for publication.

The workshops included in this volume are:
AKTB 2019: 11th Workshop on Applications of Knowledge-Based Technologies in Business
BITA 2019: 10th Workshop on Business and IT Alignment
BSCT 2019: Second Workshop on Blockchain and Smart Contract Technologies
DigEX 2019: First International Workshop on Transforming the Digital Customer Experience
iCRM 2019: 4th International Workshop on Intelligent Data Analysis in Integrated Social CRM
iDEATE 2019: 4th Workshop on Big Data and Business Analytics Ecosystems
ISMAD 2019: Workshop on Information Systems and Applications in Maritime Domain
QOD 2019: Second Workshop on Quality of Open Data
SciBOWater 2019: Second Workshop on Scientific Challenges and Business Opportunities in Water Management

Table of Contents

Frontmatter

AKTB Workshop

Frontmatter
A Practical Grafting Model Based Explainable AI for Predicting Corporate Financial Distress

Machine learning and deep learning are both part of artificial intelligence and have a great impact on marketing and consumers around the world. However, the deep learning algorithms developed from the neural network are normally regarded as a black box because their network structure and weights cannot be interpreted by a human user. In general, customers in the banking industry have the right to know why their applications have been rejected by decisions made by black-box algorithms. In this paper, a practical grafting method was proposed to combine global and local models into a hybrid model for explainable AI. Two decision tree-based models were used as the global models because their highly explainable structure could work as a skeleton or blueprint for the hybrid model. Another two models, a deep neural network and a k-nearest neighbor model, were employed as the local models to improve accuracy and interpretability respectively. A financial distress prediction system was implemented to evaluate the performance of the hybrid model and the effectiveness of the proposed grafting method. The experimental results suggested that the hybrid model based on terminal-node grafting might increase accuracy and interpretability, depending on the chosen local models.

Tsung-Nan Chou
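A minimal sketch of the terminal-node grafting idea described above: an explainable decision tree serves as the global "skeleton", and a local k-NN model is grafted onto each of its leaves. The dataset, depth, and neighbor counts are illustrative assumptions, not values from the paper.

```python
# Hypothetical grafting sketch: global decision tree + per-leaf local k-NN models.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

global_model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
leaf_ids = global_model.apply(X)  # terminal node reached by each training sample

# Graft one local model per leaf, trained only on that leaf's samples.
local_models = {}
for leaf in np.unique(leaf_ids):
    mask = leaf_ids == leaf
    k = min(5, mask.sum())
    local_models[leaf] = KNeighborsClassifier(n_neighbors=k).fit(X[mask], y[mask])

def predict(samples):
    """Route each sample through the tree, then defer to that leaf's local model."""
    leaves = global_model.apply(samples)
    return np.array([local_models[l].predict(s.reshape(1, -1))[0]
                     for l, s in zip(leaves, samples)])

print(predict(X[:5]), y[:5])
```

The tree path to a leaf remains human-readable, while the grafted local model refines predictions inside each leaf region.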
Data Analytics in the Electronic Games

This paper aims at the use of data analytics methods in mobile games. The main goal was to predict future purchases of players in the selected mobile game. The result presents information about whether the player is going to buy any of the offered bonus packages or not. This information is crucial for marketing and possible ways of monetization. From the perspective of data analytics, the goal is the creation of a classification model in line with the CRISP-DM methodology. We used the following algorithms in the modeling phase: Random forest, Naive Bayes, Linear regression, XGBoost, and Gradient Boosting. All generated models were evaluated by contingency tables, which presented model accuracy as the ratio of successfully predicted values to all predicted samples. The results are plausible and have the potential to be deployed into practice as a baseline model or support for personalized marketing activities.

Tomáš Porvazník, František Babič, Ľudmila Pusztová
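A sketch of the evaluation step described above: several of the named classifiers are compared, with accuracy computed from the contingency (confusion) table as the ratio of correct predictions to all predictions. The data here is synthetic, standing in for the game's purchase records.

```python
# Compare classifiers; derive accuracy from the confusion matrix.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

models = {
    "Random forest": RandomForestClassifier(random_state=1),
    "Naive Bayes": GaussianNB(),
    "Gradient Boosting": GradientBoostingClassifier(random_state=1),
}
for name, model in models.items():
    cm = confusion_matrix(y_te, model.fit(X_tr, y_tr).predict(X_te))
    accuracy = np.trace(cm) / cm.sum()  # correct predictions / all predictions
    print(f"{name}: {accuracy:.3f}")
```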
Evaluating the Interdependent Effect for Likert Scale Items

Likert scale items are used for surveys exploring attitudes by collecting responses to particular questions or groups of related statements. The common practice is asking respondents to express their level of agreement by applying a seven- or five-point scale from ‘strongly disagree’ to ‘strongly agree’. Although the Likert scale methodology serves as a powerful tool for asking attitudinal questions and getting measurable answers from the respondents, the surveys fail to identify the level of importance of the individual questions used for characterizing the explored phenomena. Moreover, the Likert scale methodology does not make it possible to distinguish which of the questions are the causes, and which of them are the effects, of the explored problem. The objective of the research is to propose an original method for evaluating the cause-effect relationship and the strength of interdependences among Likert scale items. The classical DEMATEL technique is modified for analysis of the interdependence among factors in order to overcome the subjective origin of the expert evaluations generally applied for its implementation. The Spearman Rank Order Correlations of Likert scale items were explored as a consistent replacement of the subjective group direct-influence matrix. The modified influential relation map built for Likert scale items revealed scope for improvement by defining causal relationships and the significance of the questions of the survey, and added value for long-term strategic decision making. The viability of the proposed model is illustrated by a case study of service quality survey data collected at a rehabilitation hospital in Poland.

Dalia Kriksciuniene, Virgilijus Sakalauskas, Roman Lewandowski
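A sketch of the modification described above: the subjective direct-influence matrix of classical DEMATEL is replaced by Spearman rank correlations between Likert items, after which the standard DEMATEL steps yield cause/effect indicators. The survey data here is synthetic; real responses would be Likert-coded columns.

```python
# Spearman-based DEMATEL sketch: correlation matrix as direct-influence matrix.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(200, 6))  # 200 respondents, 6 Likert items

rho, _ = spearmanr(responses)           # item-by-item rank correlation matrix
A = np.abs(rho)
np.fill_diagonal(A, 0.0)                # no self-influence

# Classical DEMATEL: normalize, then total-relation matrix T = D (I - D)^-1.
D = A / A.sum(axis=1).max()
T = D @ np.linalg.inv(np.eye(len(D)) - D)

R, C = T.sum(axis=1), T.sum(axis=0)
for i, (prominence, relation) in enumerate(zip(R + C, R - C)):
    role = "cause" if relation > 0 else "effect"
    print(f"item {i}: prominence={prominence:.2f}, relation={relation:+.2f} ({role})")
```

Items with positive relation (R − C) act as causes in the influential relation map; prominence (R + C) indicates their overall importance.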
Knowledge-Based UML Use Case Model Transformation Algorithm

Transforming and generating models is a meaningful process in Model Driven Engineering (MDE). Theoretical and practical research on MDE has progressed remarkably in recent years in managing the increasing complexity of information systems (IS) during their development and support processes, by raising the level of abstraction and using different kinds of models as information storage – as knowledge storage of the problem domain. As models expand in use for developing systems, the possible transformations among models grow in importance. The main scope of the article is to present a transformation algorithm for generating a Unified Modelling Language (UML) Use Case model from an Enterprise Model (EM). The transformation algorithm is presented in detail and depicted step by step. The presented generation process steps are illustrated by a particular UML Use Case example following the transformation algorithm step by step.

Ilona Veitaite, Audrius Lopata
Design of a Social-Based Recommendation Mechanism for Peer-to-Peer Insurance

Peer-to-peer insurance platforms are prospering due to the development of financial technology, but it is difficult to find a suitable group of co-insurers without considering risk on current online platforms. To improve the advantage of peer-to-peer insurance, in this research we design a social-based co-insurer recommendation mechanism that analyzes users’ inclination, posts, background, similarity, and relationships to put the peer-risk-sharing idea into practice.

Jyh-Hwa Liou, Ting-Kai Hwang, Sai-Nan Wu, Yung-Ming Li
Mining Personal Service Processes
Towards a Conceptualization for the Time Perspective

The process of digital transformation opens more and more domains to data-driven analysis. This also applies to Process Mining of service processes. This work investigates the use of Process Mining in the domain of Personal Services with a special focus on the Time Perspective and the early stages of the mining process. Based on a literature analysis as well as expert and focus group interviews with practitioners in family care, it is shown that a shift in the approaches and concepts of Process Mining used is required in order to meet the requirements of the domain. Furthermore, a conceptualization for describing the Time Perspective in Process Mining is suggested.

Birger Lantow, Tom Baudis, Fabienne Lambusch
Company Investment Recommendation Based on Data Mining Techniques

There are about seventy thousand companies listed on various stock markets worldwide and there is public information on about three hundred thousand companies on Wikipedia, but that is only a small fraction of all companies. Hidden among the millions of others are the future technological innovators, market disruptors, and best possible investments. So, if an investor has an example of the kind of company they are interested in, how can they successfully find other such investment options without sifting through millions of candidates? We propose a non-personalized recommendation approach for alternative company investments. This method is based on data mining techniques for investment behaviour modelling. The investment opportunities are discovered using the idea of transfer learning over indirectly associated company investments. This allows companies to diversify their investment portfolios. Experiments are run over a dataset of 7.5 million companies, of which the model focuses on startups and investments in the last 3 years. This allows us to investigate the most recent investment trends. The recommendation model identifies top-N investment opportunities. The evaluation of the proposed investment strategies shows high accuracy of the recommendation system.

Svetla Boytcheva, Andrey Tagarev

BITA Workshop

Frontmatter
An Exploration of Enterprise Architecture Research in Hospitals

In the age of digitalization, enterprises such as hospitals have to be able to react quickly to new treatment patterns. Enterprise architecture management, as a holistic concept covering an enterprise’s IT infrastructure, addresses such challenges, as the aim of the procedure is to foster business-IT alignment. Considering that analyzing IT infrastructures, and respectively their maturity, in hospitals is very complex for managers and stakeholders, an overview of the research conducted within this area is necessary. Therefore, this research provides a systematic literature review to present an integral view of relevant publications regarding enterprise architecture management in hospitals.

Johannes Wichmann, Matthias Wißotzki
In Search for a Viable Smart Product Model

Smart products integrate physical and digital materialities, taking advantage of sensors, mobile technologies, and advanced data processing capabilities. This type of system is a top priority for managers, offering the capacity to sense, communicate, adapt, and anticipate the needs of business stakeholders. We use the lens of the Viable System Model (VSM) theory to align business strategies and smart products. The proposed model was tested in a real case of information systems development for safety in construction. The findings emerge from design science research that is part of a larger project to introduce smart technologies in the construction industry. A viable product model (VPM) represents the necessary and sufficient conditions for smart product cohesion and endurance in different environments, aligned to the business needs. For theory, we present a product-level adoption of VSM and propose guidelines for business-smart product alignment. For practice, the results can assist managers in creating new smart products that adhere to their strategy and are capable of dealing with unexpected events.

João Barata, Paulo Rupino da Cunha
Strategic IT Alignment and Business Performance in SMEs: An Empirical Investigation

New competitive challenges have forced Small-Medium Enterprises (SMEs) to re-examine their internal environment in order to improve competitive advantage. IT investments can improve firm performance in a way that it would be in “alignment” with business strategy. The purpose of this paper is to analyze the contemporary impact of IT and business strategy on Information Systems (IS) planning success, incorporating all these constructs into a model that is tested using Regression Analysis. Data were collected from IS executives in Greek SMEs.

Fotis Kitsios, Maria Kamariotou
Enterprise Computing: A Case Study on Current Practices in SAP Operations

IT administrators are fundamental to effective IT operations but are challenged by outsourcing and technological as well as organizational change. This paper investigates how four companies in their operations of SAP-focused enterprise systems organize and carry out tasks. A literature-based frame of inquiry is used to gather information from four interviewees working in SAP administration and from two business representatives. This study sheds light on how current state-of-the-art operations concepts are applied in the day-to-day business of operating grown proprietary and hybrid (cloud) application system landscapes.

Johannes Hintsch, Klaus Turowski
Integration of Enterprise Modeling and Ontology Engineering as Support for Business/IT-Alignment

To create viable digital products from a business and technical perspective, stakeholders from different enterprise functions should be involved. The focus of the paper is on the use of modelling approaches, in particular ontologies and enterprise models, for specifying the requirements for new services or products. There are hardly any reports of ontology usage in business and IT-alignment efforts, although ontologies can be used to capture the shared understanding of important concepts between business and IT stakeholders. The paper aims to bridge the gap between representations adequate for business stakeholders and ontology representations. The integration of a (business-oriented) enterprise modeling method with an (IT-oriented) ontology construction method is proposed by developing a method component. The main contributions of the paper are (1) a method component integrating enterprise modeling and ontology construction procedures, (2) an application case motivating the method component and showing its usage, and (3) first experiences in using the method component.

Kurt Sandkuhl, Holger Lehmann, Tom Sturm
Towards Aligning IT and Daily Routines of Older Adults

Different IT solutions exist for supporting older adults. However, the ways in which these solutions are positioned usually concern only specific activities or specific problems. In this paper we analyze the current research state regarding activities of older adults, the contexts of their activities, and supportive IT solutions to create a background for further investigations regarding alignment of IT solutions and daily routines of older adults. The paper presents illustrative mapping between older adult activities and IT solution categories and derives therefrom some aspects that are to be taken into account for successful alignment between IT solutions and daily routines of older adults.

Marite Kirikova, Ella Kolkowska, Piotr Soja, Ewa Soja, Agneta Muceniece
Organizational Challenges of Digitalization Initiatives in Tourism Network Management Organizations

The tourism industry, consisting mainly of small and medium-sized enterprises (SMEs), is strongly influenced by digitalization, which confronts the actors with challenges. As network management organizations facilitate cross-cutting themes across their network, digitalization could be assumed to be one of them. In a case study approach, this paper investigates the status and role of the Destination Management Organization (DMO) in digitalization initiatives, modes of stakeholder participation, and related organizational challenges. The findings point to a tension between the heterogeneity of the sector and the required homogeneity of the touristic experience of the visitor. While the DMO’s leadership role for digitalization is expected, it has not been fully embraced yet. The case study identifies a set of internal and external challenges impeding its implementation beyond single digital projects. The paper suggests collaborative portfolio governance for improved participation, flexibility, and transparency. Limited by the single case study data, further research is recommended with additional case studies, quantitative data, and incorporation of findings from other network actors. Researching the operability and effects of the collaborative portfolio governance approach is subject to future research.

Susanne Marx
A Configurational Approach to Task-Technology Fit in the Healthcare Sector

In spite of strong investments in digital technologies in the healthcare and medical services domain over the past couple of decades, one of the most pressing issues is that in many cases the technologies that are adopted to support the everyday tasks of professionals are often not used as intended, or even not used at all. A growing number of studies have also noted negative impacts in many circumstances when professionals incorporate such technologies into their work tasks. This poses a major concern, as investments in supporting technologies often hinder the efforts of professionals rather than enabling them. Following a task-technology fit approach, we build on a sample of 445 health and medical service professionals working in Norway. This study explores the configurations of elements that lead to positive and negative impacts when using digital technologies to support work. To derive results, we utilize fuzzy set qualitative comparative analysis (fsQCA) to show that there are several different configurations of tasks, technologies, and use practices that can either help produce positive impacts or create negative ones.

Patrick Mikalef, Hans Yngvar Torvatn
Ontology-Based Fragmented Company Knowledge Integration: Multi-aspect Ontology Building

The early steps of ongoing digital transformation in companies are often driven by local business needs and result in the usage of various information systems aimed at assisting in solving particular tasks. However, this leads to a dead end: further transformation is not possible without integration, which, in turn, requires interoperability support. The solution that seems simplest is to use one complex system. However, the various legacy systems used in the different departments of companies have accumulated large volumes of corporate information and knowledge that are not easy to transfer into another system. Besides, having specialists spend time learning different systems instead of those they are used to is not a good idea either. Though the problem of technical interoperability is solved through the usage of commonly accepted standards, semantic interoperability is still an issue. In a previous paper, research results related to the selection of the most appropriate solution for building a common information model enabling seamless knowledge exchange while preserving existing information models were presented. This paper makes a step further by building a multi-aspect ontology taking into account the differences between the terminologies used in various information systems.

Nikolay Shilov, Nikolay Teslya

BSCT Workshop

Frontmatter
Comparing Market Phase Features for Cryptocurrency and Benchmark Stock Index Using HMM and HSMM Filtering

A desirable aspect of financial time series analysis is that of successfully detecting (in real time) market phases. In this paper we apply hidden Markov models (HMMs) and hidden semi-Markov models (HSMMs) with normal state-dependent distributions to Bitcoin/USD price dynamics, and compare this with S&P 500 price dynamics, the latter being a benchmark of traditional stock market behaviour which most literature resorts to. Furthermore, we test our models’ adequacy at detecting bullish and bearish regimes by devising mock investment strategies on our models and assessing how profitable they are on unseen data in comparison to a buy-and-hold approach. We ultimately show that while our modelling approach yields positive results for both Bitcoin/USD and the S&P 500, and both are best modelled by four-state HSMMs, Bitcoin/USD so far shows different regime volatility and persistence patterns from those we are used to seeing in traditional stock markets.

David Suda, Luke Spiteri
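A minimal sketch of the HMM side of this analysis, assuming the hmmlearn library: a four-state Gaussian HMM is fitted to log-returns and each regime's mean and volatility are inspected to label bull/bear phases. The prices are synthetic, and the paper's preferred HSMM variant would additionally need explicit state-duration modelling, which hmmlearn does not provide.

```python
# Regime detection sketch: four-state Gaussian HMM on mock log-returns.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(42)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.02, 1000)))  # mock Bitcoin/USD
log_returns = np.diff(np.log(prices)).reshape(-1, 1)

# Four hidden states with normal state-dependent distributions, as in the paper.
model = GaussianHMM(n_components=4, covariance_type="full", n_iter=200,
                    random_state=42).fit(log_returns)
states = model.predict(log_returns)

# Inspect each regime's mean return and volatility to label bull/bear phases.
for s in range(4):
    mu, sigma = model.means_[s, 0], np.sqrt(model.covars_[s, 0, 0])
    print(f"state {s}: mean={mu:+.4f}, vol={sigma:.4f}, "
          f"share={np.mean(states == s):.2f}")
```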
Contagion in Bitcoin Networks

We construct the Google matrices of bitcoin transactions for all year quarters during the period from January 11, 2009 until April 10, 2013. In the last quarters the network contains about 6 million users (nodes) with about 150 million transactions. From PageRank and CheiRank probabilities, analogous to trade import and export, we determine the dimensionless trade balance of each user and model contagion propagation on the network, assuming that a user goes bankrupt if its balance exceeds a certain dimensionless threshold $\kappa$. We find that a phase transition takes place for $\kappa < \kappa_c \approx 0.1$, with almost all users going bankrupt. For $\kappa > 0.55$ almost all users remain safe. We find that even at a distance from the critical threshold $\kappa_c$ the top PageRank and CheiRank users, like a house of cards, rapidly drop into bankruptcy. We attribute this effect to strong interconnections between these top users, which we determine with the reduced Google matrix algorithm. This algorithm allows us to efficiently establish the direct and indirect interactions between top PageRank users. We argue that this study models contagion on real financial networks.

Célestin Coquidé, José Lages, Dima L. Shepelyansky
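A sketch of the ranking machinery the abstract builds on: CheiRank is computed here as PageRank on the reversed graph, and the dimensionless balance and bankruptcy rule below are one plausible reading of the abstract, not the paper's exact definitions. The transaction graph is a random stand-in.

```python
# PageRank/CheiRank balance sketch on a mock directed transaction graph.
import networkx as nx

G = nx.gnp_random_graph(500, 0.02, directed=True, seed=7)

pr = nx.pagerank(G, alpha=0.85)               # ~ "imports" of each user
chei = nx.pagerank(G.reverse(), alpha=0.85)   # CheiRank ~ "exports"

kappa = 0.3  # dimensionless bankruptcy threshold (illustrative)
bankrupt = [u for u in G
            if (pr[u] - chei[u]) / (pr[u] + chei[u]) > kappa]
print(f"initially bankrupt users at kappa={kappa}: {len(bankrupt)}")
# The paper then propagates bankruptcies through the network (contagion),
# re-evaluating balances as bankrupt users stop honouring transactions.
```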
Towards Blockchain and Semantic Web

Blockchain has become a pervasive technology in a wide number of sectors like industry, research, and academia. In the last decade a large number of domain-tailored problems have been solved thanks to the blockchain. For this reason, researchers have expressed their interest in combining the blockchain with other well-known technologies, like the Semantic Web. Unfortunately, as far as we know, no one in the literature has presented the different scenarios in which the Semantic Web and blockchain can be combined, and the resulting benefits for both. In this paper, we aim at providing an in-depth view of the beneficial symbiotic relation that these technologies may reach together and report the different scenarios that we have identified in the literature for combining the Semantic Web and blockchain.

Juan Cano-Benito, Andrea Cimmino, Raúl García-Castro
Detecting Brute-Force Attacks on Cryptocurrency Wallets

Blockchain is a distributed ledger, which is protected against malicious modifications by means of cryptographic tools, e.g. digital signatures and hash functions. One of the most prominent applications of blockchains is cryptocurrencies, such as Bitcoin. In this work, we consider a particular attack on wallets for collecting assets in a cryptocurrency network, based on brute-force search. Using Bitcoin as an example, we demonstrate that if the attack is implemented successfully, a legitimate user is able to prove the fact of the attack with high probability. We also consider two options for modifying existing cryptocurrency protocols to deal with this type of attack. First, we discuss a modification that requires introducing changes in the Bitcoin protocol and allows diminishing the motivation to attack wallets. Second, an alternative option is the construction of special smart contracts, which reward users for providing evidence of a brute-force attack. The execution of this smart contract can work as an automatic alarm that the employed cryptographic mechanisms, and (particularly) hash functions, have an evident vulnerability.

E. O. Kiktenko, M. A. Kudinov, A. K. Fedorov
Analyzing Transaction Fees with Probabilistic Logic Programming

Fees are used in Bitcoin to prioritize transactions. Transactions with high associated fees are usually included in a block faster than those with lower fees. Users would like to pay just the minimum amount to get the transaction confirmed in the desired time. Fees are collected as a reward when transactions are included in a block, so, from the other perspective, miners usually process the most profitable transactions first, i.e. the ones with higher fee rates. Bitcoin is a dynamic system influenced by several variables, such as transaction arrival time and block discovery time, making the prediction of the confirmation time a hard task. In this paper we use probabilistic logic programming to model how fees influence the confirmation time and how much fees affect miners’ revenue.

Damiano Azzolini, Fabrizio Riguzzi, Evelina Lamma
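A toy simulation of the dynamic described above, not the paper's probabilistic logic program: miners fill each block with the highest-fee-rate transactions first, so confirmation time depends on the offered fee rate. Capacities, arrival rates, and fee distribution are all illustrative assumptions.

```python
# Mempool simulation: greedy fee-rate prioritization vs. confirmation time.
import numpy as np

rng = np.random.default_rng(0)
BLOCK_CAPACITY, N_BLOCKS, ARRIVALS_PER_BLOCK = 50, 200, 60  # demand > capacity

mempool, confirmed = [], []                       # entries: (fee_rate, arrival)
for block in range(N_BLOCKS):
    for _ in range(ARRIVALS_PER_BLOCK):
        mempool.append((rng.exponential(10.0), block))
    mempool.sort(key=lambda tx: -tx[0])           # most profitable first
    mined, mempool = mempool[:BLOCK_CAPACITY], mempool[BLOCK_CAPACITY:]
    confirmed += [(fee, block + 1 - arrival) for fee, arrival in mined]

fees = np.array([f for f, _ in confirmed])
waits = np.array([w for _, w in confirmed])
for lo, hi in [(0, 5), (5, 15), (15, np.inf)]:
    m = (fees >= lo) & (fees < hi)
    print(f"fee rate [{lo},{hi}): mean confirmation = {waits[m].mean():.1f} blocks")
```

Because demand exceeds block capacity, low-fee transactions accumulate in the mempool and show markedly longer mean confirmation times.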
An On-Chain Method for Automatic Entitlement Management Using Blockchain Smart Contracts

Managing the entitlements for the compliant use of digital assets is a complex and labour-intensive task. As a consequence, implemented processes tend to be slow and inconsistent. Automated approaches have been proposed, including systems using distributed ledger technology (blockchains), but to date these require additional off-chain sub-systems to function. In this paper, we present the first approach to entitlement management that is entirely on-chain, i.e. the functionality for matching the digitally encoded rights of content owners (expressed in ODRL) and the request for use by a customer are checked for compliance in a smart contract. We describe the matching algorithm and our experimental implementation for the Ethereum platform.

Timothy Nugent, Fabio Petroni, Benedict Whittam Smith, Jochen L. Leidner
Study of Factors Related to Grin Cryptocurrency Mining Efficiency with GPUs

Grin cryptocurrency is one of the most recent implementations of Mimblewimble – a specific stripped-down blockchain design serving as the basis for strong privacy and good scalability features, which complement each other. Considering Grin’s recent inception and ongoing development, there is a lack of research providing guidelines for optimizing mining efficiency in Grin’s network. In this study we aimed to experimentally test the influence of some factors on GPU mining efficiency which should be taken into account. We found that choosing the right combination of mining software and GPU-specific parameters is among them. In general, NVIDIA and AMD GPUs were comparably effective. The results may be relevant for creators of alternative cryptocurrencies for which GPU mining is part of the design.

Paulius Danielius, Tomas Savenas, Saulius Masteika
Towards Blockchain-Based E-Voting Systems

Electronic voting is one of the most challenging cryptographic problems, since the developed system should guarantee strong and sometimes contrasting security properties. Blockchain technology can help by providing for free some important guarantees, such as the immutability and transparency of the votes, using a distributed ledger. In this paper we propose a blockchain-based e-voting system which is lightweight, since it does not rely on strong cryptographic primitives, and efficient, since it improves over previous proposals in terms of both execution time and the associated cost for the required infrastructure. We provide the description of a proof-of-concept system together with the cost and performance analysis.

Chiara Braghin, Stelvio Cimato, Simone Raimondi Cominesi, Ernesto Damiani, Lara Mauri
Internet of Things and Blockchain Integration: Use Cases and Implementation Challenges

Research on blockchain (BC) and the Internet of Things (IoT) shows that they can be more powerful when combined or integrated. However, the technologies are still emerging and face many challenges. The paper focuses on the integration of the Internet of Things with blockchain technology. We reviewed these technologies and identified some use cases of their combination and the key issues hindering their integration. These issues are scalability, interoperability, inefficiencies, security, governance, and regulation. While these issues are inherent in the current generations of blockchain, such as Bitcoin and Ethereum, with a well-designed architecture the majority of them can be solved in future generations. This work is inspired by the rapid growth in the number of connected devices, the volume of data produced by these devices, and the need for security, efficient storage, and processing.

Kelechi G. Eze, Cajetan M. Akujuobi, Matthew N. O. Sadiku, Mohamed Chouikha, Shumon Alam
Wikipedia as an Information Source on Cryptocurrency Technology

The paper is an initial study that aims to analyze relations between Wikipedia, as an unbiased source of information, and cryptocurrency technologies. The purpose of the research is to explore how diversified decentralized cash systems are presented and characterized in the largest open-source knowledge base. Additionally, the interactions between information demand in different language versions are elaborated. A model is proposed that allows one to assess the adoption of cryptocurrencies in a given country on the basis of the mentioned knowledge. The results can be used not only for the analysis of the popularity of blockchain technologies in different local communities, but can also show which country has the biggest demand for a particular cryptocurrency, such as Bitcoin, Ethereum, Ripple, Bitcoin Cash, Monero, Litecoin, Dogecoin, and others.

Piotr Stolarski, Włodzimierz Lewoniewski

DigEX Workshop

Frontmatter
Towards Analyzing High Street Customer Trajectories - A Data-Driven Case Study

Nowadays, many high streets face the problem of declining attractiveness. City management and high street retailers hardly know their customers. Therefore, they can scarcely react to the customers’ needs and wishes to make the customer experience more attractive. We aim at studying how customers behave in high streets using a data-driven approach by recording and evaluating customer trajectories with modern positioning technology. For doing so, we carry out a pilot test in order to check whether our approach is suitable for recognizing customer behavioral patterns. The result obtained by a cluster analysis reveals five clusters in the analyzed customer trajectories.

C. Ingo Berendes
How Are Negative Customer Experiences Formed? A Qualitative Study of Customers’ Online Shopping Journeys

This study investigates how negative customer experiences are formed during customers’ online shopping journeys. A qualitative, in-depth dataset collected from 34 participants was employed to identify negatively perceived touchpoints that contribute to the customer experience in a negative way. The findings reveal that negative touchpoints are experienced during customers’ entire journeys, particularly after a purchase is completed. We identified 152 negative touchpoints from the data, of which 53 were experienced during search and consideration, 35 when finalizing a purchase, 33 during delivery, and 31 during after-sales interactions with the company. Within these four main categories, 20 subthemes describing the touchpoints and the formation of customers’ negative experiences were identified. The findings highlight the importance of understanding holistic customer experience formation, including the before- and after-purchase phases of the online shopping journey. In practice, the findings can be utilized in online service design and improvement.

Tiina Kemppainen, Lauri Frank
A Model to Assess Customer Alignment Through Customer Experience Concepts

Business and Information Technology alignment has been one of the main concerns for IT and business executives due to its importance to overall company performance. In the Business and IT alignment area, there is a lack of research and approaches for measuring organizations’ alignment with external customers. However, this alignment is highly relevant today, as customers have become more demanding and digitally connected and have increased their negotiation power. To fill this gap, this paper presents the design and application of a maturity model for customer alignment measurement. The originality of our approach is that the model embraces digital transformation concepts aimed at measuring the experience level that the organization offers to customers throughout the customer lifecycle.

Leonardo Muñoz, Oscar Avila
Understanding Users’ Preferences for Privacy and Security Features – A Conjoint Analysis of Cloud Storage Services

Digital transformation has produced different applications and services for personal use. In an interconnected world, privacy and security concerns have become main adoption barriers for new technologies. IT companies face an urgent need to address users’ concerns when delivering convenient designs. Applying conjoint analysis (CA) from consumer research, we explore users’ preferences and willingness to pay for privacy-preserving features in personal cloud storage. Our contributions are two-fold: For research, we demonstrate the use of CA in understanding privacy tradeoffs for the design of personal ICTs. For practice, our findings can inform service designers about preferred privacy and security options for such services.

Dana Naous, Christine Legner
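A minimal sketch of the conjoint-analysis idea: respondents rate service profiles that combine attribute levels, and part-worth utilities are estimated by ordinary least squares on dummy-coded levels. The attributes, ratings, and "true" preference weights below are invented for illustration.

```python
# Conjoint analysis sketch: estimate part-worths via OLS on dummy-coded profiles.
import numpy as np

levels = {"encryption": ["none", "client-side"],
          "storage_location": ["EU", "US"],
          "price": ["free", "5 EUR/month"]}

rng = np.random.default_rng(3)
profiles = [(e, s, p) for e in (0, 1) for s in (0, 1) for p in (0, 1)]
X = np.array([[1, e, s, p] for e, s, p in profiles])       # intercept + dummies
true_w = np.array([5.0, 2.0, 1.0, -1.5])                   # hidden preferences
ratings = X @ true_w + rng.normal(0, 0.3, len(X))          # observed ratings

part_worths, *_ = np.linalg.lstsq(X, ratings, rcond=None)
for name, w in zip(["baseline"] + list(levels), part_worths):
    print(f"{name}: {w:+.2f}")
```

Ratios of part-worths to the price coefficient then give a rough willingness-to-pay estimate per feature.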
The Role of Location Dependent Services for the Success of Local Shopping Platforms

Competitors and customers put Local Owner-Operated Retail Outlets (LOOROs) under digitalization pressure. Local Shopping Platforms (LSPs) seem to offer a promising approach to overcoming the manifold e-commerce adoption barriers for LOOROs. However, the business model of LSPs focuses on services that support the online Point of Sale (PoS). Therefore, it is crucial for LOOROs to know whether customers will accept and use LSPs and what the main drivers are. This paper presents a survey of customers of a medium-sized town in Germany investigating these factors. Results show that location-dependent services are crucial for the success of LSPs.

Lars Bollweg, Richard Lackes, Markus Siepermann, Peter Weber

iCRM Workshop

Frontmatter
Social CRM Services in Digital Marketing Agencies: A Preliminary Study on Service Offerings in Germany

Outsourcing of Digital Marketing services is a commonly adopted strategy in the market. The increasing importance of the Social CRM topic and its data analysis features sheds light on specific changes that might occur in how Digital Marketing Agencies offer their services, as well as in outsourcing practices. This work analyzes the Social CRM services provided by these agencies and validates them against the German market. It focuses on a preliminary phase within the research process, which aims at discovering outsourcing practices within German SMEs. The results support the adaptation and validation of Social CRM services for German Digital Marketing Agencies. Additionally, the work provides an overview of the types of services German agencies provide, as well as the practical implications for these companies.

Julio Viana, Maarten van der Zandt, Olaf Reinhold, Rainer Alt
Social Network Advertising Classification Based on Content Categories

Social media usage is expanding in different sectors of society. As a consequence, a large amount of User-Generated Content is produced every day. Due to its different effects on users, content management is essential for business advertising on these platforms. However, in view of social media’s large volume of content, measuring the effect on users entails high costs and effort. This paper examines the use of machine learning techniques to reduce the cost and effort of this kind of analysis. To this end, automatic document classification is employed, and its viability is checked by testing it on companies’ publications on Facebook. The results show that the machine learning classifier obtained has excellent potential to measure effectiveness and analyze a significant amount of content more efficiently. The classifier has practical implications, since it allows an extensive competitor analysis to be conducted and is also able to influence social media campaigns.

Gustavo Nogueira de Sousa, Gustavo R. Almeida, Fábio Lobato
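A sketch of the automatic document classification described above: posts are vectorized with TF-IDF and classified into content categories. The category labels, example posts, and the choice of a linear classifier are assumptions for illustration.

```python
# Content-category classification sketch: TF-IDF features + linear classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = ["50% off this weekend only, shop now",
         "Meet our new team member and her story",
         "Flash sale: buy one get one free",
         "Behind the scenes at our headquarters"]
labels = ["promotion", "branding", "promotion", "branding"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(posts, labels)
print(classifier.predict(["Huge discount on all items today"]))  # -> promotion
```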

iDEATE Workshop

Frontmatter
Developing an Artificial Intelligence Capability: A Theoretical Framework for Business Value

Despite the claim that Artificial Intelligence (AI) can revolutionize the way private and public organizations do business, to date organizations still face a number of obstacles in leveraging such technologies and realizing performance gains. Past studies in other novel information technologies argue that organizations must develop a capability of effectively orchestrating and deploying necessary complementary resources. We contend that if organizations aim to realize any substantial performance gains from their AI investments, they must develop and promote an AI Capability. This paper theoretically develops the concept of an AI capability and presents the main dimensions that comprise it. To do so, we ground this concept in the resource-based view of the firm and by surveying the latest literature on AI, we identify the constituent components that jointly comprise it.

Patrick Mikalef, Siw Olsen Fjørtoft, Hans Yngvar Torvatn
Measuring Qualitative Performance Criteria with Fuzzy Sets

In this work-in-progress paper, the use of a fuzzy set controller is explored as a measurement instrument for qualitative performance criteria in a context in which organizations have a strategic partnership. Organizations struggle to get a grip on qualitative performance criteria, such as trust, information transparency, etc., to monitor the health of their relationships. First steps, i.e. the development of specialized tooling and actual field experiments, will be discussed, and some challenges and future directions will be presented.

Harry Martin
SmartM: A Non-intrusive Load Monitoring Platform

Real-time energy consumption monitoring is becoming increasingly important in smart energy management as it provides the opportunity for novel applications through data analytics, including anomaly detection, energy leakage, and theft. This paper presents a smart non-intrusive load monitoring approach for residential households, collecting fine-grained energy consumption data and disaggregating the data of appliances. The paper describes the implementation of the monitoring system, the data set, load disaggregation, and the challenges for future work.

Xiufeng Liu, Simon Bolwig, Per Sieverts Nielsen
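A toy sketch of the event-based side of load disaggregation: step changes in the aggregate power signal are matched to the nearest known appliance signature. The signatures and readings are invented, and real NILM pipelines are considerably more involved.

```python
# Event-based disaggregation sketch: match power steps to appliance signatures.
import numpy as np

signatures = {"fridge": 120.0, "kettle": 2000.0, "tv": 150.0}  # watts (assumed)

aggregate = np.array([300, 300, 2300, 2300, 2300, 300, 300, 450, 450, 300])
steps = np.diff(aggregate)

for t, delta in enumerate(steps, start=1):
    if abs(delta) < 50:          # ignore noise below 50 W
        continue
    name = min(signatures, key=lambda a: abs(signatures[a] - abs(delta)))
    event = "on" if delta > 0 else "off"
    print(f"t={t}: {name} switched {event} (step of {delta:+d} W)")
```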
Towards a Digitized Understanding of the Skilled Crafts Domain

Skilled workers are over-proportionally exposed to physical stress and hazards, which means that their work is often characterized by high physical demands. In this paper we present a proof of concept which uses wearable sensors to monitor the movement of workers in order to automatically identify working gestures and poses that result in high physical stress. Based on a rating system, real-time alerting on unhealthy positions is initiated. Apart from solutions concerning individual persons, time series data from all the workers at a site can be used to create a smart schedule optimizing the process flow and minimizing individual physical stress. Altogether, we use the interrelations between everyday behavior and health problems to approach one of the greatest common goals of both employers and employees – the goal of “staying healthy”.

Maximilian Derouet, Deepak Nagaraj, Erik Schake, Dirk Werth

Open Access

Competing for Amazon’s Buy Box: A Machine-Learning Approach

A key feature of the Amazon marketplace is that multiple sellers can sell the same product. In such cases, Amazon recommends one of the sellers to customers in the so-called ‘buy box’. In this study, the dynamics among sellers competing to occupy the buy box were modelled using a classification approach. Italy’s Amazon webpage was crawled for ten months, and product features were analyzed to estimate the most relevant ones Amazon could consider for a seller to occupy the buy box. Predictive models showed that the most relevant features are the ratio between consecutive product prices and the number of assessments received from customers.

Álvaro Gómez-Losada, Néstor Duch-Brown

ISMAD Workshop

Frontmatter
Spatial Query Processing on AIS Data Streams in Data Stream Management Systems

Spatio-temporal data streams from moving objects have become ubiquitous in recent years, not least in the maritime domain. The Automatic Identification System (AIS) is an important technology in the maritime domain that creates huge amounts of streaming moving object data and enables new use cases. The data streams can improve situation awareness, help Vessel Traffic Services (VTSs) get an overview of certain situations, and detect upcoming critical situations automatically. For these use cases, queries have to be processed on data streams with continuous results and little delay. To reach this goal, Data Stream Management Systems (DSMSs) lay a foundation for processing data streams, but lack the capabilities for spatio-temporal query processing. We tackle this research gap with techniques known from moving object databases and integrate those into the stream processing. We present a system that integrates the moving object algebra from moving object databases into the interval approach from data stream processing to run queries on AIS data. This new approach allows us to define very diverse spatio-temporal queries on AIS data streams, such as radius queries, k-nearest neighbors (kNN) queries, as well as queries with moving polygons. Additionally, the approach allows us to use short-time prediction to detect situations before they occur, e.g., to avoid collisions. Our results show that the system is very flexible, offers clear semantics, and produces results on AIS streams with many vessels with low latency.

Tobias Brandt, Marco Grawunder
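A minimal sketch of one of the query types named above, a continuous radius query over an AIS position stream: each report within a given distance of a point of interest is emitted with low latency. The haversine helper and the mock stream stand in for a full DSMS operator.

```python
# Continuous radius query sketch over a stream of (MMSI, lat, lon) AIS reports.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 positions in kilometres."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

def radius_query(stream, center, radius_km):
    """Continuous query: yield AIS reports within radius_km of center."""
    for mmsi, lat, lon in stream:
        if haversine_km(lat, lon, *center) <= radius_km:
            yield mmsi, lat, lon

ais_stream = [(211234560, 53.90, 8.10), (244567890, 54.60, 6.50),
              (219876540, 53.95, 8.20)]          # mock position reports
for hit in radius_query(ais_stream, center=(53.92, 8.15), radius_km=10.0):
    print("within radius:", hit)
```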
A Study of Vessel Trajectory Compression Based on Vector Data Compression Algorithms

With the development of information technology and its vast applications in vessel traffic, such as the popular Automatic Identification System (AIS), a large quantity of vessel trajectory data has been recorded and stored. Vessel traffic has also entered the age of big data. However, the redundancy of the data considerably reduces its usefulness for research and applications, and how to compress these data becomes a problem that needs to be solved. In this paper, several classical vector data compression algorithms are summarized, and the ideas behind each algorithm and the steps for compressing vessel trajectories are introduced. Vessel trajectory compression experiments based on these algorithms are performed. The results are analyzed, and the characteristics of each algorithm are summarized. The results and conclusions lay the foundation for the selection and improvement of algorithms for vessel trajectory compression. This study provides systematic theoretical support for the compression of vessel trajectories, which could guide practical applications.

Yuanyuan Ji, Wenhai Xu, Ansheng Deng
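A sketch of one classical vector compression algorithm of the kind the paper surveys, Douglas-Peucker, applied to a vessel trajectory given as (lat, lon) points. For simplicity the perpendicular distance is computed in planar coordinates, which is a reasonable approximation only over short distances.

```python
# Douglas-Peucker trajectory compression sketch (planar approximation).
import math

def point_segment_dist(p, a, b):
    """Perpendicular distance from p to segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def douglas_peucker(points, epsilon):
    """Keep the point farthest from the chord if it exceeds epsilon; recurse."""
    if len(points) < 3:
        return points
    dists = [point_segment_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i, dmax = max(enumerate(dists, start=1), key=lambda kv: kv[1])
    if dmax <= epsilon:
        return [points[0], points[-1]]
    return (douglas_peucker(points[:i + 1], epsilon)[:-1]
            + douglas_peucker(points[i:], epsilon))

track = [(54.00, 8.00), (54.01, 8.02), (54.02, 8.04), (54.10, 8.05), (54.20, 8.06)]
print(douglas_peucker(track, epsilon=0.01))
```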
OCULUS Sea™ Forensics: An Anomaly Detection Toolbox for Maritime Surveillance

Maritime surveillance command and control (C2) systems play a crucial role in ensuring marine traffic safety and maritime border security. Through efficient integration of various sources (UAVs, aircraft, GIS data) and legacy systems (e.g. AIS, Radar, VMS), a more complete situational awareness picture of the activities at sea can be accomplished. This enhanced knowledge can be used to improve the detection capabilities related to vessel anomaly behavior and increase the efficiency, coordination, and quality of operational activities against existing maritime threats. In this paper, we present the Forensics toolbox of the OCULUS Sea maritime surveillance C2 platform, which offers vessel anomaly behavior detection functionalities such as (i) Gap in Reporting, (ii) Speed Change, (iii) Fake MMSI, (iv) Risk Incident, and (v) Collision Notification. The performance effectiveness of the Forensics toolbox has been successfully tested under real-world scenarios, while its further enhancement is a work in progress.

Stelios C. A. Thomopoulos, Constantinos Rizogannis, Konstantinos Georgios Thanos, Konstantinos Dimitros, Konstantinos Panou, Dimitris Zacharakis
Correcting the Destination Information in Automatic Identification System Messages

The Automatic Identification System is a self-reporting system used by vessels and was introduced to enhance the operational picture on ship bridges. The Automatic Identification System destination port setting contains relevant information to anticipate a vessel’s path. In future mixed traffic situations, autonomous vessels depend on correct destination port information, specifically of human-operated ships, to prevent dangerous encounter situations. In our Automatic Identification System data recordings of the last three months of 2018, a total of 4,988 unique vessels passing the German Bight with 13,216 different destinations were found. We found that at least 52.2% of all vessel destination settings are erroneous, and only 1.3% (172) of the destination field settings entirely conformed to the IMO UN/LOCODE recommendations. Our sample data indicates that no improvement in the percentage of correct destination settings has been made. Different from earlier studies, we report and quantify all eight error categories that we found and propose an algorithm that automatically adjusts the destination field settings. Of those destination settings that two humans, independently of each other, were able to correct just by consulting a port and offshore dictionary (77.1%), the algorithm was able to correct 53.38% of the messages.

Matthias Steidel, Arne Lamm, Sebastian Feuerstack, Axel Hahn
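A sketch of one plausible correction step for free-text destination fields, not the paper's actual algorithm: normalize the entry and match it against a UN/LOCODE dictionary, falling back to fuzzy matching for misspellings. The tiny dictionary is illustrative only.

```python
# Destination-field normalization sketch against a mock UN/LOCODE dictionary.
import difflib

LOCODES = {"DEHAM": "HAMBURG", "DEBRV": "BREMERHAVEN", "NLRTM": "ROTTERDAM",
           "BEANR": "ANTWERP"}
NAME_TO_CODE = {name: code for code, name in LOCODES.items()}

def correct_destination(raw):
    """Return a UN/LOCODE for a raw AIS destination string, or None."""
    text = raw.strip().upper().replace(".", "").replace("-", " ")
    if text in LOCODES:                      # already a valid code
        return text
    if text in NAME_TO_CODE:                 # exact port name
        return NAME_TO_CODE[text]
    close = difflib.get_close_matches(text, list(NAME_TO_CODE), n=1, cutoff=0.8)
    return NAME_TO_CODE[close[0]] if close else None

for raw in ["Hamburg", "ROTERDAM", "DEHAM", "FISHING GROUNDS"]:
    print(f"{raw!r} -> {correct_destination(raw)}")
```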

QOD Workshop

Frontmatter
A New Tool for Automated Quality Control of Environmental Time Series (AutoQC4Env) in Open Web Services

We report on the development of a new software tool (AutoQC4Env) for the automated quality control (QC) of environmental time series data. Novel features of this tool include a flexible Python software architecture, which makes it easy for users to configure the sequence of tests as well as their statistical parameters, and a statistical concept for assigning each value a probability of being a valid data point. There are many occasions when it is necessary to inspect the quality of environmental data sets, from first quality checks during real-time sampling and data transmission to assessing the quality and consistency of long-term monitoring data from measurement stations. Erroneous data can have a substantial impact on the statistical data analysis and, for example, lead to wrong estimates of trends. Existing QC workflows largely rely on individual investigator knowledge and have been constructed from practical considerations, with little theoretical foundation. The statistical framework that is being developed in AutoQC4Env aims to complement traditional data quality assessments and provide environmental researchers with a tool that is easy to use but also based on current statistical knowledge.

Najmeh Kaffashzadeh, Felix Kleinert, Martin G. Schultz
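A sketch of the statistical idea described above: instead of a hard pass/fail flag, each observation receives a probability of being valid. The range test with a Gaussian fall-off outside climatological bounds is an assumed, simplified stand-in for the actual AutoQC4Env tests.

```python
# Soft range test sketch: assign each value a probability of being valid.
import numpy as np

def range_test_probability(values, lower, upper, softness):
    """P(valid) = 1 inside [lower, upper], decaying smoothly outside."""
    values = np.asarray(values, dtype=float)
    below = np.clip(lower - values, 0, None)
    above = np.clip(values - upper, 0, None)
    return np.exp(-0.5 * ((below + above) / softness) ** 2)

ozone_ppb = [22.0, 48.0, 95.0, 180.0, -4.0]    # mock hourly ozone observations
p_valid = range_test_probability(ozone_ppb, lower=0.0, upper=120.0, softness=20.0)
for v, p in zip(ozone_ppb, p_valid):
    print(f"{v:7.1f} ppb -> P(valid) = {p:.3f}")
# Probabilities from a configurable sequence of tests can then be combined,
# e.g. multiplicatively, into an overall quality score per data point.
```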
Approach to Improving the Quality of Open Data in the Universe of Small Molecules

We describe an approach to improving the quality and interoperability of open data related to small molecules, such as metabolites, drugs, natural products, food additives, and environmental contaminants. The approach involves a computer implementation of an extended version of the IUPAC International Chemical Identifier (InChI) system that utilizes the three-dimensional structure of a compound to generate reproducible compound identifiers (standard InChI strings) and universally reproducible designators for all constituent atoms of each compound. These compound and atom identifiers enable reliable federation of information from a wide range of freely accessible databases. In addition, these designators provide a platform for the derivation and promulgation of information regarding the physical properties of these molecules. Examples of applications include compound dereplication, derivation of force fields used in the determination of three-dimensional structures and investigations of molecular interactions, and parameterization of NMR spin system matrices used in compound identification and quantification. We are developing a data definition language (DDL) and STAR-based data dictionary to support the storage and retrieval of these kinds of information in digital resources. The current database contains entries for more than 90 million unique compounds.

John L. Markley, Hesam Dashti, Jonathan R. Wedell, William M. Westler, Eldon L. Ulrich, Hamid R. Eghbalnia
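A sketch of generating standard InChI identifiers, assuming the RDKit library is available with InChI support. The paper's extended InChI with atom-level designators goes beyond this; the standard string shown here is the reproducible compound identifier it builds on.

```python
# Standard InChI generation sketch with RDKit (assumed dependency).
from rdkit import Chem

for name, smiles in [("ethanol", "CCO"),
                     ("caffeine", "CN1C=NC2=C1C(=O)N(C)C(=O)N2C")]:
    mol = Chem.MolFromSmiles(smiles)
    print(name)
    print("  InChI:   ", Chem.MolToInchi(mol))
    print("  InChIKey:", Chem.MolToInchiKey(mol))
```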
Evaluating the Quantity of Incident-Related Information in an Open Cyber Security Dataset

Data-driven security has become essential in many organisations in their attempt to tackle Cyber security incidents. Whilst the dominant approach to data-driven security remains the mining of private and internal data, there is an increasing trend towards more open data through the sharing of Cyber security information and experience over public and community platforms. However, some questions remain over the quality and quantity of such open data. In this paper, we present the results of a recent case study that considers how feasible it is to answer a common question in Cyber security incident investigations, namely “in an incident, who did what to which asset or victim, and with what result and impact”, for one such open Cyber security database.

Benjamin Aziz, John Arthur Lee, Gulsum Akkuzu
Semantic Data Integration and Quality Assurance of Thematic Maps in the German Federal Agency for Cartography and Geodesy

In this paper we present a new concept of geospatial quality assurance that is currently planned to be implemented in the German Federal Agency for Cartography and Geodesy. Linked open data is enriched with Semantic Web data in order to create thematic maps relevant to the population. We evaluate the quality of such enriched maps using a standardized process and look at the possible impacts of enriching Semantic Web data with open data sets of the Federal Agency for Cartography and Geodesy.

Timo Homburg, Sebastian Steppan, Falk Würriehausen
Technical Usability of Wikidata’s Linked Data
Evaluation of Machine Interoperability and Data Interpretability

Wikidata is an outstanding data source with potential application in many scenarios. Wikidata provides its data openly in RDF. Our study aims to evaluate the usability of Wikidata as a data source for robots operating on the web of data, according to the specifications and practices of linked data, the Semantic Web, and ontology reasoning. We evaluated it from the perspective of two use cases of data-crawling robots, guided by our general motivation to acquire richer data for Europeana, a data aggregator from the Cultural Heritage domain. The first use case regards general data consumption applications based on RDF, RDF-Schema, OWL, SKOS, and linked data. The second regards applications that explore semantics relying on Schema.org and SKOS. We conclude that a human operator must assist linked data applications in interpreting Wikidata’s RDF because of the choices made at Wikidata in the definition of its expression in RDF. The semantics of the RDF output from Wikidata is “locked in” by the usage of Wikidata’s own ontology, resulting in the need for human intervention. Wikidata is only a few steps away from high-quality machine interpretation, however. It contains extensive alignment data to RDF, RDFS, OWL, SKOS, and Schema.org, but a machine interpretation of those alignments can only be done if some essential Wikidata alignment properties are known.

Nuno Freire, Antoine Isaac

SciBOWater Workshop

Frontmatter
Telemetry System for Smart Agriculture

The use of telemetry systems in smart agriculture is an innovative approach which consists of the implementation of an information system able to provide data on irrigation parameters throughout the year, also taking into consideration other meteorological parameters. The need for a telemetry system for irrigation is emphasized by the market’s interest in having access to fully automated monitoring and automation solutions for energy-efficient and cost-effective agricultural crops. This paper presents a telemetry system for monitoring crops with an architecture improved from the point of view of very low energy consumption, low management costs, scalability, forecasting functions, and diagnosis. IoT devices are needed in the agriculture sector to monitor plant growth. The paper also presents an analysis performed with an embedded implemented system. Measured data (collected using an ADCON station) include air temperature, relative humidity, and soil temperature. These data are visualized and accessed on the IoT platform using an Internet connection. The ADCON station transmits data from the crop area where it is installed. Measurements are performed considering energy efficiency criteria and the technologies available on the market. The system’s enlargement facilities lead to an important technical impact and a high potential for marketing.

C. M. Balaceanu, I. Marcu, G. Suciu
Increasing Collaboration and Participation Through Serious Gaming for Improving the Quality of Service in Urban Water Infrastructure

The transition towards sustainable development is the current industry trend. From the society perspective, raising awareness about water-related problems is of particular interest. While there is an abundance of information available to the general public, a more interactive approach encourages participation in different aspects of government within a Smart City environment. Mobile crowdsensing has emerged as one of the most prominent paradigms for urban sensing, complementing the smart infrastructure. Serious gaming provides the spark in this interaction between the digital citizen and Artificial Intelligence. We consider the pervasive nature of serious gaming as a challenge that requires fusion between tech and non-tech industries. The proposed serious gaming platform combines crowdsensing with augmented reality to increase the active involvement of citizens in smart government. The core application was designed with a focus on rich user experience and game design elements, while the game design is defined in the context of urban water infrastructure management and decision support.

Alexandru Predescu, Mariana Mocanu
Information Technology for Ethical Use of Water

In this article, ethical considerations dealing with the proper use and the quality of water are examined based on literature resources. Principles of ethics and challenges when managing water resources are discussed. Subsequently, proposals are made on how information technology can assist assessments and decisions related to water. Additional proposals in the context of water well use relate information systems and artificial intelligence with citizens’ participation and machine learning to predict unexpected or disastrous events.

Panagiotis Christias, Mariana Mocanu

Doctoral Consortium

Frontmatter
Towards a System for Data Transparency to Support Data Subjects

Data transparency is one of the major challenges that comes with the new European General Data Protection Regulation (GDPR). In the age of digital transformation, more and more applications and processes generate data that needs to be regulated under the requirements of the GDPR. Concepts like Data Ownership or Data Sovereignty become more popular and enable independent legal decisions for the data subject. This research-in-progress aims to develop a data transparency system that assists data subjects and companies. Therefore, a research roadmap which includes the research questions and the design has been developed. First results regarding research question one are presented. A structured text analysis of the GDPR identifies articles and recitals in the categories of Data Transparency and self-control (Data Ownership and Data Sovereignty). In combination, a literature review of 1392 papers shows possible solutions.

Christian Janßen
Towards a Record Linkage Layer to Support Big Data Integration

Record linkage is a crucial step in big data integration (BDI). It is also one of its major challenges, with the increasing number of structured data sources that need to be linked and do not share common attributes. Our research-in-progress aims to develop a record linkage layer that assists data scientists in integrating a variety of data sources. A structured literature review of 68 papers reveals (1) key data sets, (2) available classification algorithms (match or no match), and (3) similarity measures to consider in BDI projects. The results highlight the foundational requirements for the development of the record linkage layer, such as processing unstructured attributes. As BDI emerges as a priority for industry, our work proposes a record linkage layer that provides similarity measures and integration algorithms while assisting in their selection. A record linkage layer can contribute to big data adoption in industry settings and improve the quality of big data integration processes to effectively support business decision-making.

Felix Kruse
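A minimal sketch of record linkage via a token-based similarity measure: company records from two sources without shared keys are compared with Jaccard similarity and classified as match or no match by a threshold. The records and the threshold are illustrative.

```python
# Record linkage sketch: Jaccard token similarity + match/no-match threshold.
def jaccard(a, b):
    """Jaccard similarity of the token sets of two strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

source_a = ["Acme Software GmbH", "Global Trade Partners Ltd"]
source_b = ["ACME Software", "Global Trading Partners", "Northwind Inc"]

THRESHOLD = 0.5
for rec_a in source_a:
    for rec_b in source_b:
        sim = jaccard(rec_a, rec_b)
        label = "match" if sim >= THRESHOLD else "no match"
        print(f"{rec_a!r} vs {rec_b!r}: {sim:.2f} ({label})")
```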
Incremental Modeling of Supply Chain to Improve Performance Measures

Supply chain modeling is one of the key tools to improve its performance measures. This research follows the principles of Design Science Research (DSR). The paper presents the concept of incremental modeling, which enables quick adaptation of the supply chain model. The method uses Data Science methods and Big Data. Evaluation of the method will be conducted on a franchise network. Hence, a new classification of performance measures, containing measures specific to the needs of the franchise network, has been developed. In particular, measures that allow assessing the level of cooperation around the conflicting goals of franchise network participants are included. The results will improve the cooperation of entities within franchise networks.

Szczepan Górtowski, Elżbieta Lewańska
Use of Data Science for Promotion Optimization in Convenience Chain

This paper describes research being conducted in the field of promotion planning and optimization for a chain of convenience stores. The motivation for choosing this subject is the important role of promotions in the retail market and the availability of large amounts of data that can be used to improve the profitability of promotions. In addition, most existing studies analyzed promotions in super- and hypermarkets, which have different sales characteristics than a convenience chain. Since the transaction amount is typically small (in comparison to transactions in bigger stores), we want to check whether findings from previous studies can be confirmed in our testing environment. In this paper, we show how both internal and external data can be used to improve the accuracy of forecasts and obtain more reliable performance metrics. The thesis and research goals are presented along with key results of the literature review.

Sławomir Mazurowski, Elżbieta Lewańska
Towards a Cross-Company Data and Model Platform for SMEs

The global volume of data is increasing. As a result, companies are increasingly concerned with using the available data and generating added value from it. The development of data products is necessary to obtain information from data and to integrate it into decision making processes. One possibility is the application of artificial intelligence. However, large companies such as Google or Facebook benefit most from this technology. SMEs in particular are falling by the wayside and are confronted with many challenges. The cross-company platform presented in this article represents an approach to enable even smaller companies to access artificial intelligence and to support data management in machine learning projects.

René Kessler
Touchscreen Behavioural Biometrics Authentication in Self-contained Mobile Applications Design

The article presents research connected with developing a mobile touchscreen behavioural biometrics solution that may be applicable for authentication and for improving the transaction security of financial services. The article aims to present the research approach and a literature review that identified research gaps and critically analyzed previous results. The goal is to suggest possible improvements over the existing methods in the literature. The motivation, methodology, and main problem statements of the aforementioned research are presented, focusing on the characteristics of behavioural biometrics methods. The main contribution of the article consists of the literature review focused on the characteristics of the approaches used, the differences in results caused by the evaluation criteria of the research processes, and their comparability. Based on it, insights are derived which can be used to build a touchscreen-based authentication method and validate the results.

Piotr Kałużny
Data-Based User’s Personality in Personalizing Smart Services

Currently, much attention and commitment are devoted to improving customer experience with services. More and more proposed solutions are data-based, human-centred services. The purpose of this article is to present the assumptions and evidence needed to create a method that will allow for customization of service functionalities (e.g. a smart home service with a virtual personal assistant) to the nature of the user (personality in the Big 5 model). The article presents the results of preliminary research concerning users’ differentiation of needs, the definition of research problems, and the idea for a research scheme. The aim of the research is to create a model that classifies users based on their personality, inferred from mobile phone data. What distinguishes the proposed solution from others is that many types of different data are used (call logs, photos, applications, data from telephone usage history, etc.), which can be beneficial for assessment accuracy. Additionally, the proposed method allows for adapting the service to the user’s needs from the beginning of usage, without the necessity of collecting data about user activity.

Izabella Krzeminska
Backmatter
Metadata
Title
Business Information Systems Workshops
Edited by
Prof. Witold Abramowicz
Rafael Corchuelo
Copyright Year
2019
Electronic ISBN
978-3-030-36691-9
Print ISBN
978-3-030-36690-2
DOI
https://doi.org/10.1007/978-3-030-36691-9