
About This Book

This book contains the refereed proceedings of the 18th International Conference on Business Information Systems, BIS 2015, held in Poznań, Poland, in June 2015. The BIS conference series follows trends in academic and business research; thus, the theme of the BIS 2015 conference was “Making Big Data Smarter.” Big data is now a fairly mature concept, recognized and widely used by professionals in both research and industry. Together, they work on developing more adequate and efficient tools for processing and analyzing data, thus turning "big data" into "smart data."

The 26 revised full papers were carefully reviewed and selected from 70 submissions. In addition, two invited papers are included in this book. They are grouped into sections on big and smart data, semantic technologies, content retrieval and filtering, business process management and mining, collaboration, enterprise architecture and business-IT alignment, specific BIS applications, and open data for BIS.



Big and Smart Data


A Parallel Approach for Decision Trees Learning from Big Data Streams

In this paper we introduce PdsCART, a parallel decision tree learning algorithm. Three characteristics make this algorithm particularly interesting. First, it can work with streaming data, i.e., one pass over the data is sufficient to construct the tree. Second, it can process a large number of data stream records in parallel and can therefore handle very large data sets efficiently. Third, it can be implemented in the MapReduce framework. Details about the algorithm and some basic performance results are presented.
Ionel Tudor Calistru, Paul Cotofrei, Kilian Stoffel
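The one-pass, parallel flavor of such an algorithm can be illustrated with a small map/reduce sketch: map tasks accumulate split statistics over disjoint partitions of the stream, a reduce step merges them, and the best split is chosen by Gini impurity. This is an illustrative simplification, not the authors' exact PdsCART formulation.

```python
from collections import Counter

def map_statistics(records, feature_idx, thresholds):
    """Map step: a single pass over one partition of the stream,
    accumulating class counts on each side of every candidate split."""
    stats = {t: (Counter(), Counter()) for t in thresholds}
    for features, label in records:
        for t in thresholds:
            side = 0 if features[feature_idx] <= t else 1
            stats[t][side][label] += 1
    return stats

def reduce_statistics(partials):
    """Reduce step: merge the partial counts produced by all map tasks."""
    merged = {}
    for stats in partials:
        for t, (left, right) in stats.items():
            m_left, m_right = merged.setdefault(t, (Counter(), Counter()))
            m_left.update(left)
            m_right.update(right)
    return merged

def gini(counter):
    n = sum(counter.values())
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in counter.values())

def best_split(merged):
    """Pick the threshold with the lowest weighted Gini impurity."""
    def weighted(t):
        left, right = merged[t]
        n_left, n_right = sum(left.values()), sum(right.values())
        return (n_left * gini(left) + n_right * gini(right)) / (n_left + n_right)
    return min(merged, key=weighted)
```

Because the merged statistics are identical to those of a sequential pass, partitions can be processed on independent workers, which is what makes the MapReduce mapping possible.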

Industry 4.0 - Potentials for Creating Smart Products: Empirical Research Results

Industry 4.0 combines the strengths of traditional industries with cutting-edge Internet technologies. It embraces a set of technologies enabling smart products integrated into intertwined digital and physical processes. Therefore, many companies face the challenge of assessing the diversity of developments and concepts summarized under the term Industry 4.0. This paper presents the results of a study on the potential of Industry 4.0. The use of current technologies such as big data or cloud computing is a driver of the individual potential of using Industry 4.0. Furthermore, mass customization as well as the use of idle data and production time improvement are strong factors influencing the potential of Industry 4.0. Business process complexity, on the other hand, has a negative influence.
Rainer Schmidt, Michael Möhring, Ralf-Christian Härting, Christopher Reichstein, Pascal Neumaier, Philip Jozinović

Evaluating New Approaches of Big Data Analytics Frameworks

The big data topic will be one of the leading growth markets in information technology in the coming years. One problem in this area is the efficient computation over huge data volumes, especially for complex algorithms in data mining and machine learning tasks. This paper discusses new processing frameworks for big and smart data in distributed environments and presents a benchmark between two frameworks, Apache Flink and Apache Spark, based on a mixed workload with algorithms from different analytic areas and different real-world datasets.
Norman Spangenberg, Martin Roth, Bogdan Franczyk
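A framework comparison of this kind ultimately reduces to timing the same mixed workload on each engine. A minimal, framework-agnostic harness might look as follows; the workload labels are placeholders for actual Flink or Spark jobs, not part of the paper's benchmark:

```python
import time
from statistics import mean, stdev

def benchmark(workloads, runs=3):
    """Time each named workload several times and report wall-clock statistics.
    `workloads` maps a label such as "flink-kmeans" to a zero-argument callable
    that submits the job and blocks until it finishes."""
    results = {}
    for name, job in workloads.items():
        timings = []
        for _ in range(runs):
            start = time.perf_counter()
            job()
            timings.append(time.perf_counter() - start)
        results[name] = {
            "mean_s": mean(timings),
            "stdev_s": stdev(timings) if runs > 1 else 0.0,
        }
    return results
```

Repeating each run and reporting the spread, as above, matters for distributed engines whose warm-up and scheduling overhead vary between runs.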

Automated Equation Formulation for Causal Loop Diagrams

The annotation of Business Dynamics models with parameters and equations, required to simulate the system under study and evaluate its simulation output, typically involves a lot of manual work. In this paper we present an approach for automated equation formulation for a given Causal Loop Diagram (CLD) and a set of associated time series with the help of neural network evolution (NEvo). NEvo enables the automated retrieval of surrogate equations for each quantity in the given CLD; hence it produces a fully annotated CLD that can be used for later simulations to predict future KPI development. At the end of the paper, we provide a detailed evaluation of NEvo on a business use case to demonstrate its single-step prediction capabilities.
Marc Drobek, Wasif Gilani, Thomas Molka, Danielle Soban

An Empirical Study on Spreadsheet Shortcomings from an Information Systems Perspective

The use of spreadsheets to support business processes is widespread in industry. Due to their criticality in the context of those business processes, spreadsheet shortcomings can significantly hamper an organization's business operations. However, it is still unclear what the actual and typical shortcomings of spreadsheets applied as information systems are. Therefore, in this paper we present the results of an empirical study on spreadsheet shortcomings from an information systems perspective, focusing in particular on how spreadsheets perform in their respective business context. The result of our work is a list of 20 shortcomings which typically occur in practice.
Thomas Reschenhofer, Florian Matthes

A Mark-Based Temporal Conceptual Graph for Enhancing Big Data Management and Attack Scenario Reconstruction

The management of big data is mainly affected by the size of the big graph data that represents the huge volumes of data. The size of this structure may increase with the amount of data to be handled over time. As a consequence, querying time may be affected, and the introduced delay may not be tolerated by running applications. Moreover, the investigation of attacks through the collected massive data cannot be ensured using traditional approaches, which do not support big data constraints. In this context, we propose in this paper a novel temporal conceptual graph to represent big data and to optimize the size of the derived graph. The proposed scheme built on this novel graph structure enables tracing attacks back using big data. The efficiency of the proposed scheme for the reconstruction of attack scenarios is illustrated by a case study, in addition to a comparative analysis showing how smart big graph data is obtained through the optimization of the graph size.
Yacine Djemaiel, Boutheina A. Fessi, Noureddine Boudriga
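The core ideas — timestamped graph edges, bounding the graph size, and tracing attack steps backwards — can be sketched in a few lines. The pruning rule below is a deliberately crude stand-in for the paper's mark-based optimization:

```python
from collections import defaultdict

class TemporalGraph:
    """Edges carry timestamps; pruning old edges bounds the structure's size
    while recent activity (e.g. attack steps) remains traceable."""
    def __init__(self):
        self.edges = defaultdict(list)          # src -> [(dst, t), ...]

    def add_edge(self, src, dst, t):
        self.edges[src].append((dst, t))

    def prune_before(self, cutoff):
        """Drop edges older than `cutoff`, the size-optimization step."""
        for src in list(self.edges):
            kept = [(d, t) for d, t in self.edges[src] if t >= cutoff]
            if kept:
                self.edges[src] = kept
            else:
                del self.edges[src]

    def trace_back(self, target, since):
        """Sources with an edge into `target` at time >= `since` --
        one step of attack-scenario reconstruction."""
        return sorted(s for s, out in self.edges.items()
                      for d, t in out if d == target and t >= since)
```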

Semantic Technologies


Ontology-Based Creation of 3D Content in a Service-Oriented Environment

In this paper, an environment for the semantic creation of interactive 3D content is proposed. The environment leverages semantic web techniques and the SOA paradigm to enable ontology-based modeling of 3D objects and scenes in distributed web-based environments. The proposed solution combines visual and declarative approaches to 3D content creation. The possible use of various ontologies and knowledge bases permits modeling of content by users with different skills and at different levels of abstraction, thus significantly extending the possible uses of 3D on the web in various business scenarios.
Jakub Flotyński, Krzysztof Walczak

Modeling and Querying Sensor Services Using Ontologies

We propose in this paper a service description meta-model for describing services from functional and non-functional perspectives. The model is inspired by the frame-based modeling technique and is serialized in RDF (Resource Description Framework) using Linked Data principles. We apply this model to describing sensor services: modeling sensors and their readings enriched with non-functional properties. We also define a complete architecture for managing sensor data: collection, conversion, enrichment, and storage. We tested our prototype using live streams of sensor readings. The paper also reports on the time and storage required for the management and querying of sensor data.
Sana Baccar, Wassim Derguech, Edward Curry, Mohamed Abid
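Describing a sensor reading with functional and non-functional properties as subject-predicate-object triples can be sketched without a full RDF library. The namespace and property names below are invented for illustration; they are not the authors' vocabulary:

```python
def describe_reading(sensor_id, value, unit, accuracy):
    """Build RDF-style triples for one sensor reading, Linked Data style:
    the sensor and the reading both get dereferenceable URIs."""
    base = "http://example.org/sensors/"           # hypothetical namespace
    reading = f"{base}{sensor_id}/reading/1"
    return [
        (reading, "rdf:type", "ex:SensorReading"),
        (reading, "ex:producedBy", f"{base}{sensor_id}"),
        (reading, "ex:hasValue", str(value)),      # functional property
        (reading, "ex:unit", unit),                # functional property
        (reading, "ex:accuracy", str(accuracy)),   # non-functional property
    ]

def query(triples, predicate):
    """Minimal pattern match: all objects of a given predicate."""
    return [o for _, p, o in triples if p == predicate]
```

In a real deployment the triples would be serialized with an RDF library and queried with SPARQL; the tuple form only shows the shape of the data.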

Validation and Calibration of Dietary Intake in Chronic Kidney Disease: An Ontological Approach

This study develops a pilot knowledge-based system (KBS) for addressing the validation and calibration of dietary intake in chronic kidney disease (CKD). The system is constructed using the Web Ontology Language (OWL) and the Semantic Web Rule Language (SWRL) to demonstrate how a KBS approach can achieve sound problem-solving modeling and effective knowledge inference. For the experimental evaluation, data from 36 case patients were used for testing. The evaluation results show that, excluding the interference factors and certain non-medical reasons, the system achieves the research goal of CKD dietary consultation. In future studies, the problem-solving scope can be expanded to implement a more comprehensive dietary consultation system.
Yu-Liang Chi, Tsang-Yao Chen, Wan-Ting Tsai
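SWRL rules in such a KBS typically express threshold checks over patient data. The function below mimics that pattern in plain Python; the nutrient limits are invented for illustration and are not clinical guidance:

```python
def validate_intake(stage, intake):
    """Flag nutrients exceeding per-stage daily limits, in the spirit of a
    SWRL rule like: Patient(?p) ^ hasStage(?p, 4) ^ sodiumIntake(?p, ?x) ^
    swrlb:greaterThan(?x, 2000) -> exceedsSodiumLimit(?p, true)."""
    limits = {  # hypothetical limits per CKD stage, mg/day
        3: {"sodium": 2300, "potassium": 3000},
        4: {"sodium": 2000, "potassium": 2500},
    }
    return sorted(nutrient for nutrient, amount in intake.items()
                  if amount > limits[stage].get(nutrient, float("inf")))
```

The advantage of the ontological approach over such hard-coded logic is that limits and rules live in the knowledge base and can be revised without touching code.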

Content Retrieval and Filtering


Evaluation of the Dynamic Construct Competition Miner for an eHealth System

Business processes in some domains are highly dynamic and increasingly complex due to their dependencies on a multitude of services provided by various providers. The quality of these services directly impacts the business process's efficiency. A first prerequisite for any optimization initiative is a better understanding of the deployed business processes. However, the business processes are either not documented at all or only poorly documented. Since the actual behaviour of the business processes and underlying services can change over time, this dynamically changing behaviour must be detected in order to carry out correct analyses. This paper presents and evaluates the integration of the Dynamic Construct Competition Miner (DCCM) as a process monitor in the TIMBUS architecture. The DCCM discovers business processes and recognizes changes directly from an event stream at run-time. The evaluation is carried out in the context of an industrial use case from the eHealth domain. We describe the key aspects of the use case and the DCCM, and present the relevant evaluation results.
David Redlich, Mykola Galushka, Thomas Molka, Wasif Gilani, Gordon Blair, Awais Rashid
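Detecting changing behaviour from an event stream at run-time can be illustrated with a sliding-window comparison of activity frequencies. This is a crude stand-in for the DCCM's actual change detection, shown only to make the idea concrete:

```python
from collections import Counter

def behaviour_changed(window_a, window_b, tolerance=0.2):
    """Compare relative activity frequencies in two consecutive stream
    windows; report a change if any activity's share moved by more
    than `tolerance`."""
    freq_a, freq_b = Counter(window_a), Counter(window_b)
    for activity in set(freq_a) | set(freq_b):
        share_a = freq_a[activity] / len(window_a)
        share_b = freq_b[activity] / len(window_b)
        if abs(share_a - share_b) > tolerance:
            return True
    return False
```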

Dynamic Reconfiguration of Composite Convergent Services Supported by Multimodal Search

Composite convergent services integrate a set of functionalities from the Web and telecommunication domains. Due to the large number of available functionalities, automation of the composition process is required in many fields. However, automated composition is not feasible in practice if reconfiguration mechanisms are not considered. This paper presents a novel approach for the dynamic reconfiguration of convergent services that replaces malfunctioning regions of composite convergent services, taking user preferences into account. In order to replace the regions of services, a multimodal search is performed. Our contributions are a model for representing composite convergent services and a region-based algorithm for reconfiguring services supported by multimodal search.
Armando Ordóñez, Hugo Ordóñez, Cristhian Figueroa, Carlos Cobos, Juan Carlos Corrales

On Extracting Information from Semi-structured Deep Web Documents

Some software agents need information that is provided by web sites, which is difficult to obtain if the sites lack a query API. Information extractors are intended to extract the information of interest automatically and offer it in a structured format. Unfortunately, most of them rely on ad-hoc techniques, which makes them fade away as the Web evolves. In this paper, we present a proposal that relies on an open catalogue of features, which allows it to be adapted easily; we have also devised an optimisation that makes it very efficient. Our experimental results prove that our proposal outperforms other state-of-the-art proposals.
Patricia Jiménez, Rafael Corchuelo

A Novel Approach to Web Information Extraction

Business Intelligence requires the acquisition and aggregation of key pieces of knowledge from multiple sources in order to provide valuable information to customers. The Web is the largest source of information nowadays. Unfortunately, the information it provides is available in semi-structured, human-friendly formats, which makes it difficult to process in automated business processes. Classical propositional and ILP machine-learning techniques have been applied for this purpose. However, the former do not have enough expressive power, whereas the latter are more expressive but intractable with large datasets. Propositionalisation was devised as a means to provide propositional techniques with more expressive power, enabling them to exploit structural information in a propositional way that allows them to remain efficient. In this paper, we present a proposal to extract information from semi-structured web documents that uses this approach. It leverages a classical propositional machine-learning technique and enhances it with the ability to learn from an unbounded context, which helps increase its precision and recall. Our experiments prove that our proposal outperforms other state-of-the-art techniques in the literature.
Antonia M. Reina Quintero, Patricia Jiménez, Rafael Corchuelo

Business Process Management and Mining


Mining Multi-variant Process Models from Low-Level Logs

Process discovery techniques are a precious tool for analyzing the real behavior of a business process. However, their direct application to poorly structured logs may yield unreadable and inaccurate models. Current solutions rely on event abstraction or trace clustering, and assume that log events refer to well-defined (possibly low-level) process tasks. This reduces their suitability for logs of real BPM systems (e.g. issue management), where each event just stores several data fields, none of which fully captures the semantics of the performed activities. We here propose an automated method for discovering an expressive kind of process model, consisting of three parts: (i) a logical event clustering model, for abstracting low-level events into classes; (ii) a logical trace clustering model, for discriminating among process variants; and (iii) a set of workflow schemas, each describing one variant in terms of the discovered event clusters. Experiments on real-life data confirmed the capability of the approach to discover readable, high-quality process models.
Francesco Folino, Massimo Guarascio, Luigi Pontieri
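The first part of the approach — abstracting low-level events into classes from their data fields — can be approximated by grouping events on a signature built from selected fields. A real logical clustering model would learn which fields matter; here they are simply given:

```python
from collections import defaultdict

def abstract_events(events, fields):
    """Map each low-level event (a dict of data fields) to an abstract
    class labelled by the values of the chosen fields."""
    classes = defaultdict(list)
    for event in events:
        signature = tuple(event.get(f) for f in fields)
        classes[signature].append(event)
    return classes
```

Once events are grouped into classes, each trace can be rewritten over the class labels and fed to a conventional discovery algorithm.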

On Improving the Maintainability of Compliance Rules for Business Processes

Business process regulatory compliance management (RCM) ensures that the business processes of an organization are in accordance with laws and other domain-specific regulations. In order to achieve compliance, various approaches advocate checking process models using formal compliance rules that are derived from regulations. However, this shifts the problem of ensuring compliance to the rules: for example, the derived rules have to be updated when regulations change. In this paper we show how existing RCM solutions can be extended with traceability between compliance rules and regulations. Traceability supports the alignment of regulations and rules and thus helps improve the overall maintainability of compliance rules.
Sven Niemand, Sven Feja, Sören Witt, Andreas Speck

Evolutionary Computation Based Discovery of Hierarchical Business Process Models

Business process models that describe how the execution of work in a business is structured are an important asset of modern enterprises. They serve as documentation and, if easily understandable, allow process stakeholders to make better decisions on the business process. Traditionally, these models have been created manually after analyzing the process, which can lead to outdated information when changes are introduced into the process. Today, information systems connected to the business processes log event data reflecting the real execution of the processes, and process discovery techniques have been developed to automatically extract models from these event logs. Most of these techniques discover well-formalized models such as Petri nets, which can be hard to understand in the case of larger process models. The evolutionary computation based approach presented in this paper discovers process models complying with the specification of BPMN, one of the most used but not well formalized notations for documenting business processes. Our approach limits the set of possible process models to hierarchically structured models, and therefore yields well-structured and simple results. An evaluation with eight event logs shows that, despite the limitation to well-structured and simple models, the approach delivers competitive results when compared with other process discovery techniques.
Thomas Molka, David Redlich, Wasif Gilani, Xiao-Jun Zeng, Marc Drobek

Collaboration


Empowering End-Users to Collaboratively Structure Processes for Knowledge Work

Knowledge work is becoming the predominant type of work in many countries and is involved in the most important processes in organizations. Despite its increasing importance, business information systems still lack appropriate support for knowledge-intensive processes, since existing workflow management solutions are too rigid and provide no means to deal with unpredictable situations. Future business information systems that attempt to improve this support need to solve the problem of enabling non-expert users to structure their processes. The recently published Case Management Model and Notation (CMMN) might overwhelm non-expert users. Our research hypothesis is that end-users can be empowered to structure processes for knowledge work. In an evaluation with two student teams working on a software development project, our solution improved the information structure, reproducibility, progress visualization, work allocation, and guidance without losing flexibility compared to a leading agile project management tool.
Matheus Hauder, Rick Kazman, Florian Matthes

Social Collaboration Solutions as a Catalyst of Consumer Trust in Service Management Platforms - A Research Study

We can observe a relatively low popularity of service management platforms (SMPs). Therefore, there is a need to search for the causes of this situation and for remedies. In the course of the conducted research, the authors identified a significant problem: a lack of social trust in SMPs and service providers on the Internet. One of the ways to reduce its scale is to use social collaboration solutions (SCSs). Relying on transparency of activities and on social and professional relations, they can bring the consumer closer to the service provider and its customer community through various stimuli resulting from social interactions.
The aim of this paper is to determine consumer attitudes towards SMPs and to identify and analyse the impact of the determinants of the usefulness of SCSs in building trust in service providers and SMPs. The questionnaire survey, conducted by means of a CSAQ and covering the Polish market, identified consumers' experiences with SMPs to date and their expectations of such platforms.
This paper is part of a project funded by the National Science Centre awarded on the basis of the decision number DEC-2011/03/HS4/04291.
Łukasz Łysik, Robert Kutera, Piotr Machura

The Relative Advantage of Collaborative Virtual Environments and Two-Dimensional Websites in Multichannel Retail

Collaborative Virtual Environments (CVEs) have been with us for some years; however, their potential is unclear. This research attempts to achieve a better understanding of retail in CVEs from the consumer viewpoint by comparing this channel with the competing retail channels of ‘bricks and mortar’, or offline, and two-dimensional navigation websites (2D websites), in order to identify their respective Relative Advantages (RAs). Five categories of RA between retail channels were identified and explored using focus groups and interviews. These five categories cover distinct characteristics of each channel, consumer preferences over three stages of a purchase, differences between simple and complex products, and lastly the role of trust. Participants showed a preference for the offline and 2D channels in most situations; however, there was evidence that enjoyment, entertainment, sociable shopping, the ability to reinvent yourself, convenience, and institutional trust were RAs of CVEs in comparison to one of the other two channels.
Alex Zarifis, Angelika Kokkinaki

Enterprise Architecture and Business-IT Alignment


Combining Business Processes and Cloud Services: A Marketplace for Processlets

Context: Business agility has been approached on the BPM level using process reuse and on the execution level using cloud computing. Objective: We combine these levels by linking business process repositories and cloud-based services. The goal is to provide a framework for BPaaS marketplaces by introducing Processlets as cloud-based implementations of business processes. Method: We design the marketplace's components and functionalities based on the existing literature and survey relevant technologies. Results: We identify the features of the marketplace's components and discuss suitable technologies for implementation. Conclusion: Our framework fosters the reuse of business processes and mitigates the problem of a scattered cloud market.
Danillo Sprovieri, Sandro Vogler

The Nature and a Process for Development of Enterprise Architecture Principles

Enterprise architecture management (EAM) is expected to contribute to, e.g., strategic planning and business-IT alignment by capturing the essential structures of an enterprise. Enterprise architecture principles (EAPs) are among the subjects in EAM research that have received increasing attention in recent years, but are still not fully covered regarding their characteristics and their development and use in practice. The aim of this paper is to contribute to the EAM field by investigating the nature of EAPs and by proposing and validating a development process for EAPs. The main contributions of the paper are (a) an analysis of the characteristics of EAPs, (b) an initial development process for EAPs, and (c) the results of expert interviews for validating this process.
Kurt Sandkuhl, Daniel Simon, Matthias Wißotzki, Christoph Starke

Interrelations Between Enterprise Modeling Focal Areas and Business and IT Alignment Domains

Efficient support of business needs, processes, and strategies by information technology is key to successful enterprise functioning. The challenge of Business and IT Alignment (BITA) has been acknowledged and actively discussed by academics and practitioners for more than two decades. On the one hand, in order to achieve BITA it is necessary to analyse an enterprise from multiple perspectives. On the other hand, it is also necessary to deal with the multiple points of view of the involved stakeholders and to create a shared understanding between them. With respect to both of these needs, enterprise modeling (EM) is considered a useful practice, as it allows the representation of various focal areas of an enterprise and facilitates consensus-driven discussion between the involved stakeholders. The various focal areas of an enterprise are linked to the various domains that BITA includes; for example, the time and function focal areas from the Zachman Framework contribute to the analysis of the business domain of BITA. This and other interrelations are investigated in this study by conceptually positioning Zachman's focal areas on the Information System strategy triangle and the Generic Framework for Information Management.
Julia Kaidalova, Elżbieta Lewańska, Ulf Seigerroth, Nikolay Shilov

Specific BIS Applications


Multi-dimensional Performance Measurement in Complex Networks – Model and Integration for Logistics Chain Management

Global demands and emerging markets have resulted in a rise in the complexity of logistics over the last years. Furthermore, producing companies have outsourced their logistics for reasons of flexibility. Hence, clients consult solution specialists for customer-focused end-to-end processes. The 4th-party logistics business model offers overall service management throughout an entire logistics chain by planning and controlling its inherent processes. But because physical provisioning is outsourced to 3rd-party providers, performance measurement is essential in order to meet service levels. Measuring sub-providers poses difficulties because of differing process definitions and communication standards. A relief is the growing amount of information available during the process lifecycle. In this paper we present an integrated, strategically aligned provider rating model, applicable throughout the entire logistics process and capable of summing up operational information into ratios that are further aggregated into key performance indicators. Thereafter, we demonstrate its applicability through a prototypical implementation.
Lewin Boehlke, André Ludwig, Michael Gloeckner
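Aggregating operational measurements into ratios and then into weighted key performance indicators follows a simple two-level pattern. The ratio names and weights below are illustrative, not taken from the paper's rating model:

```python
def aggregate_kpi(ratios, weights):
    """Combine normalized ratios (each in 0..1) into a single KPI
    via a weighted average -- one aggregation level of a provider rating."""
    total_weight = sum(weights.values())
    return sum(ratios[name] * w for name, w in weights.items()) / total_weight
```

For example, an overall provider rating could weight an on-time-delivery ratio more heavily than a damage-free ratio; the same function can then be reapplied to roll several KPIs up to a strategic score.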

Engaging Facebook Users in Brand Pages: Different Posts of Marketing-Mix Information

This research explores the posting quantity and posting types most likely to create brand engagement on Facebook brand-fan pages. Using content analysis, the study explores five posting types — product, price, place, promotion, and others — by analyzing 1,577 posts from 183 brand-fan pages. Findings suggest that a high posting amount can increase brand popularity. Thus, a brand-fan page's content provider should add more price information, promotional information, other informational content, and emotional content to posted messages. This work extends the social media marketing literature by including the concept of the marketing mix along with uses and gratifications (U&G). It also combines likes, comments, and shares to represent post popularity. The study provides some managerial implications for effectively planning a content strategy to engage more fans or potential customers on brand-fan pages.
Mathupayas Thongmak
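The content analysis behind the study assigns each post to a marketing-mix category. A keyword-based classifier gives the flavor of that coding step; the keyword lists are invented for illustration, since the actual study relied on human coders:

```python
def categorize_post(text):
    """Assign a post to the first matching marketing-mix category."""
    keywords = {
        "price": ["price", "discount", "$"],
        "promotion": ["giveaway", "contest", "win"],
        "place": ["store", "branch", "online shop"],
        "product": ["new", "feature", "launch"],
    }
    lowered = text.lower()
    for category, words in keywords.items():
        if any(w in lowered for w in words):
            return category
    return "other"
```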

Context Variation for Service Self-contextualization in Cyber-Physical Systems

Operation and configuration of Cyber-Physical Systems (CPSs) require approaches for managing the variability at design time and the dynamics at runtime caused by a multitude of component types and changing application environments. As a contribution to this area, this paper proposes to integrate concepts for variability management with approaches for self-organization in intelligent systems. Our approach exploits the idea of self-contextualization to autonomously adapt the behaviors of multiple services to the current situation. More concretely, we put the “context” of a CPS into the conceptual focus of our approach and propose context variants for use in the self-contextualization of CPSs. The main contributions of this paper are the identification of challenges in the variability management of CPSs based on an industrial case, the integration of context variants into the reference model for self-contextualizing services, and an initial validation using a case study.
Alexander Smirnov, Kurt Sandkuhl, Nikolay Shilov, Nikolay Teslya

EPOs, an Early Warning System for Post-merger IS Integration

Our approach allows for identifying and assessing indicators that signal the risk of failure in the post-merger integration of information systems with the help of an early warning system. The applied research method is Design Science Research, complemented by a real case study. The main artifact is the EPOs prototype we developed, which applies semantic technologies, namely an ontology and an inference engine, for the automatic identification, validation, and quantification of risk indicators. If a certain threshold is reached, a warning message is triggered, including the weight of the risk of failure. We evaluated our approach based on two real-world scenarios.
Erol Unkan, Barbara Thönssen
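The early-warning logic — quantify indicators, compare a weighted score against a threshold, emit a warning with its weight — can be sketched as follows. The indicator names and weights are invented for illustration; the prototype derives its indicators from an ontology and an inference engine:

```python
def check_risk(indicators, weights, threshold):
    """Weighted risk score over quantified indicators (each in 0..1);
    returns a warning dict once the threshold is reached, else None."""
    score = (sum(indicators[n] * w for n, w in weights.items())
             / sum(weights.values()))
    if score >= threshold:
        return {"warning": True, "weight": round(score, 2)}
    return None
```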

Open Data for BIS


The Adoption of Open Data and Open API Telecommunication Functions by Software Developers

The primary objective of this work is a preliminary investigation of the adoption of Open Data and Open API telecommunication functions by software developers. The analysis is based on statistical data collected during developer contests. Based on the Open API Hackathon and Business Intelligence API Hackathon contests, the interest of software developers in telecommunication operator functions and in Open Data exposed in the form of Open APIs is assessed. Conclusions on the categories of open city data attracting the attention of software developers are formulated.
Sebastian Grabowski, Maciej Grzenda, Jarosław Legierski

The Use of Open API in Travel Planning Application – Tap&Plan Use Case

The paper presents a new solution to the problem of getting around a foreign city, using a mobile application as an assistant. The app uses data provided through Orange APIs, as well as other data sources available on the Internet, to reach this goal.
Michał Karmelita, Marcin Karmelita, Piotr Kałużny, Piotr Jankowiak

