
About this Book

This book constitutes the refereed proceedings of the five workshops that were organized in conjunction with the International Conference on Business Information Systems, BIS 2015, which took place in Poznań, Poland, in June 2015. The 26 papers in this volume were carefully reviewed and selected from 56 submissions and were revised and extended after the event. The workshop topics covered knowledge-based business information systems (AKTB), business and IT alignment (BITA), transparency-enhancing technologies and privacy dashboards (PTDCS), semantics usage in enterprises (FSFE), and issues related to DBpedia. In addition, two keynote papers are included in this book.

Table of Contents

Frontmatter

Keynote Speech

Frontmatter

Listening to and Visualising the Pulse of Our Cities Using Social Media and Call Data Records

Abstract
Methods and technologies exist to capture a sharp digital reflection of a city and to track its evolution with a decreasing delay, but decision makers will not benefit from our ability to feel the pulse of a city unless we provide them with tools to visually perceive emerging patterns and observe their dynamics. This keynote reports on a two-year experience of feeling the pulse of Milan with CitySensing. This system, developed by Politecnico di Milano and Telecom Italia, allows fusing and visually making sense of social media streams and privacy-preserving aggregates of Call Data Records. The effectiveness of CitySensing was demonstrated during Milan Design Week 2013 and 2014 by realising a visual analytics dashboard for event managers and a public storytelling installation.
Emanuele Della Valle, Marco Balduini

The Journey is the Reward - Towards New Paradigms in Web Search

Abstract
Without search engines, the information content of the World Wide Web would remain largely closed to the ordinary user. Current web search engines work well as long as the user knows what she is looking for. The situation becomes problematic if the user has insufficient expertise or prior knowledge to formulate the search query. Often a sequence of search requests is necessary to answer the user’s information needs, because knowledge has to be accumulated first to determine the next search query. On the other hand, retrieval systems for traditional archives face the problem that there is not always a result for an arbitrary search query, simply because of the limited number of documents available. Semantic search systems (try to) determine the meaning of the content of the archived documents first and thus, in principle, are able to overcome the problems of traditional keyword-based search engines with the processing of natural language. Moreover, content-based relationships among the documents can be used to filter, navigate, and explore the archive. Content-based ‘intelligent’ recommendations help to open up the archive and to discover new paths across the search space.
Harald Sack

AKTB Workshop

Frontmatter

Quantitative Research in High Frequency Trading for Natural Gas Futures Market

Abstract
High frequency trading (HFT) at micro- or millisecond scale has recently drawn the attention of financial researchers and engineers. Nowadays, algorithmic trading and HFT account for a dominant part of overall trading volume. The main objective of this research is to test a statistical arbitrage strategy in the HFT natural gas futures market. The arbitrage strategy attempts to profit by exploiting price differences between successive futures contracts on the same underlying asset. It takes long/short positions when the spread between the contracts widens, expecting the prices to converge back in the near future. In this study, high frequency bid/ask and last trade records were collected from the NYMEX exchange. The strategy was backtested using MATLAB. Statistical arbitrage and HFT have given positive results and refuted the efficient market hypothesis. The strategy can be of interest to financial engineers, market microstructure developers or market participants implementing high frequency trading strategies.
Saulius Masteika, Mantas Vaitonis
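The long/short logic the abstract describes (enter when the spread between successive contracts widens, exit when it reverts) can be illustrated with a minimal Python sketch. The rolling window and z-score thresholds are illustrative assumptions, not the authors' actual parameters or implementation:

```python
# Hypothetical sketch of a calendar-spread statistical arbitrage rule:
# trade when the spread between two successive futures contracts widens
# beyond a z-score threshold, and close when it reverts toward its mean.
# Window and thresholds are illustrative, not the paper's settings.

def spread_signals(near, far, window=5, entry=2.0, exit=0.75):
    """Return a list of (index, action) trading signals."""
    spreads = [f - n for n, f in zip(near, far)]
    signals, position = [], 0
    for i in range(window, len(spreads)):
        hist = spreads[i - window:i]
        mean = sum(hist) / window
        var = sum((s - mean) ** 2 for s in hist) / window
        std = var ** 0.5 or 1e-9          # avoid division by zero
        z = (spreads[i] - mean) / std
        if position == 0 and abs(z) > entry:
            # short the spread when it widens upward, long when downward
            position = -1 if z > 0 else 1
            signals.append((i, "enter_short" if z > 0 else "enter_long"))
        elif position != 0 and abs(z) < exit:
            position = 0
            signals.append((i, "exit"))
    return signals
```

With a synthetic price series whose spread spikes once and then reverts, the function emits one entry and one exit signal.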

Preserving Consistency in Domain-Specific Business Processes Through Semantic Representation of Artefacts

Abstract
Large organizations today face a growing challenge of managing heterogeneous collections of business processes. Explicit semantics inherent to domain-specific models can help alleviate some of the management challenges. Starting with concept definitions, designers can create domain-specific processes and eventually generate industry-standard BPMN for use in BPMS solutions. However, any of these artefacts (concepts, domain processes and BPMN) can be modified by various stakeholders, and changes made by one person may influence models used by others. There is therefore a need for tool support to keep track of the changes made and their impacts on different stakeholders. In this paper we present an approach towards providing such support, based on a semantic layer that records the provenance of the information and accordingly propagates the impacts of changes to related resources, and we demonstrate its applicability with an illustrative example.
Nikolaos Lagos, Adrian Mos, Jean-Yves Vion-Dury, Jean-Pierre Chanod

Decision Model for the Use of the Application for Knowledge Transfer Support in Manufacturing Enterprises

Abstract
This article elaborates a decision-making model for the effective use of an application for knowledge transfer support in manufacturing enterprises, using the GMDH method. It focuses on a set of characteristics of knowledge workers in manufacturing companies and is based on a survey and data obtained from 119 Polish manufacturing enterprises. The article develops a framework for how knowledge workers can determine knowledge transfer in a manufacturing company and discusses the research results.
Justyna Patalas-Maliszewska, Irene Krebs

Additional Knowledge Based MOF Architecture Layer for UML Models Generation Process

Abstract
As organizations grow and their information systems (IS) become larger, accurate and complex analyses are needed when developing systems. These analyses are the basis for the design of business models and system architecture. Designers face new challenges every day when they need to reconcile and understand models made by other designers and analysts. This process creates new problems and mistakes. Automation of the IS engineering process allows the creation of better, more qualified models with fewer mistakes. The article discusses the MOF architecture’s role in the IS engineering process and the possibilities of improving the MOF architecture’s composition with a new knowledge-based layer. The main scope of the article is to analyse the use of the improved MOF architecture in UML model generation with transformation algorithms from an enterprise model.
Ilona Veitaite, Audrius Lopata

A Cluster Analysis on the Default Determinants in the European Banking Sector

Abstract
The aim of this paper is to identify the relationship between banks’ probability of default and their risk-taking incentives. Exploring a large set of bank-level financial data from 203 European banks during 2005–2013, we apply cluster analysis. The results indicate two very different groups inside the dataset within each year, whether using hierarchical trees or the k-means clustering algorithm. Also, the composition of the clusters remains unchanged during the crisis years compared with the pre-crisis period for the vast majority of the instances. Finally, when mapping the clusters to the distance to default computed through the z-score variable, we show that banks with large size and high liquidity risk have a higher default risk.
Darie Moldovan, Simona Mutu
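The two-group clustering described above can be sketched with a tiny k-means (k = 2) over hypothetical bank-level features. The feature pairs and the deterministic initialization below are assumptions for illustration, not the study's actual setup (which used financial ratios for 203 banks):

```python
# Minimal k-means sketch (k = 2) on hypothetical 2-D bank features,
# e.g. (size, liquidity risk). Illustrative only; a real analysis would
# standardize financial ratios and use library implementations.

def kmeans2(points, iters=10):
    """Partition 2-D points into two clusters; returns (labels, centers)."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    # Deterministic init: first point and the point farthest from it.
    centers = [points[0], max(points, key=lambda p: dist2(p, points[0]))]
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [0 if dist2(p, centers[0]) <= dist2(p, centers[1]) else 1
                  for p in points]
        for k in (0, 1):
            members = [p for p, l in zip(points, labels) if l == k]
            if members:  # recompute each center as the cluster mean
                centers[k] = (sum(p[0] for p in members) / len(members),
                              sum(p[1] for p in members) / len(members))
    return labels, centers
```

Banks whose feature profiles differ strongly land in separate clusters, which could then be compared against a z-score distance-to-default measure as the abstract describes.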

Process Optimization and Monitoring Along Big Data Value Chain

Abstract
The article deals with Big Data (BD), which is big not only in volume but also in velocity or variety; the combined effect of these specific characteristics creates different “portraits” of BD in various domains, challenging value extraction from data. The Big Data value chain means that enterprises have to build skills for dealing with data at all stages of its life cycle: starting from recognizing the need to register and store data items, moving on to their appropriate representation and visualization, processing data with best-fit algorithms, applying methods to gain insights, finding valuable decisions in uncertain situations, and elaborating tools to control the effectiveness of the BD value chain processes. We follow the entirety of the processes used for BD monitoring along its value chain and optimize them for extracting the highest possible value. The goal of the paper is to describe innovative solutions in domain-driven process optimization and monitoring along the BD value chain. We analyse the specific characteristics of BD through the suggested BD portrait concept, its impact on BD analysis along the entire value chain, and the transfer of research results to other research domains. The presented case study highlights the problems of practical Big Data value chain implementation.
Dalia Kriksciuniene, Virgilijus Sakalauskas, Balys Kriksciunas

BITA Workshop

Frontmatter

Means for Building Models to Align Information Systems Support to Specific Application Domains

Abstract
Information Systems (IS) are today the cornerstone of modern organizations, in which they support specific application domains or business areas that can give a strategic advantage. In this context, strategic alignment, which exists when IS and business goals and activities are in concordance, becomes crucial. Several approaches have been proposed for building strategic alignment. However, aligning IS support to specific application domains requires dealing with their specific characteristics. In the literature, strategic alignment approaches addressing these specific characteristics have been proposed by extending the Strategic Alignment Model (SAM) [1]. Nevertheless, means for building new models from the SAM constitutive elements have not been proposed yet. To address this gap, we propose two metamodels, the SAM static view metamodel and the SAM dynamic view metamodel, and a methodology for using them. An illustrative example shows the applicability of our approach.
Oscar Avila, Virginie Goepp

Business and IT Alignment in Turbulent Business Environment

Abstract
Business and IT alignment is currently cited as one of the most important concerns of companies, because it is considered a key concept for meeting the requirements of a rapidly changing business. The goal of alignment in such circumstances can be defined as aligning the speed of the enterprise’s change with the speed of the environment’s change. An alignment model for companies operating in a turbulent business environment is proposed here. The model assumes the coexistence of two flows of changes: a “top-down” flow from the business model to the operational level and a “bottom-up” flow, which adapts the organization to minor changes. A model of change that includes three phases (change detection, modeling, and solution implementation) is developed, and key factors that determine the time of change are discussed. Quantitative parameters that can help to detect changes are also proposed.
Yuri Zelenkov

Lightweight Metrics for Enterprise Architecture Analysis

Abstract
The role of an Enterprise Architecture model is not limited to a graphical representation of an organization and its dynamics. Rather, it is also a tool for analysis and rational decision making. If firms do not use their enterprise architecture model to aid decision making, they run the risk of underutilizing its true potential. This paper proposes seven easily computable metrics to measure the criticality and impact of any element in an Enterprise Architecture model. These metrics will aid managers in decision making and are suitable for boardroom discussions.
Prince M. Singh, Marten J. van Sinderen

Experiences from Selecting a BPM Notation for an Enterprise

Abstract
Much research work in business process modeling (BPM) has been spent on determining which notation is most suitable for industrial practice. However, a lot of this work was performed in academic environments instead of industrial settings. This paper introduces and discusses a case of selecting a BPM notation for a medium-sized enterprise from utility industries. The steps taken in this decision making process include the analysis of requirements originating from regulation in the domain, a survey among the future users of the notation, and the evaluation of candidate notations by expert users. The main contributions of this paper are (1) a case study from energy industries showing issues and challenges when deciding on the most suitable notation, (2) a survey comparing the understandability of notations from the users’ perspective and (3) experiences from the decision making process.
Kurt Sandkuhl, Jörn Wiebring

Workplace Innovation in Swedish Local Organizations - Technology Aspect

Abstract
Workplace innovation (WI) is important for providing better work opportunities and increasing productivity. WI at the individual task level concerns the structure of individual work tasks. A number of surveys have measured WI at the individual task level; however, they paid little attention to the work environment, in particular to supportive technology. This paper presents a case study of WI in two Swedish organisations with a focus on the alignment of ICT and individual work tasks. We carried out seven interviews with workers at different job levels and in different sectors. The qualitative data analysis identified four themes: business processes, working roles, data sources, and technology. The analysis was facilitated by constructing BPMN (Business Process Model and Notation) diagrams for the identified business processes. We discovered that the supportive technology in the organisations is adequate but decidedly traditional. We argue that technology is an important factor and enabler for WI. Finally, we present an architectural model that provides a direction for future work on WI taking ICT as the basis.
He Tan, Andrea Resmini, Vladimir Tarasov, Anders Adlemo

Cyber-Physical Systems in an Enterprise Context: From Enterprise Model to System Configuration

Abstract
Cyber-Physical Systems (CPS) can provide significant benefits in different areas of human activity. However, the existing gap between the methods, modelling approaches and viewpoints of the disciplines involved in CPS development creates significant difficulties for creating viable CPS solutions. In this paper we try to bridge this gap by linking the business model discipline with the system design and configuration disciplines. We investigate how a business model captured in an enterprise model can be used for configuring a CPS that is part of implementing the business model. The results are illustrated with an example from the transportation industry.
Kurt Sandkuhl, Alexander Smirnov, Nikolay Shilov

FSFE Workshop

Frontmatter

Improving Document Exchanges in the Supply Chain

Abstract
In order to help businesses communicate fruitfully, we present a solution based on ontology alignment for integrating business documents. We focus on detecting and resolving semantic conflicts encountered during the integration process due to the different terminologies used in xCBL, cXML and RosettaNet. Our contribution draws on research in the ontology alignment area and serves as an empirical study testing whether an alignment solution can overcome the heterogeneity problems between business systems. As a case study, we apply alignment to purchase order ontologies, a common task in the supply chain.
Jamel Eddine Jridi, Guy Lapalme

Mobile@Old an ADL Solution

Abstract
Mobile@Old is a friendly, low-cost intelligent AAL platform designed to meet the needs of elderly users, with the purpose of assisting older people with their daily activities, maintaining physical and cognitive fitness, and maintaining connections to their close ones while increasing their safety, autonomy, self-confidence and mobility. Our paper considers four main scenarios identified through interviews with elders and caregivers: 1. Med (Medicine), designed for monitoring vital parameters and assessing the situation of a person who forgets to take their medicine, in order to improve their condition; 2. Rem (Reminder), focused on problems related to cognitive ageing; 3. VSM (Vital Sign Monitoring), activity analysis for monitoring vital parameters using medical expertise and observed behaviours; 4. PAT (Physical Activity Trainer), which recommends additional exercise if it detects a low level of physical activity. This solution is elder-centred, taking into account individual particularities, illness, level of acceptance and usability.
Lucia Rusu, Sergiu Jecan, Dan Sitar

Aspect OntoMaven - Aspect-Oriented Ontology Development and Configuration with OntoMaven

Abstract
In agile ontology-based software engineering projects support for modular reuse of ontologies from large existing remote repositories, ontology project life cycle management, and transitive dependency management are important needs. The contribution of this paper is a new design artifact called OntoMaven combined with a unified approach to ontology modularization, aspect-oriented ontology development, which was inspired by aspect-oriented programming. OntoMaven adopts the Apache Maven-based development methodology and adapts its concepts to knowledge engineering for Maven-based ontology development and management of ontology artifacts in distributed ontology repositories. The combination with aspect-oriented ontology development allows for fine-grained, declarative configuration of ontology modules.
Adrian Paschke, Ralph Schaefermeier

PTDCS Workshop

Frontmatter

A Rule-Based Approach for Detecting Location Leaks of Short Text Messages

Abstract
As of today, millions of people share messages via online social networks, some of which probably contain sensitive information. An adversary can collect these freely available messages and specifically analyze them for privacy leaks, such as the users’ location. Unlike other approaches that try to detect these leaks using complete message streams, we put forward a rule-based approach that works on single and very short messages to detect location leaks. We evaluated our approach on 2817 tweets from the Tweets2011 data set. It scores significantly better (accuracy = 84.95 %) at detecting whether a message reveals the user’s location than a machine-learning baseline and three extensions using heuristics. An advantage of our approach is that it applies not only to online social network messages but can also be extended to other areas (such as email, military, and health) and to other languages.
Hoang-Quoc Nguyen-Son, Minh-Triet Tran, Hiroshi Yoshiura, Noboru Sonehara, Isao Echizen
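A rule-based detector of the kind the abstract describes can be sketched with a few regular-expression rules applied to a single short message. The rules below are simplified illustrations invented for this sketch, not the paper's actual rule set:

```python
import re

# Illustrative location-leak rules for short messages: flag first-person
# presence or movement phrases followed by a capitalized place-like token.
# These patterns are assumptions for demonstration only.
LOCATION_RULES = [
    re.compile(r"\bI(?:'m| am)\s+(?:at|in|near)\s+[A-Z]\w*"),
    re.compile(r"\b(?:arrived|landed|checked in)\s+(?:at|in)\s+[A-Z]\w*"),
    re.compile(r"\b(?:heading|going|driving)\s+to\s+[A-Z]\w*"),
]

def reveals_location(message):
    """Return True if any rule matches the single, short message."""
    return any(rule.search(message) for rule in LOCATION_RULES)
```

Because each rule inspects one message in isolation, no message stream or user history is needed, matching the single-message setting the abstract emphasizes.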

Formal Reasoning About Privacy and Trust in Loyalty Systems

Abstract
Individuals disclose personal information to complex services via their front-ends, which interact with underlying sub-services. The services involve multiple collaborating parties that may share the collected personal data to accurately profile individuals. Even though their data handling practices are declared in their privacy policies, these are still opaque to individuals. Data protection regulations restrict service providers to collecting only the personal data that is strictly necessary for their purposes. The present paper shows the potential of a logic-based framework for analyzing the privacy of electronic services by applying the approach to two loyalty schemes. Different query types are defined that provide meaningful feedback for both end users and service designers.
Koen Decroix, Jorn Lapon, Laurens Lemaire, Bart De Decker, Vincent Naessens

DPIP: A Demand Response Privacy Preserving Interaction Protocol

For Trusted Third Party Involvement
Abstract
We propose a demand response privacy preserving interaction and intermediary protocol design (DPIP) that uses a trusted third party to facilitate the collection of Smart Metering data and to perform Demand Response while preserving the privacy of residential customers in the Smart Grid. DPIP relies on intermediaries hiding customer program participation from Demand Response Aggregators (DRA), thus enhancing their anonymity, and it provides individualized privacy-preserving billing. This is the first of a series of papers; its focus is the protocol, while subsequent work discusses the role of differential privacy.
Keywords: Smart Grid Management, Privacy, Privacy Enhancing Technologies.
Markus Karwe, Günter Müller

Smart Privacy Visor: Bridging the Privacy Gap

Abstract
Due to the proliferation of devices with imaging capabilities, the number of pictures taken in public spaces has risen. As a result, bystanders are often unintentionally captured in pictures without being aware of it. Social networks and search engines make these images more easily accessible thanks to the available meta-data and the tagging and linking functionality provided by these services. Facial recognition amplifies the privacy implications for the individuals in these pictures. Overall, there are three main classes of wearable picture-related Privacy Enhancing Technologies (PETs). As they need different prerequisites to operate and become effective, they have unique time frames in which they can be effective even if introduced today. The group of face-pattern-destroying picture PETs works directly against current face detection algorithms and is the choice for immediate usage. These PETs destroy face patterns and inhibit the detection, automated processing and meta-data enrichment of individuals. This unconditionally destructive visual behavior can be a major obstacle in the transition to other PETs. In this paper, we describe how to master a smooth transition between these classes, including the restoration of the visual damage some of these methods entail. Furthermore, we propose the Smart Privacy Visor, a PET which combines the previously published Privacy Visor and the Picture Privacy Policy Framework. The overall goal of this transition is to create a PET that avoids the identifiable and linkable properties which contradict the goals of picture PETs in the first place, while offering a visually appealing photographic result at the same time.
Adrian Dabrowski, Katharina Krombholz, Edgar R. Weippl, Isao Echizen

De-anonymising Social Network Posts by Linking with Résumé

Abstract
We have developed a system for identifying the person who posted messages of interest. It calculates the similarity between the posts of interest and the résumé of each candidate person and then identifies the résumé with the highest similarity as that of the posting person. Identification accuracy was improved by using the posts of persons other than the target person. An evaluation with 30 student volunteers who permitted the use of their résumés and sets of tweets showed that using information from the tweets of other persons dramatically improved identification accuracy. Identification accuracy was 0.36 and 0.53 when the number of other persons was 4 and 9, respectively. The rate at which the target person could be narrowed down to 10 % of the candidates was 0.72 with both 4 and 9 other persons.
Yohei Ogawa, Eina Hashimoto, Masatsugu Ichino, Isao Echizen, Hiroshi Yoshiura

A Conceptualization of Accountability as a Privacy Principle

Abstract
While accountability is increasingly discussed as a privacy principle, it is far from clear how to achieve privacy protection through accountability. Moreover, it is even unclear how to define accountability in this context. This paper provides a conceptualization of accountability for the context of privacy protection based upon a review of the literature. The presented literature review aims at identifying a minimal core of accountability for the context of privacy protection to provide a foundation for requirements analysis for accountability-centric privacy protection mechanisms.
Christian Zimmermann, Johana Cabinakova

Personal Data as Payment Method in SNS and Users’ Concerning Price Sensitivity - A Survey

Abstract
The guiding question of this paper is whether Social Network Service (SNS) users show price sensitivity regarding the demanded extent of personal data disclosure, assuming personal data as a payment method within SNSs. In a survey among 300 Facebook (FB) users, the interviewees were asked which FB functions they actually use. In a quasi-experimental part of the survey they chose among the same functionalities, given a direct trade-off for specific personal data exploitation rights. The results show that the interviewees are sensitive regarding the price. Further, in general they chose fewer functions than they actually stated using in FB, even when the demanded “price” was lower. Moreover, the findings support the theory that users misinterpret SNS data exploitation rights.
Claus-Georg Nolte

DBpedia Community Meeting

Frontmatter

RDFa Live Browser Extension: Faceted Presentation and Tooltip Navigation over Linked Data on the Web

Abstract
Considerable research and development effort has been spent on publishing data in open standard formats. The main project in this regard is Linking Open Data, whose goal is to create an open and semantic Web of Data, enabling software agents to process and understand the data. However, machines are not the only ones that can take advantage of the explicit semantics of data. People can use the semantics of the data to explore unknown concepts and new relationships and to obtain personalized access to relevant resources and services. However, it is not trivial for a user without experience with the Web of Data to satisfactorily explore and use these data. This paper presents the RDFa Live Browser Extension, a prototype to support non-technical users in the presentation of and navigation over Linked Data on the Web.
Andre Carlomagno Rocha, Cassio V. S. Prazeres

Optimized Processing of Subscriptions to DBpedia Live

Abstract
DBpedia Live enables access to structured data extracted from Wikipedia in real time. A data stream generated from Wikipedia changes is instantly loaded into the DBpedia RDF store. Applications can benefit by subscribing to the RDF update stream and receiving continuous results from DBpedia. Providing a continuous update stream of changes to subscribed DBpedia queries is a challenging task due to the load it places on the RDF store.
In this paper, we propose an optimization approach for processing subscriptions to DBpedia Live. By monitoring the change data stream, query processing can be optimized to avoid the unnecessary processing load caused by continuous database polling. Queries are only re-processed when the system can detect a relation between incoming changes and queries, so that it can trigger the processing of the specific query. We evaluated our approach using a recorded history of the DBpedia change stream; as queries we used the most frequent DBpedia SPARQL queries obtained from the logs. A comparison of our approach with the interval-based database polling approach shows a significant reduction in processing costs.
Kia Teymourian, Alexandru Todor, Wojciech Łukasiewicz, Adrian Paschke
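The change-to-query matching idea described above can be sketched as triple-pattern matching: a subscribed query is re-processed only if some changed triple matches one of its triple patterns. The data structures and identifier names below are illustrative assumptions, not the authors' implementation:

```python
# Hedged sketch of subscription filtering over an RDF change stream.
# A pattern is a (subject, predicate, object) tuple where None acts as
# a wildcard/variable; subscription ids and URIs are illustrative.

def matches(pattern, triple):
    """A triple matches a pattern if every bound position agrees."""
    return all(p is None or p == t for p, t in zip(pattern, triple))

def affected_queries(subscriptions, changed_triples):
    """Return the ids of subscriptions touched by a batch of changes."""
    hit = set()
    for qid, patterns in subscriptions.items():
        if any(matches(p, t) for p in patterns for t in changed_triples):
            hit.add(qid)   # only these queries need re-processing
    return hit
```

Only the queries returned by `affected_queries` are re-evaluated against the store, avoiding the interval-based polling of every subscription that the abstract compares against.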

Modelling the Quality of Attributes in Wikipedia Infoboxes

Abstract
The quality of data in DBpedia depends on the underlying information provided in Wikipedia’s infoboxes. Various language editions can provide different information about a given subject, with respect to both the set of attributes and the values of these attributes. Our research question is which language editions provide correct values for each attribute, so that data fusion can be carried out. Initial experiments showed that the quality of attributes is correlated with the overall quality of the Wikipedia article providing them. Wikipedia offers functionality to assign a quality class to an article, but unfortunately the majority of articles have not been graded by the community, or the grades are not reliable. In this paper we analyse features and models that can be used to evaluate the quality of articles, providing a foundation for the relative quality assessment of infobox attributes, with the purpose of improving the quality of DBpedia.
Krzysztof Węcel, Włodzimierz Lewoniewski

DBpedia in the Art Market

Abstract
We investigate a new approach to art market analysis. The vast amount of structured data about creators in DBpedia can be used to enrich the artwork sales observations employed in hedonic regression. This approach can yield a new set of explanatory variables and thus more accurate art market indices and predictions.
Dominik Filipiak, Agata Filipowska

Backmatter
