
2016 | Book

Information and Software Technologies

22nd International Conference, ICIST 2016, Druskininkai, Lithuania, October 13-15, 2016, Proceedings


About this book

This book constitutes the refereed proceedings of the 22nd International Conference on Information and Software Technologies, ICIST 2016, held in Druskininkai, Lithuania, in October 2016.
The 61 papers presented were carefully reviewed and selected from 158 submissions. The papers are organized in topical sections on information systems; business intelligence for information and software systems; software engineering; and information technology applications.

Table of contents

Frontmatter

Information Systems: Special Session on Innovative Applications for Knowledge Transfer Support

Frontmatter
Modelling of Adequate Costs of Utilities Services

The paper proposes a methodology for benchmark modelling of adequate costs of utilities services, based on data analysis of factual cases (key performance indicators of utilities serve as the predictors). The proposed methodology was tested by modelling Latvian water utilities with three tools: (1) a classical multi-layer perceptron with an error back-propagation training algorithm, sharpened with task-specific monotony tests; (2) fitting of a generalized additive model in the programming language R, which made it possible to evaluate the statistical significance and confidence bands of the predictors; (3) a sequential iterative nonlinear regression process minimizing mean squared error, which provided a notion of the impact of each predictor on the sought regularity. The quality of the models is high: the adjusted coefficient of determination is greater than 0.75, explained deviance exceeds 0.80, and the correlation between the respective modelled values exceeds 0.95.

Janis Zuters, Janis Valeinis, Girts Karnitis, Edvins Karnitis
Views to Legal Information Systems and Legal Sublevels

This paper concerns the legal system and legal documentation systems, as well as their interconnectedness, and introduces the idea of legal sublevels. Examples of legal sublevels are legal terms, ontologies, annotations, commentaries, etc. A sublevel is treated as a representation level of the legal domain. In terms of software engineering, a sublevel can be defined as a level of infrastructural services for several domains. This paper is exploratory research in which an abstract theory is developed. A key question is “What are the sublevels in law and legal informatics?” We also examine the concept of view and project the core and peripheral areas around the legal system onto Schweighofer’s 8 views/4 methods/4 syntheses model. We link the idea of a sublevel with the notion of a view.

Vytautas Čyras, Friedrich Lachmayer, Erich Schweighofer
A Model of the Tacit Knowledge Transfer Support Tool: CKnow-Board

This article elaborates on the development of a dedicated Tacit Knowledge Transfer Support Tool for companies. The five main components of this tool are formulated: (1) Tacit Knowledge Source Identification; (2) Tacit Knowledge Source Determination using the FAHP (Fuzzy Analytic Hierarchy Process); (3) Tacit Knowledge Acquisition; (4) Tacit Knowledge Transformation; (5) a Tacit Knowledge Transfer Board. The proposed tool enables the identification of tacit knowledge in a company and the transformation of different types of knowledge into the form of explicit knowledge included in the Tacit Knowledge Transfer Board.

Justyna Patalas-Maliszewska, Irene Krebs
A Model of an ERP-Based Knowledge Management System for Engineer-to-Order Enterprises

Today’s enterprises require effective systems of knowledge management (KM) that guarantee proper know-how recording and processing in order to create innovative products and technologies. Part of a company’s know-how is recorded in an enterprise resource planning (ERP) system database as data and information which could be transformed into knowledge. In this paper, a model of a KM system for manufacturing companies that carry out engineer-to-order (ETO) production is proposed. The model is based on ERP systems and includes tools that enable data and information processing and its subsequent conversion into a form of knowledge. An analysis and understanding of the model may result in an improvement of innovation processes in high-tech manufacturing companies. The proposed model may additionally support the research and design activities of manufacturing enterprises. Illustrative examples are provided.

Sławomir Kłos
Cloud-Based Enterprise Information Systems: Determinants of Adoption in the Context of Organizations

Cloud computing is growing at a very fast pace. Enterprise information systems (EISs) such as ERP, SCM, and CRM are used in organizations in order to increase customer satisfaction and operational excellence, and to decrease operational costs. Looking at the widespread literature available on both EIS and cloud computing, few researchers have examined the integration of the two. While this area has not been fully investigated in academia due to the limited available literature, it has attracted significant interest from practitioners. Accordingly, the Cloud-EIS can be considered an important research problem. In this study, we investigate the factors influencing the usage and adoption of Cloud-EISs, taking the Technology-Organization-Environment (TOE) framework as the basis, in order to give directions to cloud service providers on how to design their products to increase adoption and usage. The Analytic Hierarchy Process (AHP) is used to rank the identified factors. The results show that the most significant factors influencing the usage and adoption of Cloud-EISs are security & privacy, business activity-cloud EIS fitness, top management support, trust, and the organization’s IT resources.

Umut Şener, Ebru Gökalp, P. Erhan Eren
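The AHP ranking step mentioned above can be sketched with the common geometric-mean approximation of the priority vector; the pairwise comparison matrix below is purely illustrative and is not taken from the paper:

```python
import math

def ahp_priorities(pairwise):
    """Approximate the AHP priority vector via the geometric-mean method."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]  # row geometric means
    total = sum(gm)
    return [g / total for g in gm]                          # normalize to sum 1

# Hypothetical 3-factor comparison (Saaty's 1-9 scale; values invented):
# factor 0 vs factor 1 vs factor 2.
A = [
    [1,     3,   5],
    [1 / 3, 1,   2],
    [1 / 5, 1 / 2, 1],
]
w = ahp_priorities(A)   # descending priorities for this illustrative matrix
```

The resulting weights sum to one and preserve the dominance expressed in the judgments, which is all the ranking step needs.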
Semi-supervised Learning Approach for Ontology Mapping Problem

The evolution of the Semantic Web depends on the growing number of ontologies it comprises. However, ontologies differ in structure and content because there is no unified standard for their design. To ensure interoperability and fluent information exchange, the correspondences between entities of different ontologies must be found and mapped. Many methods have already been proposed for matching heterogeneous ontologies, but they still have shortcomings and require improvements. This paper suggests a novel semi-supervised machine learning method, which solves the ontology mapping task as a classification problem with a training set comprising only labeled positive examples. Negative examples are generated artificially using an entropy measure in order to build a more accurate Naive Bayes classifier.

Rima Linaburgyte, Rimantas Butleris
Rule-Based Canonicalization of Arbitrary Tables in Spreadsheets

Arbitrary tables presented in spreadsheets can be an important data source in business intelligence. However, many of them have complex layouts that hinder the process of extracting, transforming, and loading their data into a database. The paper is devoted to the issues of rule-based data transformation from arbitrary tables presented in spreadsheets to a structured canonical form that can be loaded into a database by regular ETL tools. We propose a system for the canonicalization of arbitrary tables presented in spreadsheets as an implementation of our methodology for rule-based table analysis and interpretation. It enables the execution of rules, expressed in our specialized rule language called CRL, to recover implicit relationships in a table. Our experimental results show that particular CRL programs can be developed for different sets of tables with similar features to automate table canonicalization with high accuracy.

Alexey O. Shigarov, Viacheslav V. Paramonov, Polina V. Belykh, Alexander I. Bondarev

Information Systems: Special Session on e-Health Information Systems

Frontmatter
Algorithm for Colorblindness Detection Sets Generation

Currently there are numerous methods for detecting colorblindness. They differ in many aspects, e.g. the quality and accuracy of diagnosis, the spectrum of colors used during the test, or the ability to be reproduced on a mass scale. However, there is one thing that those methods have in common: they are based on predefined, limited sets of color elements. This paper presents an algorithm for the generation of colorblindness detection sets, so-called quodlibets: pairs of colors that are easily distinguishable by a person with normal color vision, but hard (or impossible) to tell apart, within a defined timeframe, by a person with a certain color vision disorder. The discussed algorithm allows the generation of many different quodlibets, which makes it possible not only to detect certain types of colorblindness but also to estimate the strength of a color vision impairment.

Maciej Laskowski
Analysis of Selected Elements of a Rower’s Posture Using Motion Capture – a Case Study

The paper presents research on the motion capture of rowers practising on the Concept II Indoor Rower ergometer, using a motion capture device produced by the Vicon company, as well as a wireless heart rate monitor. The aim of the article is to analyse and assess the extent of variation in the angular position of the rower’s back at different phases of rowing and at different degrees of fatigue. The back’s position during rowing was recorded using three markers: one being a component of the Plug-in Gait model (T10) and two additional ones (marked as S1 and S2). The results obtained with the motion capture system make it possible to clearly identify the degree of training of the subjects studied, their fatigue and their rowing technique, on the basis of the analysis of changes in the inclination of the back in different positions of the successive phases of rowing.

Jerzy Montusiewicz, Jakub Smolka, Maria Skublewska-Paszkowska, Edyta Lukasik, Katarzyna R. Baran, Izabela Pszczola-Pasierbiewicz
Analysis of Pulmonary and Hemodynamic Parameters for the Weaning Process by Using Fuzzy Logic Based Technology

This study aims at an accurate weaning process, which is very important for patients under mechanical ventilation. The weaning algorithm was designed in LabVIEW, using its fuzzy system designer. The algorithm has six input parameters: maximum inspiratory pressure (MIP), tidal volume on spontaneous breathing (TVS), minute ventilation (VE), heart rate, respirations per minute (RPM) and body temperature. 20 clinical scenarios were generated using Monte Carlo simulation and the Gaussian distribution method to evaluate the performance of the developed algorithm. An expert clinician evaluated each generated clinical scenario to determine the weaning probability for each patient. Student’s t-test at p < 0.05 was used to test for a statistical difference between the developed algorithm and the clinician’s evaluation. According to the t-test, there is no statistical difference, with 98.2 % probability, between the developed algorithm and the clinician’s evaluation of weaning probability.

Ugur Kilic, Mehmet Tahir Gulluoglu, Hasan Guler, Turgay Kaya
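A minimal sketch of the fuzzy-logic idea behind such a weaning algorithm: triangular membership functions for each input and a min-operator rule combination. The membership ranges below are invented for illustration and are not the paper’s clinical thresholds:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def weaning_score(rpm, heart_rate):
    """Degree (0..1) to which two inputs support weaning.

    Ranges are illustrative placeholders, not clinical values.
    """
    rpm_ok = tri(rpm, 8, 14, 25)            # "respiration rate acceptable"
    hr_ok = tri(heart_rate, 50, 75, 110)    # "heart rate acceptable"
    # AND of rule antecedents via the minimum operator (Mamdani-style)
    return min(rpm_ok, hr_ok)
```

A full system would aggregate several such rules over all six inputs and defuzzify the result into a weaning probability.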
Improving Business Processes in HIS by Adding Dynamicity

Dynamic patient treatment (PT) processes are usually human-centric and often unique. Therefore, PT processes implemented in an information system (IS) should be capable of adapting dynamically to real PT process changes at process instance runtime. Such processes are known as dynamic business processes (DBP). The problem of DBP modelling, simulation and execution has been actively investigated over the last few years, but existing approaches have been incomplete, i.e. they lack theory, a case study, or both. Moreover, the question of how to implement DBP modelling, execution and simulation is left without an answer. This paper presents a DBP modelling and simulation approach, which is realised in the hospital Santariskiu Klinikos, and a PT process case study is conducted. The results obtained show that the proposed approach and its implementation are suitable for DBP modelling and simulation in an HIS (Health Information System).

Diana Kalibatiene, Justas Trinkunas, Toma Rusinaite, Olegas Vasilecas

Information Systems: Special Session on Information and Software Technologies for Intelligent Power Systems

Frontmatter
On Defining and Assessing of the Energy Balance and Operational Logic Within Hybrid Renewable Energy Systems

The paper considers the problem of providing energy in small neighbourhoods consisting of households and local industrial or commercial units (e.g. farms, workshops, offices). Such communities may benefit from the use of Hybrid Renewable Energy Systems (HRES) harnessing solar and wind power, accumulating excess energy in Power Storage Banks and integrating with the External Power Grid (EPG). The phases of planning, design, installation and operation of an HRES are considered as interconnected complex tasks needing appropriate Requirement Analysis. The architecture and energy balance of the HRES are introduced, in which an important role is played by the Energy Gateway Station. Characteristics of exploiting the potential of solar and wind energy, and customer usage patterns in such communities, are considered. Types of requests for energy use or its supply to the EPG, as well as the principles of the operational logic of the HRES, are introduced. Criteria for assessing the effectiveness of an HRES are briefly discussed.

Algirdas Pakštas, Olha Shulyma, Vira Shendryk
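The energy balance between generation, the storage bank and the EPG can be sketched as a single time-step update. The units, round-trip efficiency and dispatch priority (storage before grid) are illustrative assumptions, not the paper’s model:

```python
def hres_step(p_solar, p_wind, load, soc, capacity, eta=0.9):
    """One time-step energy balance of a hypothetical HRES (all values in kWh).

    Surplus charges the storage bank (with charging efficiency eta); any
    remainder is exported to the external power grid (EPG). Deficits are
    covered first from storage, then by grid import.
    Returns (new state of charge, grid exchange: positive = export).
    """
    balance = p_solar + p_wind - load
    if balance >= 0:
        charge = min(balance * eta, capacity - soc)   # energy actually stored
        soc += charge
        grid = balance - charge / eta                 # leftover surplus exported
    else:
        discharge = min(-balance, soc)                # draw from storage first
        soc -= discharge
        grid = balance + discharge                    # negative => import from EPG
    return soc, grid
```

For example, a 4 kWh surplus with an empty 10 kWh bank stores 3.6 kWh and exports nothing; a later 2 kWh deficit with 0.6 kWh stored imports the missing 1.4 kWh.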

Business Intelligence for Information and Software Systems: Special Session on Intelligent Methods for Data Analysis and Computer Aided Software Engineering

Frontmatter
Predicting User Identity and Personality Traits from Mobile Sensor Data

Several types of information can be revealed from data provided by mobile sensors. In this study, touchscreen and accelerometer data were collected from a group of 98 volunteers while they filled in the Eysenck Personality Questionnaire on a tablet computer. Subjects performed swipes on the touchscreen in order to answer the questions. Touchscreen swipes have already been used for user authentication; we show that our constrained swipes contain enough user-specific information to be utilized for the same task. Moreover, we have studied the predictability of personality traits such as extraversion and neuroticism from the collected data. Extraversion was found to be the most reliably predictable personality trait.

Margit Antal, László Zsolt Szabó, Győző Nemes
Towards a Comprehensive Formal Model for Business Processes

Providing a highly reliable specification of business processes, so that enterprises can operate more effectively and efficiently, is an active research subject. However, the proposed approaches use limited models. WS-BPEL (or BPEL for short) is the best-known and most widely used orchestration language for describing Web service compositions in the form of business processes. However, BPEL lacks a formal semantics. This paper introduces the High-Level Time Open Workflow Nets with Action Duration model (HL-DToN), an extension of Time Petri Nets with Action Duration (DTPN), for tackling as many aspects of business processes as possible. We use this model either to specify WS-BPEL processes or to specify business processes directly.

Khalil Mecheraoui, Nabil Belala, Djamel Eddine Saïdouni
Adaptive Control of the Metalworking Technology Systems Operation Based on the Forecast of the Actual Resource of the Cutting Tool

The article deals with an adaptive control algorithm for the operation of metalworking industrial process systems, based on comparing the machining time of the part required by the process with a forecast of the actual resource of the cutting tool. The forecast is carried out continuously during the entire machining of the part, by means of software specially developed for this purpose. The program implements a method of parametric identification of a mathematical model of the sound trend accompanying the metalworking process. The sound is recorded via a microphone installed near the cutting area. Because the microphone is far closer to the source of the useful cutting signal than to the noise of the surrounding equipment, the distorting influence of this noise on the forecasting results is greatly reduced. The contactless sound recording method also avoids the drawbacks of the contact techniques of vibration or acoustic emission measurement traditionally used in adaptive control. The adaptive control algorithm considered in the article makes it possible to exhaustively use the cutting properties of the tool and the operational capabilities of the machine, while ensuring the required manufacturing quality of a part.

Volodymyr Nahornyi, Olga Aleksenko, Natalia Fedotova
Dynamic Business Process Simulation - A Rule- and Context-Based Approach

Business processes are dynamic by nature, because of changes in their environment and in customers’ needs. Therefore, their supporting information systems should be dynamic as well. Although a number of approaches to dynamic business process (DBP) modelling have been proposed, they lack theory, implementation, or both. This paper proposes a novel rule- and context-based approach to DBP modelling and simulation. The main idea of the approach is that the sequence of activities in a process depends on the environment, which is described as the context of a BP, and that activities are selected according to the defined rules at BP instance runtime. The effectiveness of the proposed approach is evaluated with a simulation scenario, and the simulation results indicate that the proposed approach is suitable for DBP simulation.

Diana Kalibatiene, Olegas Vasilecas
Research in High Frequency Trading and Pairs Selection Algorithm with Baltic Region Stocks

Pair trading is a popular strategy in which profit arises from pricing inefficiencies between stocks. The idea is simple: find two stocks that move together and take long/short positions when they diverge abnormally, hoping that the prices will converge in the future. In the last few years, high frequency trading in milliseconds or nanoseconds has drawn the attention not only of financial players but also of researchers and engineers. The main objective of this research is to test three different statistical arbitrage strategies using high frequency trading with 14 OMX Baltic market stocks and to measure their efficiency and risks. One strategy used in this paper was first implemented by M.S. Perlini, another by J.F. Caldeira and G.V. Moura, and the last was presented by D. Herlemont. Together with the strategies, a pair selection algorithm is presented. All three strategies were modified in order to work with high frequency data. Finally, the strategies were measured according to the profit they generated.

Mantas Vaitonis, Saulius Masteika
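A common core of such statistical arbitrage strategies (a generic sketch, not the specific Perlini, Caldeira-Moura or Herlemont variants) is to open a long/short position when the pair’s spread diverges beyond a z-score threshold and close it on mean reversion:

```python
import statistics

def zscore_signals(spread, entry=2.0, exit=0.5):
    """Generate positions (-1 short, 0 flat, 1 long) on a price-spread series.

    Open when the spread's z-score diverges beyond `entry` standard
    deviations; close when it reverts inside `exit`. For simplicity the
    mean and deviation are computed over the whole series; a live system
    would use a rolling window.
    """
    mu = statistics.fmean(spread)
    sd = statistics.stdev(spread)
    signals, position = [], 0
    for s in spread:
        z = (s - mu) / sd
        if position == 0 and z > entry:
            position = -1            # spread abnormally high: short it
        elif position == 0 and z < -entry:
            position = 1             # spread abnormally low: go long
        elif position != 0 and abs(z) < exit:
            position = 0             # reversion: close the position
        signals.append(position)
    return signals
```

High-frequency use mainly changes the data feed and window handling, not this core entry/exit logic.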
Shared Resource Model for Allocating Resources to Activities in BP Simulation

Resource shareability among simultaneous business process (BP) activities is one of the main issues in BP simulation. Without realistic simulation of shared resource usage, it is difficult to obtain accurate BP performance measures. This paper presents an approach to modelling shareable resources and a set of rules to handle resource allocation and control resource usage in rule-based BP simulation. The presented model and set of constraints are formalized using UML/OCL, allowing validation of the proof of concept. The approach has been implemented in a prototype BP simulation tool, and an example highlighting the benefits of applying the approach in BP simulation is presented.

Olegas Vasilecas, Kestutis Normantas, Toma Rusinaite, Titas Savickas, Tadas Vysockis
Parallel Computing for Dimensionality Reduction

Big data analytics makes it possible to uncover hidden and useful information for better decisions. Our research area covers big data visualization based on dimensionality reduction methods. This requires time- and resource-consuming processes, so in this paper we look for computing methods and environments that allow the tasks to be executed and the results obtained faster. In this research we use the random projection method to reduce the dimensionality of the initial data. We investigate how parallel computing based on OpenMP and MPI technologies can increase the performance of these dimensionality reduction processes. The results show a significant improvement in performance when executing MPI code on a computer cluster. However, a greater number of cores does not always lead to higher speed.

Jelena Zubova, Marius Liutvinavicius, Olga Kurasova
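The random projection step itself can be sketched as multiplying the data by a Gaussian random matrix (a sequential sketch; the paper’s contribution is parallelizing such computations with OpenMP/MPI, which is not shown here):

```python
import math
import random

def random_projection(data, k, seed=0):
    """Project n-dimensional rows onto k dimensions using a Gaussian
    random matrix scaled by 1/sqrt(k) (Johnson-Lindenstrauss style),
    so pairwise distances are approximately preserved."""
    rng = random.Random(seed)
    n = len(data[0])
    R = [[rng.gauss(0, 1) / math.sqrt(k) for _ in range(k)] for _ in range(n)]
    return [[sum(row[i] * R[i][j] for i in range(n)) for j in range(k)]
            for row in data]
```

The triple loop in the return statement is exactly the part that parallel OpenMP/MPI code would distribute across cores or cluster nodes.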
Lessons Learned from Tool Integration with OSLC

Today’s embedded and cyber-physical systems are getting more connected and complex. One main challenge during development is the often loose coupling between engineering tools, which can lead to inconsistencies and errors due to manual transfer and duplication of data. Open formats and specifications raise expectations for seamlessly integrated tool chains for systems engineering, combining best-of-breed technologies and tools of different tool vendors. The ARTEMIS JU project CRYSTAL aims for a harmonized interoperability specification (IOS) incorporating various open specifications and standards such as OSLC (Open Services for Lifecycle Collaboration), ReqIF (Requirements Interchange Format) and FMI (Functional Mockup Interface) to support seamless model-based systems engineering. This paper focuses on lifecycle integration using OSLC. We report challenges we experienced in the implementation of an automotive and a healthcare use case. The paper should support others in deciding whether OSLC is an appropriate technology and in overcoming common challenges in the implementation of OSLC adapters.

Andrea Leitner, Beate Herbst, Roland Mathijssen
A Model-Driven Approach to Adapt SysML Blocks

Reusing and adapting existing components is the central topic of component-based development. The major differences between existing approaches concern the models used to represent the components and the level of detail given to generate the adapters. In this paper, we present our approach, which is based on hierarchy to generate the adapters. Our components are modelled using SysML blocks, and their interaction protocols are modelled using SysML Sequence Diagrams (SDs). We have used coloured Petri nets as a formal model to define the adaptation rules, and we have relied on meta-modelling and model transformation to implement these rules. We illustrate our approach through a case study.

Hamida Bouaziz, Samir Chouali, Ahmed Hammad, Hassan Mountassir
BPMN Decision Footprint: Towards Decision Harmony Along BI Process

Nowadays, one of the challenges companies face is to benefit from their Business Intelligence (BI) projects and not to see huge investments ruined. To address problems related to the modelling of these projects and the management of their life-cycle, Enterprise Architecture (EA) frameworks are considered an attractive alternative to strengthen Business-IT alignment. Business Process Model and Notation (BPMN) is a pillar of these frameworks for minimizing the gap between the expectations of managers and the delivered technical solutions. The importance of decision-making in business processes has led the Object Management Group (OMG) to announce its new standard: Decision Model and Notation (DMN). In this paper, we propose the BPMN Decision Footprint (BPMNDF), a coupling of BPMN with a novel DMN version. This enhancement has an additional component, a repository of all decisions along the process, used to ensure the harmony of decision-making.

Riadh Ghlala, Zahra Kodia Aouina, Lamjed Ben Said
A Cost-Aware and Workload-Based Index Advisor for Columnar In-Memory Databases

Optimal index configurations for in-memory databases differ significantly from configurations for their traditional disk-based counterparts. Operations like full column scans, which were previously prohibitively expensive in disk-based and row-oriented databases, are now computationally feasible on columnar main memory-resident data structures and even outperform index-based accesses in many cases. Furthermore, index selection criteria are different for in-memory databases, since maintenance costs are often lower while memory footprint considerations have become increasingly important. In this paper, we introduce a workload-based and cost-aware index advisor tailored for columnar in-memory databases in mixed workload environments. We apply a memory traffic-driven model to estimate the efficiency of each index and to give a system-wide overview of the indices that are cost-ineffective with respect to their size and performance improvement. We also present our Index Advisor Cockpit applied to a real-world live production enterprise system of a Global 2000 company.

Martin Boissier, Timo Djürken, Rainer Schlosser, Martin Faust
S4J - Integrating SQL into Java at Compiler-Level

Object-oriented languages and relational database systems dominate the design of modern enterprise information systems. However, their interoperability has caused problems ever since. In this paper, we present an approach to integrating SQL into the Java programming language. The integration is done at compiler level, making SQL a first-class citizen of the programming language, including object-awareness and providing validation possibilities that cover, e.g., query correctness and type compatibility. In contrast to existing solutions, these validations are carried out during compilation. To evaluate our approach, we implemented a standard business process (Order-To-Pay) using Hibernate, JDBC, and S4J. We compare each implementation in terms of query performance, code size, and code complexity. The evaluation shows that the integration of SQL into Java reduces code size and complexity while maintaining equal or better performance compared to competitive approaches.

Keven Richly, Martin Lorenz, Sebastian Oergel

Software Engineering: Special Session on Intelligent Systems and Software Engineering Advances

Frontmatter
A New Estimator of the Mahalanobis Distance and its Application to Classification Error Rate Estimation

A well-known category of classification error rate estimators is the so-called parametric error rate estimators. These estimators are typically expressed as functions of the training sample size, the dimensionality of the observation vector and the Mahalanobis distance between the classes. However, all parametric classification error rate estimators are biased, and the main source of this bias is the estimate of the Mahalanobis distance. In this paper we propose a new Mahalanobis distance estimation method designed for use in parametric classification error rate estimators. Experiments with real-world and synthetic data sets show that the new estimator helps to reduce the bias of the most common parametric classification error rate estimators. Additionally, non-parametric classification error rate estimators, such as resubstitution, repeated 10-fold cross-validation and leave-one-out, are outperformed (in terms of root-mean-square error) by parametric estimators that use the new estimates of the Mahalanobis distance.

Mindaugas Gvardinskas
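To illustrate why the Mahalanobis distance drives parametric error rate estimation: for two equiprobable Gaussian classes with common covariance, the asymptotic error of the optimal linear classifier is Φ(−δ/2). The bias correction below is a textbook-style plug-in adjustment, not the estimator proposed in the paper:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def parametric_error_rate(delta):
    """Asymptotic error rate for two equiprobable Gaussian classes with
    common covariance, separated by Mahalanobis distance delta."""
    return normal_cdf(-delta / 2.0)

def corrected_delta_sq(d2_plugin, p, n1, n2):
    """Classical bias-corrected plug-in for the squared Mahalanobis
    distance (NOT the paper's new estimator): shrink the sample D^2 and
    subtract the dimensionality/sample-size inflation term, for
    dimensionality p and class sample sizes n1, n2."""
    c = (n1 + n2 - p - 3) / (n1 + n2 - 2)
    return max(0.0, c * d2_plugin - p * (1.0 / n1 + 1.0 / n2))
```

Because the plug-in D² overestimates δ² in small samples, the uncorrected error estimate is optimistically biased, which is exactly the bias the paper targets.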
A Bag-of-Features Algorithm for Applications Using a NoSQL Database

In this paper we present a Bag-of-Words (also known as Bag-of-Features) method developed for implementation in NoSQL databases. When designing this algorithm, special attention was paid to facilitating its implementation and reducing the number of computations to a minimum, so as to use what the database engine has to offer to the maximum. The algorithm is presented using an example of image storing and retrieval. In this case it proves necessary to use an additional preprocessing step, during which image characteristic features are retrieved, and to use a clustering algorithm in order to create a dictionary. We present our own k-means algorithm, which automatically selects the number of clusters. The algorithm does not comprise any computationally complicated classification algorithms; instead it uses the majority vote method. This makes it possible to significantly simplify computations and to use the JavaScript language available in a common NoSQL database.

Marcin Gabryel
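The quantize-and-vote pipeline of a Bag-of-Features classifier can be sketched as follows (the dictionary and feature values are toy data; the paper’s automatic selection of the number of clusters is not reproduced):

```python
def nearest(word, dictionary):
    """Index of the dictionary entry closest to `word` (squared Euclidean)."""
    return min(range(len(dictionary)),
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(word, dictionary[i])))

def bof_histogram(features, dictionary):
    """Quantize local image features against a visual dictionary into a
    frequency histogram - the Bag-of-Features representation."""
    hist = [0] * len(dictionary)
    for f in features:
        hist[nearest(f, dictionary)] += 1
    return hist

def majority_vote(labels):
    """Predicted label = most frequent label among retrieved matches."""
    return max(set(labels), key=labels.count)
```

Each step is a simple scan or count, which is why the method maps well onto the map-reduce-style JavaScript queries a NoSQL engine offers.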
Novel Recursive Fast Sort Algorithm

Sorting algorithms are important procedures for ordering data. In this paper, the author describes a new recursive version of the fast sort algorithm for large data sets. The recursive fast sort algorithm was subjected to performance tests, which showed its validity. It is also discussed whether the non-recursive version is faster than the recursive one.

Zbigniew Marszałek
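As a generic illustration of a recursive sorting procedure (plain quicksort; this is not Marszałek’s fast sort algorithm, whose details are in the paper):

```python
def quicksort(a):
    """Plain recursive quicksort: partition around a pivot, recurse on
    the smaller-than and greater-than partitions, concatenate."""
    if len(a) <= 1:
        return a
    pivot = a[len(a) // 2]
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```

The recursive-versus-iterative question the abstract raises is about exactly such call stacks: an iterative version replaces the recursion with an explicit stack of subranges.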
A Modified Electromagnetic-Like Mechanism for Rough Set Attribute Reduction

Reducing redundant attributes is an important issue in data classification and knowledge discovery. This paper investigates a modified and adapted continuous optimization algorithm for solving a discrete optimization problem. To achieve this aim, a modified electromagnetic-like mechanism (MEM) is adapted, for the first time, to find the minimal attribute set based on rough sets. The procedure of MEM works on the attraction-repulsion mechanism of electromagnetic theory; it memorizes and utilizes the histories of the charges and locations of points, and it is able to escape from local optima. The MEM is adapted with a new discretization function and tested on well-known UCI datasets. The experimental results show that the proposed algorithm is able to find acceptable results when compared with the general versions of the EM, GA and PSO algorithms.

Majid Abdolrazzagh-Nezhad, Shaghayegh Izadpanah
Application of Real Ant Colony Optimization Algorithm to Solve Space Fractional Heat Conduction Inverse Problem

In this paper, an inverse problem for the space fractional heat conduction equation is investigated. In order to reconstruct the heat transfer coefficient, a functional defining the error of the approximate solution is created. To minimize this functional, the Real Ant Colony Optimization algorithm is used. The paper presents examples illustrating the accuracy and stability of the presented algorithm.

Rafał Brociek, Damian Słota
A New Classifier Based on the Dual Indiscernibility Matrix

A new approach to classifier synthesis was proposed by Polkowski, and in this work we propose an implementation of this idea. The idea is based on the use of a dual indiscernibility matrix, which makes it possible to determine, for each test object in the data, pairs of training objects which in a sense cover the given test object. The family of pairs best covering the given object pass their decisions to majority voting on the decision for the test object. We present results obtained by our classifier on standard data from the UCI Repository and compare them with results obtained by means of k-NN and Bayes classifiers. The results are validated by multiple cross-validation. We find our classifier on par with the k-NN and Bayes classifiers. Section 1, the Introduction, gives basic definitions of the notions applied and the proposed method; Sect. 2 presents the results of experiments with real data from the UCI Repository; the last Sect. 3 is devoted to a discussion of results and concluding remarks.

Piotr Artiemjew, Bartosz A. Nowak, Lech T. Polkowski
Introduction to the Model of the Active Assistance System for Elder and Disabled People

In this article we present assumptions for the development of a novel model of an active system that can assist elderly and disabled people. In the following sections we discuss the literature and propose a structure for decision support and data processing on several levels: voice and speech processing, image processing based on proposed descriptors, and routing and positioning. For each of these aspects, the pros and cons that may be faced in the development process are described, together with potential preventive actions.

Dawid Połap, Marcin Woźniak
Novel Image Correction Method Based on Swarm Intelligence Approach

In this article an approach toward a novel method for image feature correction is proposed. A swarm intelligence technique developed for the input image is applied to improve brightness, contrast, sharpness and gamma correction. The following sections present the proposed model of the correction techniques with the applied swarm intelligence approach. Experimental results on a set of test images are presented together with a discussion of the achieved improvements.

Marcin Woźniak
A Comparison of Mining Incomplete and Inconsistent Data

We present experimental results on a comparison of incompleteness and inconsistency. Our experiments were conducted on 141 data sets, including 71 incomplete and 62 inconsistent data sets, created from eight original numerical data sets. We used the Modified Learning from Examples Module version 2 (MLEM2) rule induction algorithm for data mining. Among the eight types of data sets combined with the three kinds of probabilistic approximations used in the experiments, in 12 out of 24 combinations the error rate, computed via ten-fold cross-validation, was smaller for inconsistent data (two-tailed test, 5 % significance level). For one data set, combined with all three probabilistic approximations, the error rate was smaller for incomplete data. For the remaining nine combinations the difference in performance was statistically insignificant. Thus, we may claim that there is some experimental evidence that incompleteness is generally worse than inconsistency for data mining.
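As an illustrative sketch (not the MLEM2 implementation used in the paper), a probabilistic approximation of a concept in an inconsistent decision table can be formed as the union of attribute-value blocks whose conditional probability of the concept reaches a threshold alpha; alpha = 1 yields the lower approximation and any small positive alpha the upper approximation:

```python
from collections import defaultdict

def probabilistic_approximation(cases, concept, alpha):
    """Union of attribute-value blocks B with conditional probability
    P(concept | B) >= alpha; alpha = 1 gives the lower approximation,
    a small positive alpha the upper approximation."""
    blocks = defaultdict(set)
    for case_id, attributes in cases.items():
        blocks[attributes].add(case_id)
    approximation = set()
    for block in blocks.values():
        if len(block & concept) / len(block) >= alpha:
            approximation |= block
    return approximation

# Tiny inconsistent table: cases 1 and 2 agree on all attributes
# but differ in the decision, so their block is mixed.
cases = {1: ("low",), 2: ("low",), 3: ("high",)}
concept = {1, 3}  # cases labelled "yes"
print(sorted(probabilistic_approximation(cases, concept, 1.0)))    # [3]
print(sorted(probabilistic_approximation(cases, concept, 0.001)))  # [1, 2, 3]
```

With intermediate alpha values one obtains the parameterized approximations between these two extremes that the experiments vary.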

Patrick G. Clark, Cheng Gao, Jerzy W. Grzymala-Busse
Transient Solution for Queue-Size Distribution in a Certain Finite-Buffer Model with Server Working Vacations

A finite-buffer queueing model with Poisson arrivals and exponential processing times is investigated. Every time the system empties, the server begins a generally distributed single working vacation period, during which service is provided at another (slower) rate. After the completion of the vacation period, processing continues normally, at the original speed. The next working vacation period is initialized the next time the system becomes empty, and so on. A system of Volterra-type integral equations for the transient queue-size distribution, conditioned on the initial level of buffer saturation, is built. The solution of the corresponding system, written for Laplace transforms, is given in compact form using the linear algebraic approach and the corresponding result obtained for the ordinary model (without the working vacation regime). Numerical examples are attached as well.

Wojciech M. Kempa, Martyna Kobielnik
Importance of Neighborhood Parameters During Clustering of Bathymetric Data Using Neural Network

The main component with a significant impact on the safety of navigation is information about the depth of a water area. The commonly used solution for depth measurement is the use of echosounders. One of the problems associated with bathymetric measurements is the recording of very large amounts of data. The fundamental objective of the author's research is the implementation of a new reduction method for geodata used in the creation of bathymetric maps. The main feature of the new reduction algorithm is that the position of a point and the depth value at this point are not interpolated values. In this article, the author focuses on the importance of neighborhood parameters during clustering of bathymetric data using a neural network (a self-organizing map), which is the first stage of the new method. When applying Kohonen's algorithm, the author focuses on two parameters: topology and initial neighborhood size. During the tests, several populations were created with the number of clusters equal to 25 for data collected from an area of 625 square meters (the dataset consists of 28911 XYZ points). In the next step, statistics were calculated and the results were presented in two forms: tabular and as spatial visualizations. The final step was their comprehensive analysis.
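A minimal sketch of the role played by the initial neighborhood size in Kohonen's algorithm, under simplifying assumptions (a 1-D map topology, linear decay of both the learning rate and the neighborhood size; the paper's actual configuration differs):

```python
import math
import random

def train_som(points, n_clusters, epochs=50, sigma0=2.0, lr0=0.5):
    """1-D Kohonen map over (x, y, depth) samples. sigma0 is the
    initial neighborhood size: large values drag many neighboring
    cluster centers toward each winning sample, small values move
    mainly the winner itself. Sigma and the learning rate decay linearly."""
    random.seed(0)
    weights = [list(random.choice(points)) for _ in range(n_clusters)]
    for epoch in range(epochs):
        frac = 1.0 - epoch / epochs
        sigma = sigma0 * frac + 1e-9   # avoid division by zero
        lr = lr0 * frac
        for p in points:
            dists = [sum((w - v) ** 2 for w, v in zip(node, p)) for node in weights]
            winner = dists.index(min(dists))
            for i, node in enumerate(weights):
                h = math.exp(-((i - winner) ** 2) / (2 * sigma ** 2))
                for d in range(len(node)):
                    node[d] += lr * h * (p[d] - node[d])
    return weights

# Two well-separated fake soundings; expect the centers to stay
# within the bounding box of the data.
soundings = [(0.0, 0.0, 5.0), (0.0, 1.0, 5.2), (10.0, 10.0, 20.0), (10.0, 11.0, 19.8)]
print(train_som(soundings, n_clusters=2))
```

Because each update is a convex combination of a center and a data point, the learned cluster centers remain within the range of the measured coordinates and depths, which is the property that lets reduced points keep true (non-interpolated) values nearby.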

Marta Wlodarczyk-Sielicka
A Cloud Oriented Support for Motion Detection in Video Surveillance Systems Using Computational Intelligence

This paper proposes a cloud-oriented architecture for video analysis and motion detection. The core algorithm is based on a typical computational intelligence method, the Firefly Algorithm, used jointly with a Sobel filter in order to reduce the analysis complexity and the required computational effort. The developed system is completely self-sufficient as well as highly scalable and expandable on demand. To achieve this result, the architecture has been accurately engineered by means of design patterns and structured as a layered application. Therefore the developed computational core is able to manage high-level interfaces for the cloud environment as well as to take advantage of hardware-level optimizations in order to maximize its performance and make it suitable for real-time analysis of continuous video streams coming from multiple sources.

Christian Napoli, Emiliano Tramontana
Study on Transient Queueing Delay in a Single-Channel Queueing Model with Setup and Closedown Times

A single-channel queueing model with finite buffer capacity, a Poisson arrival stream and generally distributed processing times is considered. After each busy period the service station is switched off, but this operation requires a randomly distributed closedown time. Similarly, after the idle time, the first service in a new busy period is preceded by a random setup time, during which processing is still suspended and the server achieves full readiness for the service process. A system of integral equations for the transient conditional queueing delay distribution is derived using the idea of an embedded Markov chain and the formula of total probability. The solution of the corresponding system, written for Laplace transforms, is obtained via the linear algebraic technique. Numerical examples are attached as well.

Wojciech M. Kempa, Iwona Paprocka, Krzysztof Kalinowski, Cezary Grabowik, Damian Krenczyk

Information Technology Applications: Special Session on Smart e-Learning Technologies and Applications

Frontmatter
On Suitability Index to Create Optimal Personalised Learning Packages

The paper presents a novel probabilistic method for creating personalised learning packages. The method is based on the suitability of learning components to students' needs according to their learning styles. The authors use the Felder-Silverman Learning Styles Model and an example of the Inquiry Based Learning (IBL) method. An expert evaluation method based on trapezoidal fuzzy numbers is applied in the research to obtain numerical values of the suitability of learning styles and learning activities. Personalised learning packages should consist of learning components (learning objects, learning activities and learning environments) that are optimal (i.e. the most suitable) for particular students according to their learning styles; "optimal" means "having the highest suitability index". An original probabilistic method is applied to establish not only students' learning styles but also the probabilistic suitability of learning activities to students' learning styles. An example of a personalised learning package using IBL activities is presented in more detail.
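A hedged sketch of how such a suitability index can be computed: trapezoidal fuzzy expert ratings are defuzzified by the standard centre-of-gravity formula and weighted by the student's learning-style probabilities. All numbers below are hypothetical; the paper's expert ratings and style model are richer.

```python
def centroid(tfn):
    """Centre-of-gravity defuzzification of a trapezoidal fuzzy
    number (a, b, c, d) with a <= b <= c <= d."""
    a, b, c, d = tfn
    if a == d:
        return float(a)
    return (d * d + c * c + c * d - a * a - b * b - a * b) / (3.0 * (d + c - a - b))

def suitability_index(style_probabilities, ratings):
    """Expected suitability of one learning component: the sum over
    learning styles of P(style) * defuzzified expert rating."""
    return sum(p * centroid(ratings[s]) for s, p in style_probabilities.items())

# Hypothetical probabilistic learning-style profile and fuzzy ratings.
styles = {"visual": 0.7, "verbal": 0.3}
ratings = {"visual": (0.6, 0.7, 0.8, 0.9), "verbal": (0.1, 0.2, 0.3, 0.4)}
print(suitability_index(styles, ratings))  # 0.7 * 0.75 + 0.3 * 0.25, i.e. about 0.6
```

The component with the highest index for a given student would then be placed in that student's personalised package.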

Eugenijus Kurilovas, Julija Kurilova, Tomas Andruskevic
Technological Challenges for Learning Objects in MOOCs Design

The paper presents a discussion of the technological challenges for the design of learning objects (LOs) and massive open online courses (MOOCs). The research results suggest to educators a model for modernizing or designing new educational content using ICT tools and environments. A new approach is suggested for the online design of different LOs in a learning object repository developed for LO exchange and sharing and for assuring the reusability of educational content.

Daina Gudonienė, Rūta Dapkūnaitė, Danguolė Rutkauskienė, Vytautas Štuikys, Svitlana Kalashnikova
Affective Engagement to Virtual and Live Lectures

Affective engagement with university lectures needs external stimulation. This paper presents an empirical study on changes in students' engagement with live and virtual lectures under complex, picture, video and human body movement stimuli. Each experiment lasted 30 min and was divided into 5 periods (10-5-5-5-5 min). At the end of each period a different stimulus was exposed: a human interrupted the lecture; the instructor presented slides; video materials were shown; and intensive body movements were performed. Stimuli and study materials for the live and virtual avatar-based lectures were developed following the same experimental lecture model. The avatar was created and animated with CrazyTalk software. The experimental group consisted of 10 students, aged 20-24, 4 females and 6 males. Changes of attention to the lecture materials were measured using three indicators: affective regulation was monitored throughout the lecture with a Muse portable headband device; cognitive self-regulation was measured before the lecture using a questionnaire; and behavioral regulation was observed via video recording of the entire lecture period. The highest concentration, attentiveness and active engagement were observed during the avatar-based lecture with the complex human stimuli, while in the live lecture all the stimuli activated approximately the same response. Image stimuli activated different reactions: in the live lecture they slightly raised student attentiveness, while in the avatar-based lecture attentiveness was lowered. Reactions to video stimuli in both experimental groups were the opposite of those to image stimuli. These research results can guide instructors in constructing training materials and implementing additional stimuli to grab students' attention. We recommend mixing video and live lectures and using stimuli that evoke and strengthen active engagement.

Judita Kasperiuniene, Meet Jariwala, Egidijus Vaskevicius, Saulius Satkauskas

Information Technology Applications: Special Session on Language Technologies

Frontmatter
Topic Classification Problem Solving for Morphologically Complex Languages

In this paper we present a topic classification task for the morphologically complex Lithuanian and Russian languages, using popular supervised machine learning techniques. In our research we experimentally investigated two text classification methods and a wide variety of feature types covering different levels of abstraction: character, lexical, and morpho-syntactic. In order to obtain comparable results for both languages, we kept the experimental conditions as similar as possible: the datasets were composed of normative texts taken from news portals, contained similar topics, and had the same number of texts in each topic. The best results (~0.86 accuracy) were achieved with the Support Vector Machine method and token lemmas as the feature representation type. The character feature type, capturing relevant patterns of the complex inflectional morphology without any external morphological tools, was the second best. Since these findings hold for both Lithuanian and Russian, we assume they should hold for the entire group of Baltic and Slavic languages.
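A minimal sketch of the character-level feature type mentioned above (the classifier training itself is omitted): character n-grams with boundary padding pick up inflectional endings without any morphological analyser. The example word is illustrative, not from the paper's datasets.

```python
from collections import Counter

def char_ngrams(text, n=3):
    """Character n-gram counts with boundary padding; such features
    capture inflectional endings (e.g. case suffixes) without an
    external morphological tool."""
    padded = " " + text.lower() + " "
    return Counter(padded[i:i + n] for i in range(len(padded) - n + 1))

# For the Lithuanian noun "namas", the trigram "mas" reflects the
# nominative singular ending.
print(char_ngrams("namas"))
```

Such counts, accumulated over a document, form the feature vector fed to the SVM.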

Jurgita Kapočiūtė-Dzikienė, Tomas Krilavičius
Vector Space Representations of Documents in Classifying Finnish Social Media Texts

Computational analysis of linguistic data requires that texts be transformed into numeric representations. The aim of this research is to evaluate different methods for building vector representations of text documents from social media. The methods are compared with respect to their performance in a classification task. Namely, the traditional count-based term frequency-inverse document frequency (TFIDF) is compared to semantic distributed word embedding representations. Unlike previous research, we investigate document representations in the context of morphologically rich Finnish. Based on the results, we suggest a framework for building vector space representations of social media texts, applicable to language technologies for morphologically rich languages. In the current study, lemmatization of tokens increased classification accuracy, while lexical filtering generally hindered performance. Finally, we report that distributed embeddings and TFIDF perform at comparable levels on our data.
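The count-based baseline can be sketched in a few lines (an illustrative implementation with a plain log IDF, not the authors' exact weighting scheme); the toy documents below use invented Finnish lemmas:

```python
import math
from collections import Counter

def tfidf(documents):
    """TFIDF vectors for tokenised (ideally lemmatised) documents:
    term frequency scaled by log(N / document frequency)."""
    df = Counter()
    for doc in documents:
        df.update(set(doc))  # count each term once per document
    n = len(documents)
    vectors = []
    for doc in documents:
        tf = Counter(doc)
        vectors.append({t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()})
    return vectors

docs = [["kissa", "istua"], ["kissa", "juosta"]]  # toy lemmatised documents
print(tfidf(docs)[0])  # the lemma shared by all documents gets weight 0
```

Lemmatising before counting, as the study recommends, collapses the many inflected forms of a Finnish word into one dimension of the vector.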

Viljami Venekoski, Samir Puuska, Jouko Vankka
Text Type Differentiation Based on the Structural Properties of Language Networks

In this paper, co-occurrence language network measures for literary and legal texts are compared on the global and local scales. Our dataset consists of four legal texts and four short novellas, all written in English. For each text we construct one directed and weighted network, where the weight of a link between two nodes represents the overall co-occurrence frequency of the corresponding words. For comparison, we choose four literature-law pairs of texts with approximately the same number of distinct words. The aim of this experiment was to investigate how complex network measures behave on different text structures and which of them are sensitive to text type. Our results show that, on the global scale, average strength is the only measure that exhibits uniform behaviour reflecting the differences in textual complexity. In general, global measures may not be well suited to discriminating between the two genres. However, from the local perspective, rank plots of in- and out-selectivity (average node strength) indicate that there are noticeable structural differences between legal texts and literature.
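The network construction and the selectivity measure can be sketched as follows (a simplified illustration: adjacent-token co-occurrence, a toy sentence rather than the paper's corpora):

```python
from collections import defaultdict

def cooccurrence_network(tokens):
    """Directed weighted network: a link u -> v for every adjacent
    token pair, weighted by its co-occurrence frequency."""
    weights = defaultdict(int)
    for u, v in zip(tokens, tokens[1:]):
        weights[(u, v)] += 1
    return weights

def out_selectivity(weights, node):
    """Average outgoing link weight: out-strength / out-degree."""
    w = [c for (u, _), c in weights.items() if u == node]
    return sum(w) / len(w) if w else 0.0

net = cooccurrence_network("the cat sat the cat ran".split())
print(out_selectivity(net, "the"))  # 2.0: one out-link, used twice
print(out_selectivity(net, "cat"))  # 1.0: two out-links, used once each
```

High selectivity thus flags words that repeat the same neighbours, the kind of formulaic pattern that distinguishes legal prose from fiction in the rank plots.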

Sanda Martinčić-Ipšić, Tanja Miličić, Ana Meštrović
Running Out of Words: How Similar User Stories Can Help to Elaborate Individual Natural Language Requirement Descriptions

While requirements focus on how the user interacts with the system, user stories concentrate on the purpose of software features. In practice, however, functional requirements are also described in user stories. For this reason, requirements clarification is needed, especially when requirements are written in natural language and do not stick to any template (e.g., "as an X, I want Y so that Z ..."). Moreover, there is a lot of implicit knowledge that is not expressed in words. As a result, natural language requirements descriptions may suffer from incompleteness. Existing approaches try to formalize natural language or focus only on entirely missing, not on deficient, requirements. In this paper, we therefore present an approach to detect knowledge gaps in user-generated software requirements for interactive requirement clarification: we provide tailored suggestions to users in order to obtain more precise descriptions. For this purpose, we identify not fully instantiated predicate argument structures in requirements written in natural language and use context information to determine what was meant by the user.

Frederik S. Bäumer, Michaela Geierhos
Link Prediction on Tweets’ Content

In this paper we test various weighted local similarity network measures for predicting the future content of tweets. Our aim is to determine the most suitable measure for predicting new content in tweets and subsequently to explore the spreading of positively and negatively oriented content on Twitter. Tweets in the English language were collected via the Twitter API depending on their content; that is, we searched for tweets containing specific predefined keywords from different domains, positive or negative. From the gathered tweets a weighted complex network of words is formed, where nodes represent words and a link between two nodes exists if the two words co-occur in the same tweet, while the weight denotes the co-occurrence frequency. For the link prediction task we study five local similarity network measures commonly used in unweighted networks (Common Neighbors, Jaccard Coefficient, Preferential Attachment, Adamic Adar and Resource Allocation Index), which we have adapted to weighted networks. Finally, we evaluate all the modified measures in terms of the precision of the predicted links. The obtained results suggest that the Weighted Resource Allocation Index has the best potential for the prediction of content in tweets.
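One common weighted generalisation of the best-performing measure can be sketched as below; the paper's exact adaptation may differ, and the tiny word network is invented for illustration:

```python
def weighted_resource_allocation(graph, x, y):
    """Weighted Resource Allocation index for the node pair (x, y):
    sum over common neighbours z of (w(x,z) + w(y,z)) / s(z),
    where s(z) is the strength (total incident weight) of z."""
    common = set(graph[x]) & set(graph[y])
    return sum(
        (graph[x][z] + graph[y][z]) / sum(graph[z].values())
        for z in common
    )

# Undirected toy word network as a symmetric dict-of-dicts.
graph = {
    "happy": {"day": 2},
    "sad": {"day": 1},
    "day": {"happy": 2, "sad": 1},
}
print(weighted_resource_allocation(graph, "happy", "sad"))  # (2 + 1) / 3 = 1.0
```

Candidate word pairs with the highest index are predicted to co-occur in future tweets; precision is then the fraction of predicted links that actually appear.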

Sanda Martinčić-Ipšić, Edvin Močibob, Ana Meštrović
Polyphon: An Algorithm for Phonetic String Matching in Russian Language

Data cleansing is a crucial matter in business intelligence. We propose a new phonetic algorithm for string matching in the Russian language without transliteration from Cyrillic to Latin characters. It is based on the rules of sound formation in the Russian language. Additionally, we consider an extended algorithm for matching Cyrillic strings in which phonetic code letters are represented as primes and the code of a string is the sum of these numbers. Experimental results show that our algorithms accurately match phonetically similar strings in the Russian language.
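The prime-sum idea of the extended algorithm can be sketched as follows. The table below is a toy illustration transliterated to Latin letters for readability; the published algorithm maps Cyrillic letters, and its actual prime assignments differ:

```python
def phonetic_code(word, prime_of):
    """Sum of the primes assigned to the phonetic classes of a word's
    letters; strings with equal sums are treated as phonetically similar."""
    return sum(prime_of.get(ch, 0) for ch in word.lower())

# Toy table: voiced/voiceless pairs such as b/p and g/k share one
# prime, mirroring Russian final devoicing.
primes = {"b": 2, "p": 2, "o": 3, "r": 5, "g": 7, "k": 7}
print(phonetic_code("gorb", primes) == phonetic_code("gorp", primes))  # True
```

Note that because addition is commutative, anagrams of the same code letters also collide; a full implementation has to account for such collisions when matching.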

Viacheslav V. Paramonov, Alexey O. Shigarov, Gennagy M. Ruzhnikov, Polina V. Belykh
Automatic Detection of Nominal Events in Hungarian Texts with Dependency Parsing and WordNet

Besides named entity recognition, the detection of events in natural language is an important area of Information Extraction. The detection and analysis of events in natural language texts play an important role in several NLP applications such as summarization and question answering. Most events are denoted by verbs, but other parts of speech (e.g. nouns and participles) can also denote events. In this work we deal with the detection of nominal events. We introduce a machine learning-based approach with a rich feature set that can automatically detect nominal events in Hungarian texts based on dependency parsing and WordNet. Additional methods were applied alongside the features, which improved the results and decreased running time. To the best of our knowledge, ours is the first result for detecting nominal events in Hungarian natural language texts with dependency parsing and WordNet. Evaluated on test databases, our algorithms achieve results competitive with current results for English and other languages.

Zoltán Subecz
Extracting Location Names from Unstructured Italian Texts Using Grammar Rules and MapReduce

Named entity recognition aims at locating elements in a given text and classifying them according to pre-defined categories, such as the names of persons, organisations, locations, quantities, etc. This paper proposes an approach for recognising location names by extracting them from unstructured Italian-language texts. We put forward the use of the MapReduce framework for this task, since it is more robust than a classical analysis when the data are unknown, and it assists in parallelising processing, which is essential for large amounts of data.

Christian Napoli, Emiliano Tramontana, Gabriella Verga
“Google” Lithuanian Speech Recognition Efficiency Evaluation Research

This paper presents research evaluating the efficiency of "Google" Lithuanian speech recognition. The chosen experimental method consists of three parts: (1) process all voice records without adding any noise; (2) process all voice records with several different types of noise, modified so as to obtain predefined signal-to-noise ratios (SNR); (3) after one month, reprocess all voice records without any additional noise and assess improvements in the quality of the speech recognition. The WER metric was chosen for speech recognition quality assessment. Analyzing the results of the experiment, it was observed that the greatest impact on the quality of speech recognition comes from the SNR and the speech type (isolated words are recognized best, spontaneous speech worst). Meanwhile, characteristics such as the gender of the speaker, fluency of speech, speech rate, and speech volume do not have any significant influence on speech recognition quality.
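As an illustrative sketch (not the authors' evaluation tooling), the WER metric is the word-level edit distance between the reference transcript and the recogniser output, divided by the reference length:

```python
def wer(reference, hypothesis):
    """Word Error Rate: word-level edit distance (substitutions +
    deletions + insertions) divided by the reference word count."""
    r, h = reference.split(), hypothesis.split()
    # Classic dynamic-programming table for Levenshtein distance.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[len(r)][len(h)] / len(r)

print(wer("labas rytas pasauli", "labas ritas"))  # 2 errors over 3 reference words
```

A WER of 0 means a perfect transcript; values above 1 are possible when the hypothesis inserts many extra words.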

Donatas Sipavičius, Rytis Maskeliūnas
Quality and Importance of Wikipedia Articles in Different Languages

This article aims to analyse the importance of Wikipedia articles in different languages (English, French, Russian, Polish) and the impact of importance on article quality. Based on an analysis of the literature and our own experience, we collected article-related measures specifying various aspects of quality, which are used to build models of articles' importance. For each language version, the influential parameters are selected that may allow automatic assessment of the validity of an article. Links between articles in different languages offer opportunities for comparison and verification of the quality of information provided by the various Wikipedia communities. Therefore, the model can be used not only for a relative assessment of the content of a whole article, but also for a relative assessment of the quality of data contained in its structural parts, the so-called infoboxes.

Włodzimierz Lewoniewski, Krzysztof Węcel, Witold Abramowicz
The Logical-Linguistic Model of Fact Extraction from English Texts

In this paper we propose a logical-linguistic model that allows the extraction of required facts from English sentences. We consider a fact in the form of a triplet, Subject > Predicate > Object, with the Predicate representing a relation and the Object and Subject denoting two entities. The logical-linguistic model is based on the grammatical and semantic features of words in English sentences. The basic mathematical apparatus of our model is logical-algebraic equations of the algebra of finite predicates. The model was successfully implemented in a system that extracts and identifies facts from the Web content of semi-structured and unstructured English texts.

Nina Feliksivna Khairova, Svetlana Petrasova, Ajit Pratap Singh Gautam

Information Technology Applications: Special Session on Internet-of-Things in Mobility Applications

Frontmatter
Analysing of the Voice Communication Channels for Ground Segment of Air Traffic Management System Based on Embedded Cloud Technology

Air Traffic Management (ATM) systems represent essential infrastructure that is critical for flight safety. Communication is a key element in the present ATM system. Communication between air traffic controllers and pilots remains a vital part of air traffic control operations, and communication problems can result in hazardous situations. ATM modernization calls for a paradigm shift away from voice towards digital data communications. In this paper the reliability of an ATM voice communication system (VCS) based on embedded cloud technology is discussed. A mathematical model of channel reliability in an ATM communication network based on embedded cloud technology under real operating conditions is developed. On the basis of the developed model, the boundary values of the reliability parameters for the automatics of the VCS embedded cloud in different failure modes are analyzed.

Igor Kabashkin
Towards a Model-Based Architecture for Road Traffic Management Systems

The transport domain is expected to profit substantially from the upcoming Internet of Things. Road Traffic Management Systems (RTMSs) constitute Cyber-physical Systems (CPSs) that collect and provide data on traffic events, e.g. for controlling and monitoring purposes. However, CPSs pose challenges such as modifiability, heterogeneity and flexibility that are crucial for RTMSs. In this paper we propose an approach based on the Model-driven Engineering paradigm to address these challenges when implementing and operating RTMSs. Thereby, we specify metamodels for RTMS software components and identify composition relationships between them as well as constructs for runtime modeling. A Domain-specific Language (DSL) based on one of the metamodels might thus reuse elements expressed in a DSL based on another metamodel, while the resulting system of DSLs allows various RTMS aspects to be modeled at both design time and runtime.

Florian Rademacher, Mirco Lammert, Marius Khan, Sabine Sachweh
Ultra Narrow Band Radio Technology in High-Density Built-Up Areas

The Internet of Things (IoT) is an oft-mentioned phenomenon closely linked to rapidly growing Low-Power Wide-Area Networks (LPWANs). Although LPWANs are already widely deployed, the real-life performance of communication systems such as Ultra Narrow Band (UNB) radio technology has not yet been deeply studied. Smart City applications of the IoT imply low energy consumption, high spectrum efficiency, and comprehensive security; hence the range limits are very constrained and must be verified before deployment starts. In this work, the authors examine the UNB technology SIGFOX and identify its technological constraints. These constraints allow us to formulate a measurement methodology combining range measurements in an urban area to show the transmission characteristics of SIGFOX technology. The subsequent evaluation of the technology using reference/commercially available SIGFOX hardware in real urban areas shows the impact on possible applications of this novel UNB technology.

Radim Kalfus, Tomáš Hégr
Change Impact in Product Lines: A Systematic Mapping Study

A product line (PL) supports and simplifies the development process of (software) systems by reusing assets. As systems are subjected to frequent alterations, the implementation of these changes can be a complex and error-prone task. For this reason, a change impact analysis (CIA) systematically identifies locations that are affected by a change. While both approaches (PL and CIA) per se are often discussed in the literature, their combination is still a challenge. This paper gives a comprehensive overview of the literature that addresses the integration of PL and CIA concepts. Furthermore, we classify our results to outline both the current state of research and its gaps. To this end, we conducted a systematic mapping study incorporating 165 papers. While most of the papers have their background in Software Product Lines (SPLs) (44.2 %) or PLs (5.5 %), CIA in combination with Multi Product Lines (2.4 %) or Product Families (PFs) (1.8 %) is sparsely addressed in the literature. The results show that CIA for SPLs has been partially addressed, whereas the consideration of different disciplines (PFs) is insufficiently covered.

Christopher Brink, Philipp Heisig, Fabian Wackermann
Enhancing City Transportation Services Using Cloud Support

Smart cities will provide enhanced monitoring of crucial infrastructure resources, connectivity to users and advanced information services. Thanks to the gathered data, the quality of traditional services and infrastructures will be improved, and further services can emerge, such as novel urban transportation services. This paper devises a solution that enforces cooperation between mobile devices and cloud infrastructures with the aim of bringing public transportation where people need it. Thanks to smartphones sensing user locations, a request for transportation vehicles can be sent to a cloud-based intelligence, which filters and serves requests according to the available transport routes and their adaptation to user needs. The available transportation vehicles are then alerted in a timely manner to operate accordingly.

Andrea Fornaia, Christian Napoli, Giuseppe Pappalardo, Emiliano Tramontana
Evaluation of a Mobility Approach to Support Vehicular Applications Using a Realistic Simulation Framework

The connected vehicle is becoming a reality. Internet access onboard will indeed increase road safety and security thanks to the cooperative networking that is expected among vehicles, roadside units and the Internet. Moreover, this connectivity will bring innovative driving assistance services and infotainment-like services for end users. This is endorsed by standardisation bodies like ETSI and the 5G-PPP, which are actively working on the definition of these innovative services and setting their requirements. The connected vehicle poses technological challenges that need to be addressed. Mobility has to be managed regardless of the location of the vehicles to ensure connectivity. At the same time, the required security and performance levels for the applications need to be ensured. In this paper, we present a realistic simulation framework for evaluating vehicular applications while the mobility management protocol, NeMHIP, runs underneath. The simulation framework is based on the integration of the OMNeT++, SUMO, VEINS and VsimRTI simulation tools. The obtained results have been compared with the requirements defined by the 5G-PPP automotive white paper and the ITU-T Y.1541 and 3GPP TS 22.105 standards, with satisfactory results. Thus, we demonstrate that the NeMHIP protocol is suitable because it fulfils the requirements of the applications while providing an essential mobility service. In addition, this work shows the validity of the simulation framework.

Nerea Toledo, Marivi Higuero, Maider Huarte, Juanjo Unzilla
An Usable Application for Authentication, Communication and Access Management in the Internet of Things

The following paper introduces a secure and efficient application concept that is capable of authenticating and accessing smart objects. The concept is based on two already developed applications. It describes the technologies used and discusses the outcome and potential pitfalls of the idea.

Matteo Cagnazzo, Markus Hertlein, Norbert Pohlmann
A Case Study on Self-configuring Systems in IoT Based on a Model-Driven Prototyping Approach

[Context and motivation] In recent years, the development of the Internet of Things (IoT) with self-configuring systems (SCSs) has become more important. Consequently, many different solutions have been developed. [Question/problem] We observed a lack of common benchmarks, in particular in the IoT domain, for evaluating and comparing solutions. Very few accessible cases (examples) for SCSs have been published at all. [Principal ideas/results] We propose a case from the IoT domain, smart cities in particular, which comprises hardware and software components. The starting point is a smart street lighting system with communication between the lamps and passing cars. [Contribution] First, in this paper we present our initial results of running a case study with a model-based prototyping framework on the smart street light. The framework includes a software simulation of the street lamp and the events from the passing cars. Second, an engineer can use the case as a benchmark to compare several approaches in order to make a more informed decision about which approach to choose.

Fabian Kneer, Erik Kamsties

Information Technology Applications: Regular Session on Information Technology Applications

Frontmatter
Minimization of Numerical Dispersion Errors in 2D Finite Element Models of Short Acoustic Wave Propagation

Numerical dispersion errors are inherent to simulations based on wave propagation models with discrete meshes. The paper presents an approach applied to reduce errors of this type in finite element models. High-order 2D synthesized finite elements with enhanced convergence properties are obtained by a modal synthesis technique. The obtained elements have a diagonal mass matrix, which enables the employment of explicit integration schemes for wave simulation. Waves of more than twice the frequency range can be simulated using models of synthesized elements compared to models assembled from conventional elements. Furthermore, such elements could be used as templates of higher-order elements to construct finite element models for all simulation problems of this kind.

Andrius Kriščiūnas, Rimantas Barauskas, Liudas Mažeika, Tautvydas Fyleris
On Methodology of E-wallet Construction for Partially Off-line Payment System

We propose a methodology for the construction of an e-wallet with off-line divisible e-cash, with such properties as anonymity against the vendor and full traceability by the bank. Since this system is fully controlled by the bank, from the issuance of e-money to e-cash deposit, the prevention of overpayment and the detection of dishonest users are provided. The proposed system prevents a serious drawback of existing anonymous and divisible e-cash systems noticed by Chaum, namely the growth of the amount of information during e-cash transfers among users. This is achieved by sacrificing such valuable properties of existing e-cash systems as an honest user's anonymity against the bank and off-line deposit. A proof of the proposed construction's security is provided.

Jonas Muleravičius, Eligijus Sakalauskas, Inga Timofejeva
Backmatter
Metadaten
Titel
Information and Software Technologies
herausgegeben von
Giedre Dregvaite
Robertas Damasevicius
Copyright-Jahr
2016
Electronic ISBN
978-3-319-46254-7
Print ISBN
978-3-319-46253-0
DOI
https://doi.org/10.1007/978-3-319-46254-7