
2016 | Book

Databases and Information Systems

12th International Baltic Conference, DB&IS 2016, Riga, Latvia, July 4-6, 2016, Proceedings


About this book

This book constitutes the refereed proceedings of the 12th International Baltic Conference on Databases and Information Systems, DB&IS 2016, held in Riga, Latvia, in July 2016.

The 25 revised full papers presented were carefully reviewed and selected from 62 submissions. The papers are organized in topical sections on ontology, conceptual modeling and databases; tools, technologies and languages for model-driven development; decision support systems and data mining; advanced systems and technologies; business process modeling and performance measurement; software testing and quality assurance; linguistic components of IS; information technology in teaching and learning.

Table of Contents

Frontmatter

Ontology, Conceptual Modeling and Databases

Frontmatter
Towards Self-explanatory Ontology Visualization with Contextual Verbalization
Abstract
Ontologies are one of the core foundations of the Semantic Web. To participate in Semantic Web projects, domain experts need to be able to understand the ontologies involved. Visual notations can provide an overview of the ontology and help users to understand the connections among entities. However, the users first need to learn the visual notation before they can interpret it correctly. A controlled natural language representation would be readable right away and might be preferred in the case of complex axioms; however, the structure of the ontology would remain less apparent. We propose to combine ontology visualizations with contextual ontology verbalizations of selected ontology (diagram) elements, displaying controlled natural language (CNL) explanations of the OWL axioms corresponding to the selected visual notation elements. Thus, domain experts will benefit both from the high-level overview provided by the graphical notation and from the detailed textual explanations of particular elements in the diagram.
Renārs Liepiņš, Uldis Bojārs, Normunds Grūzītis, Kārlis Čerāns, Edgars Celms
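A minimal sketch of the contextual verbalization idea from the abstract above, assuming a toy axiom encoding and English templates; the authors' actual CNL grammar and OWL handling are richer:

```python
# Hypothetical sketch: verbalize OWL-style axioms for a selected diagram element.
# The axiom encoding and CNL templates below are illustrative, not the authors' grammar.

AXIOMS = [
    ("SubClassOf", "Professor", "Person"),
    ("ObjectPropertyDomain", "teaches", "Professor"),
    ("ObjectPropertyRange", "teaches", "Course"),
]

TEMPLATES = {
    "SubClassOf": "Every {0} is a {1}.",
    "ObjectPropertyDomain": "If X {0} something, then X is a {1}.",
    "ObjectPropertyRange": "If X {0} Y, then Y is a {1}.",
}

def verbalize(selected_entity):
    """Return CNL sentences for all axioms that mention the selected entity."""
    sentences = []
    for kind, *args in AXIOMS:
        if selected_entity in args:
            sentences.append(TEMPLATES[kind].format(*args))
    return sentences

if __name__ == "__main__":
    # Selecting "Professor" in the diagram surfaces the axioms it participates in.
    for sentence in verbalize("Professor"):
        print(sentence)
```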
Self-service Ad-hoc Querying Using Controlled Natural Language
Abstract
The ad-hoc querying process is slow and error prone because business experts cannot access data directly without involving IT experts. The problem lies in the complexity of the means used to query data. We propose a new ad-hoc querying approach based on natural language and semistar ontologies, which lowers the steep learning curve required to query data. The proposed approach would significantly shorten the time needed to master ad-hoc querying and give business experts direct access to data, thus facilitating decision making in enterprises, government institutions and other organizations.
Janis Barzdins, Mikus Grasmanis, Edgars Rencis, Agris Sostaks, Juris Barzdins
Database to Ontology Mapping Patterns in RDB2OWL Lite
Abstract
We describe the RDB2OWL Lite language for relational database to RDF/OWL mapping specification and discuss the architectural and content specification patterns arising in mapping definition. RDB2OWL Lite is a simplification of the original RDB2OWL with aggregation possibilities and order-based filters removed, while providing in-mapping SQL view definition possibilities. The mapping constructs and their usage patterns are illustrated on mapping examples from the medical domain: medicine registries and a hospital information system. The RDB2OWL Lite mapping implementation is offered via translation into both D2RQ and the standard R2RML mapping notations.
Kārlis Čerāns, Guntars Būmans

Tools, Technologies and Languages for Model-Driven Development

Frontmatter
Models and Model Transformations Within Web Applications
Abstract
Unlike traditional single-user desktop applications, web applications have separate memory and computational resources (the client and the server side) and have to deal with multiple user accounts. This complicates the development process. Is there an approach to creating web applications without thinking about web-specific aspects, as if we were developing stand-alone desktop applications? We say “yes”, and that is where models and model transformations come in handy. The proposed model-driven approach simplifies the development of web applications and makes it possible to use a single code base for deploying both desktop and web-based versions of the software.
Sergejs Kozlovics
Metamodel Specialization for DSL Tool Building
Abstract
Most domain-specific tool building, and especially diagram editor building, nowadays involves some usage of metamodels. However, the metamodel alone is normally not sufficient to define an editor: frequently the metamodel just defines the abstract syntax of the domain, and mappings or transformations are required to define the editor. Another approach [8] is based on a fixed type metamodel, where an editor definition consists of an instance of this metamodel to be executed by an engine; however, a number of functionality extensions in a transformation language are typically required. The paper offers a new approach based on metamodel specialization. First, metamodel specialization based on UML class diagrams and OCL is defined. A universal metamodel and an associated universal engine are described, and then it is shown how a specific editor definition can be obtained by specializing this metamodel. Examples of a flowchart editor and a UML class diagram editor are given.
Audris Kalnins, Janis Barzdins
Models of Event Driven Systems
Abstract
This paper presents a business process modeling approach based on the usage of Domain Specific Languages (DSLs). The proposed approach allows us to create executable information system models and extends the concept of Event Driven Architecture (EDA) with a description of business process execution. It lets us apply the principles of Model Driven Development (MDD) in order to create an information system that complies with the model. The approach provides a set of advantages in information systems development, use and maintenance: it bridges the gap between business and IT, yields an exact specification that is easy to implement in an information system, keeps documentation up to date, etc. Practical experience proves the viability of the proposed approach.
Zane Bicevska, Janis Bicevskis, Girts Karnitis
DSML Tool Building Platform in WEB
Abstract
The paper discusses how to build a DSML tool building platform on the Web. Previously this was not possible due to the limited ability of browsers to render graphical diagrams, but the technologies have evolved and these limitations have now been eliminated. Basically, the platform consists of three components: the Presentation Engine, the Interpreter and the Configurator. The paper explains the tasks of each component and how they interact with each other. To demonstrate the tool building process, the building of a simple flowchart editor is presented.
Arturs Sprogis

Decision Support Systems and Data Mining

Frontmatter
Algorithms for Extracting Mental Activity Phases from Heart Beat Rate Streams
Abstract
The paper presents algorithms for automatic detection of non-stationary periods of cardiac rhythm during professional activity. During work and subsequent rest, the operator passes through the phases of mobilization, stabilization, work, recovery and rest. The amplitude and frequency of non-stationary periods of cardiac rhythm indicate the human resistance to stressful conditions. We introduce and analyze a number of algorithms for non-stationary phase extraction: different approaches to preliminary phase detection, threshold extraction and final phase extraction are studied experimentally. These algorithms are based on local extremum computation and on the analysis of linear regression coefficient histograms. The algorithms do not need any labeled datasets for training and can be applied to any person individually. The suggested algorithms were experimentally compared and evaluated by human experts.
Alina Dubatovka, Elena Mikhailova, Mikhail Zotov, Boris Novikov
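A rough sketch of slope-based segmentation in the spirit of the abstract above; the window size and the percentile rule standing in for the histogram-derived threshold are assumptions, not the authors' algorithms:

```python
# Illustrative sketch only: sliding-window regression slopes over a heart-beat
# rate stream, with a percentile-based threshold standing in for the
# histogram analysis described in the paper.
import statistics

def window_slope(values):
    """Least-squares slope of values against their index positions."""
    n = len(values)
    xs = range(n)
    mean_x = (n - 1) / 2
    mean_y = statistics.fmean(values)
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def non_stationary_windows(stream, window=10, quantile=0.9):
    slopes = [window_slope(stream[i:i + window])
              for i in range(len(stream) - window + 1)]
    # Threshold derived from the slope distribution (assumed rule).
    threshold = sorted(abs(s) for s in slopes)[int(quantile * (len(slopes) - 1))]
    return [i for i, s in enumerate(slopes) if abs(s) > threshold]

if __name__ == "__main__":
    beats = [70] * 20 + [70 + i for i in range(15)] + [85] * 20  # mobilization ramp
    print(non_stationary_windows(beats))  # indices of non-stationary windows
```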
Scheduling Approach for Enhancing Quality of Service in Real-Time DBMS
Abstract
Applications are increasingly characterized by the manipulation of large amounts of data and by the time constraints to which both data and processing are subject. RTDBMSs (Real-Time DataBase Management Systems) are an appropriate formalism to handle such applications. However, an RTDBMS often goes through overload periods following the unexpected arrival of user transactions. During such periods, transactions are more likely to miss their deadlines, which directly affects the QoS (Quality of Service) provided to users. Our work therefore proposes a new scheduling protocol to optimize the execution of transactions without exceeding their deadlines. It consists in assigning priorities to transactions based on their deadlines, their arrival dates and the priority levels defined by users. We also show that our approach can maximize the number of successful transactions, in particular those classified as critical by users. The obtained results are compared with conventional scheduling approaches.
Fehima Achour, Emna Bouazizi, Wassim Jaziri
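A hedged sketch of the kind of priority assignment the abstract describes, combining deadline, arrival date and a user-defined level; the scoring rule and weights are assumptions, not the proposed protocol:

```python
# Illustrative only: combine deadline, arrival time and a user-assigned level
# into a single dispatch score; the weights are hypothetical placeholders.
import heapq
import time

def dispatch_priority(deadline, arrival, user_level, now=None,
                      w_deadline=0.6, w_wait=0.2, w_user=2.0):
    """Lower score means the transaction is dispatched earlier."""
    now = time.time() if now is None else now
    slack = deadline - now            # time left before the deadline
    waited = now - arrival            # how long the transaction has queued
    return w_deadline * slack - w_wait * waited - w_user * user_level

if __name__ == "__main__":
    now = 1000.0
    queue = []
    # (deadline, arrival, user level 0..3, name)
    for deadline, arrival, level, name in [
        (1010.0, 990.0, 3, "critical-report"),
        (1005.0, 999.0, 1, "routine-update"),
        (1030.0, 980.0, 2, "batch-load"),
    ]:
        heapq.heappush(queue, (dispatch_priority(deadline, arrival, level, now), name))
    while queue:
        print(heapq.heappop(queue))  # critical, deadline-near work comes out first
```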
A Comparative Analysis of Algorithms for Mining Frequent Itemsets
Abstract
Finding frequent sets of items was first considered critical to mining association rules in the early 1990s. In the subsequent two decades, numerous new methods of finding frequent itemsets have appeared, which underlines the importance of this problem. The growing number of algorithms has made it more difficult to select a proper one for a particular task and/or a particular type of data. This article analyses and compares the twelve most widely used algorithms for mining association rules. The choice of the most efficient of the twelve algorithms is made not only on the basis of available research data, but also on empirical evidence. In addition, the article gives a detailed description of some approaches and contains an overview and classification of the algorithms.
Vyacheslav Busarov, Natalia Grafeeva, Elena Mikhailova
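For orientation, a compact Apriori-style pass, one of the classical frequent-itemset methods such comparisons cover (simplified here; candidate generation and counting are kept naive):

```python
# Compact Apriori sketch: level-wise generation of frequent itemsets.
from itertools import combinations

def apriori(transactions, min_support=2):
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}
    frequent = {}
    candidates = [frozenset([i]) for i in sorted(items)]
    k = 1
    while candidates:
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        level = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(level)
        # Join step: combine frequent k-itemsets into (k+1)-candidates.
        keys = list(level)
        candidates = {a | b for a, b in combinations(keys, 2) if len(a | b) == k + 1}
        # Prune step: every k-subset of a candidate must itself be frequent.
        candidates = [c for c in candidates
                      if all(frozenset(s) in level for s in combinations(c, k))]
        k += 1
    return frequent

if __name__ == "__main__":
    baskets = [("bread", "milk"), ("bread", "beer", "eggs"),
               ("milk", "beer", "bread"), ("bread", "milk", "beer")]
    for itemset, support in sorted(apriori(baskets).items(),
                                   key=lambda x: (len(x[0]), -x[1])):
        print(set(itemset), support)
```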
A WebGIS Application for Cloud Storm Monitoring
Abstract
Extreme weather phenomena (e.g. heavy precipitation, hail and lightning) frequently cause damage to property and agricultural production and usually originate from cloud storms. Automated systems able to provide timely and accurate monitoring and predictions would help prevent the effects of natural disasters and reduce economic losses. Nowadays, meteorological satellites play a significant role in weather monitoring and forecasting, providing accurate and high resolution data. Such data can be analyzed using Geographical Information Systems (GIS) and modern web technologies to develop integrated, automated, web-based monitoring systems. This study describes a WebGIS application focused on monitoring and forecasting the evolution of storm cloud tops. The application has been developed using modern tools, exploiting their features in an innovative web-based monitoring system. Open source frameworks are used to ensure the mobility, stability and portability of the application.
Stavros Kolios, Dimitrios Loukadakis, Chrysostomos Stylios, Andreas Kazantzidis, Aleksandr Petunin

Advanced Systems and Technologies

Frontmatter
Self-management of Information Systems
Abstract
The paper discusses self-management features that are intended to support the usage and maintenance processes in the information system life cycle. Instead of the universal solutions pursued by many researchers in the autonomic computing field, this approach, called smart technologies, provides self-management features by including autonomic components directly in information systems. The approach has been applied in practice in several information systems, and the results show that implementing self-management features requires relatively modest resources. The approach is thereby suitable even for smaller projects and companies.
Janis Bicevskis, Zane Bicevska, Ivo Oditis
On the Smart Spaces Approach to Semantic-Driven Design of Service-Oriented Information Systems
Abstract
Smart spaces define an approach to the development of service-oriented information systems for computing environments of the Internet of Things (IoT). Semantic-driven resource sharing is applied to fuse the physical and information worlds. Knowledge from both worlds is selectively encompassed in the smart space to serve users’ needs. In this paper, we consider several principles of the smart spaces approach to semantic-driven design of service-oriented information systems. Developers can apply these principles to achieve such properties as (a) involvement of many surrounding devices and the user’s personal mobile devices in service construction, (b) use of external Internet services and data sources to enhance the constructed services, and (c) information-driven programming of service construction based on resource sharing. The principles are derived from our experience in software development for such application domains as collaborative work, e-tourism, and mobile health.
Dmitry Korzun
Host Side Caching: Solutions and Opportunities
Abstract
Host-side caches use a form of storage that is faster than disk and less expensive than DRAM, e.g. NAND flash, to deliver the speed demanded by data-intensive applications. A host-side cache may integrate seamlessly into an existing application by using an infrastructure component (such as a storage stack middleware or the operating system) to intercept the application’s read and write requests for disk pages, populate the flash cache with disk pages, and use the flash to service read and write requests intelligently. This study provides an overview of host-side caches, an analysis of their overhead and costs to justify their use, alternative architectures including the use of emerging Non-Volatile Memory (NVM) for the host-side cache, and future research directions. Results from Dell’s Fluid Cache demonstrate that it enhances the performance of a social networking workload by a factor of 3.6 to 18.
Shahram Ghandeharizadeh, Jai Menon, Gary Kotzur, Sujoy Sen, Gaurav Chawla
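A schematic read-through/write-through wrapper illustrating the interception idea from the abstract; the LRU policy and dict-like interfaces are assumptions, and real host-side caches such as Fluid Cache operate inside the storage stack rather than at application level:

```python
# Schematic sketch of a host-side block cache: intercept page reads/writes,
# serve hits from a fast tier (standing in for NAND flash / NVM), and fall
# back to the slow disk tier on misses. LRU eviction is an assumed policy.
from collections import OrderedDict

class HostSideCache:
    def __init__(self, backing_store, capacity_pages=1024):
        self.disk = backing_store            # dict-like slow tier
        self.flash = OrderedDict()           # fast tier, kept in LRU order
        self.capacity = capacity_pages

    def read(self, page_id):
        if page_id in self.flash:            # cache hit: serve from flash
            self.flash.move_to_end(page_id)
            return self.flash[page_id]
        data = self.disk[page_id]            # miss: go to disk, then populate
        self._populate(page_id, data)
        return data

    def write(self, page_id, data):
        self.disk[page_id] = data            # write-through to disk
        self._populate(page_id, data)        # keep the cached copy current

    def _populate(self, page_id, data):
        self.flash[page_id] = data
        self.flash.move_to_end(page_id)
        while len(self.flash) > self.capacity:
            self.flash.popitem(last=False)   # evict least recently used page

if __name__ == "__main__":
    disk = {n: f"page-{n}" for n in range(10)}
    cache = HostSideCache(disk, capacity_pages=4)
    for n in [0, 1, 2, 0, 3, 4, 0]:
        cache.read(n)
    print(list(cache.flash))                 # pages still resident in the fast tier
```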
Conclusions from the Evaluation of Virtual Machine Based High Resolution Display Wall System
Abstract
There are several approaches to the construction of large-scale high resolution display walls, depending on the required use case. Some require support for 3D acceleration APIs like OpenGL, some require stereoscopic projection, and others simply require a surface with a very high display resolution. The authors of this paper have developed a virtual machine based high resolution display wall architecture that works for all planar projection use cases and does not require custom integration. The software that generates the presented content is executed in a virtual machine, so no specific APIs other than those of the virtualized OS are required. Any software that is able to run under a given OS can be run on this display wall architecture without modifications. The authors have performed performance evaluations, virtualization environment comparisons and comparisons with other display wall architectures. This knowledge, along with the key conclusions, is summarized in this paper.
Rudolfs Bundulis, Guntis Arnicans

Business Process Modeling and Performance Measurement

Frontmatter
The Enterprise Model Frame for Supporting Security Requirement Elicitation from Business Processes
Abstract
It is generally accepted that security requirements have to be elicited as early as possible to avoid later rework in the systems development process. One of the reasons why early detection of security requirements is difficult is the complexity of security requirements identification. In this paper we propose an extension of the method for security requirements elicitation from business processes (SREBP). The extension applies the enterprise model frame to capture enterprise views and the relationships of the analysed system assets. Although the proposal has been used in some practical settings, the main goal of this work is a conceptual discussion of the proposal. Our study shows that (i) the enterprise model frame covers practically all concepts of information security related definitions, and that (ii) the use of the frame with the SREBP method complies with common enterprise modeling and enterprise architecture approaches.
Marite Kirikova, Raimundas Matulevičius, Kurt Sandkuhl
Knowledge Management Performance Measurement: A Generic Framework
Abstract
This theoretical article proposes a generic framework for measuring the performance of Knowledge Management (KM) projects based on a critical literature review. The proposed framework fills the existing gap in KM performance measurement on two points: (i) it provides a generic tool that is able to assess all kinds of KM projects as well as the organization’s overall KM, and (ii) it assesses KM projects against KM objectives in a generic manner. Our framework (GKMPM) relies on a process reference model that provides a common, process-based understanding of KM. It is based on a goal-oriented measurement approach and considers KM performance dimensions to be stakeholders’ objectives. The framework application follows a procedural approach that begins with KM project modelling, followed by objectives prioritization. The next step consists of collecting and analysing data for pre-designed measures, and produces a set of key performance indicators (KPIs) related to the KM project processes and in accordance with its objectives.
Latifa Oufkir, Mounia Fredj, Ismail Kassou

Software Testing and Quality Assurance

Frontmatter
A Study on Immediate Automatic Usability Evaluation of Web Application User Interfaces
Abstract
More and more web applications are being migrated from desktop platforms to mobile platforms. The user experience on desktop and portable devices is extremely different. Changes in user interfaces (UI) can lead to severe violations of usability rules; e.g. changing the text color can decrease accessibility for users with low vision or cognitive impairments. Manual usability inspection methods help to verify conformance to usability guidelines. Nevertheless, there are a number of reasons why these approaches cannot always be applied. The purpose of our research is to develop a conceptual model and a corresponding framework, including category-specific metrics and a methodology, for immediate automatic usability evaluation of web application user interfaces during the design and implementation phases. We address the gap between usability evaluation and the user interface development stage by providing immediate feedback to UI developers.
Jevgeni Marenkov, Tarmo Robal, Ahto Kalja
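One concrete example of an immediate, rule-based check such a framework could run when a text color changes is the standard WCAG contrast-ratio test; its wiring into the authors' framework is an assumption here:

```python
# WCAG 2.x contrast-ratio check, the kind of automatic rule an immediate
# usability evaluator can apply when a developer changes a text color.

def relative_luminance(rgb):
    """Relative luminance of an sRGB color given as (r, g, b) in 0..255."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def check_text_contrast(fg, bg, large_text=False):
    """WCAG AA requires 4.5:1 for normal text and 3:1 for large text."""
    required = 3.0 if large_text else 4.5
    ratio = contrast_ratio(fg, bg)
    return ratio >= required, round(ratio, 2)

if __name__ == "__main__":
    print(check_text_contrast((119, 119, 119), (255, 255, 255)))  # grey on white
    print(check_text_contrast((0, 0, 0), (255, 255, 255)))        # black on white
```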
Model-Based Testing of Real-Time Distributed Systems
Abstract
Modern financial systems have grown to the scale of global geographic distribution, and latency requirements are measured in nanoseconds. Low-latency systems, where reaction time is the primary success factor and design consideration, are a serious challenge to existing integration and system-level testing techniques. While existing tools support prescribed input profiles, they seldom provide enough reactivity to run tests with simultaneous and interdependent input profiles at remote frontends. Additional complexities emerge due to the severe timing constraints the tests have to meet when the test navigation decision time approaches the message propagation time. Sufficient timing conditions for remote online testing have been proven by Larsen et al. and recently implemented in the Δ-testing method. We extend Δ-testing by deploying testers on a fully distributed test architecture. This approach reduces the test reaction time by almost a factor of two. We validate the method on a case study of a distributed, time-sensitive global financial system.
Jüri Vain, Evelin Halling, Gert Kanter, Aivo Anier, Deepak Pal

Linguistic Components of IS

Frontmatter
Detection of Multiple Implicit Features per Sentence in Consumer Review Data
Abstract
With the rise of e-commerce, online consumer reviews have become crucial for consumers’ purchasing decisions. Most of the existing research focuses on the detection of explicit features and sentiments in such reviews, thereby ignoring everything that is reviewed implicitly. Extending an existing implicit feature algorithm that can assign only one implicit feature to each sentence, this study builds a classifier that predicts the presence of multiple implicit features in a sentence. The classifier makes its prediction based on a score function and is trained by means of a threshold. Only if this score exceeds the threshold do we allow the detection of multiple implicit features. In this way, we increase the recall while limiting the decrease in precision. In the more realistic scenario, the classifier-based approach improves the F1-score by 1.6 percentage points on a restaurant review data set.
Nikoleta Dosoula, Roel Griep, Rick den Ridder, Rick Slangen, Kim Schouten, Flavius Frasincar
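A simplified sketch of a score-and-threshold classifier in the spirit of the abstract; the co-occurrence score and the toy threshold training below are assumptions about the details, not the paper's formulation:

```python
# Illustrative sketch: score each candidate implicit feature for a sentence by
# co-occurrence with the sentence's words; a trained threshold decides whether
# more than one feature may be assigned. All details here are assumptions.
from collections import defaultdict

class ImplicitFeatureDetector:
    def __init__(self):
        self.cooc = defaultdict(lambda: defaultdict(int))
        self.threshold = 0.0

    def train(self, labelled_sentences, thresholds=(0.5, 1.0, 1.5, 2.0)):
        for words, features in labelled_sentences:
            for f in features:
                for w in words:
                    self.cooc[f][w] += 1
        # Pick the threshold that best recovers multi-feature sentences (toy rule).
        def hits(t):
            return sum(len(self.predict(w, t)) == len(f)
                       for w, f in labelled_sentences)
        self.threshold = max(thresholds, key=hits)

    def score(self, feature, words):
        return sum(self.cooc[feature][w] for w in words) / max(len(words), 1)

    def predict(self, words, threshold=None):
        t = self.threshold if threshold is None else threshold
        scored = {f: self.score(f, words) for f in self.cooc}
        best = max(scored, key=scored.get, default=None)
        # Always assign the top feature; add further features only above the threshold.
        return [f for f, s in scored.items() if f == best or s >= t]

if __name__ == "__main__":
    data = [(["the", "waiter", "was", "rude"], ["service"]),
            (["cheap", "and", "cozy", "place"], ["price", "ambience"])]
    det = ImplicitFeatureDetector()
    det.train(data)
    print(det.predict(["rude", "but", "cheap"]))
```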
K-Translate - Interactive Multi-system Machine Translation
Abstract
The tool described in this article has been designed to help machine translation (MT) researchers combine and evaluate the outputs of various MT engines through a web-based graphical user interface, using syntactic analysis and language modelling. The tool supports user-provided translations as well as translations from the application program interfaces (APIs) of popular online MT systems. The best translation hypothesis is selected by calculating the perplexity of each hypothesis. The evaluation panel provides sentence tree graphs and chunk statistics. The result is a syntax-based multi-system translation tool that shows an improvement in BLEU scores compared to the best individual baseline MT system. We also present a demo server with data for combining English-Latvian translations.
Matīss Rikters
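A toy version of perplexity-based hypothesis selection as described in the abstract, using a unigram model with add-one smoothing; the real tool relies on proper language models and syntactic analysis:

```python
# Toy sketch: pick the MT hypothesis with the lowest perplexity under a
# unigram language model with add-one smoothing. The corpus and the candidate
# translations below are illustrative.
import math
from collections import Counter

def train_unigram(corpus_sentences):
    counts = Counter(w for s in corpus_sentences for w in s.lower().split())
    total = sum(counts.values())
    vocab = len(counts) + 1                      # +1 reserves mass for unseen words
    return lambda w: (counts[w] + 1) / (total + vocab)

def perplexity(sentence, prob):
    words = sentence.lower().split()
    log_sum = sum(math.log(prob(w)) for w in words)
    return math.exp(-log_sum / len(words))

def best_hypothesis(hypotheses, prob):
    return min(hypotheses, key=lambda h: perplexity(h, prob))

if __name__ == "__main__":
    corpus = ["the cat sat on the mat", "the dog sat on the rug",
              "a cat and a dog sat together"]
    prob = train_unigram(corpus)
    candidates = ["the cat sat on the mat",        # system A
                  "cat the on sat mat the zzz"]    # system B (garbled output)
    print(best_hypothesis(candidates, prob))
```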
Web News Sentence Searching Using Linguistic Graph Similarity
Abstract
As the amount of news publications increases each day, so does the need for effective search algorithms. Because simple word-based approaches are inherently limited, ignoring much of the information in natural language, in this paper we propose a linguistic approach called Destiny, which utilizes this information to improve search results. The major difference from approaches that represent text as a bag-of-words is that Destiny represents sentences as graphs, with words as nodes and the grammatical relations between words as edges. The proposed algorithm is evaluated using a custom corpus of user-rated sentences and, compared to a TF-IDF baseline, performs significantly better in terms of Mean Average Precision, normalized Discounted Cumulative Gain, and Spearman’s Rho.
Kim Schouten, Flavius Frasincar
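A minimal illustration of graph-based sentence matching in the spirit of Destiny; the hand-written dependency edges and the Jaccard overlap are assumptions, and the paper's node and edge matching is more elaborate:

```python
# Illustrative sketch: represent sentences as sets of (head, relation, dependent)
# edges and rank them by edge overlap with the query graph. A parser would
# normally produce the edges; here they are written by hand.

def edge_similarity(edges_a, edges_b):
    a, b = set(edges_a), set(edges_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def search(query_edges, indexed_sentences):
    """Rank indexed (text, edges) pairs by graph similarity to the query."""
    ranked = sorted(indexed_sentences,
                    key=lambda item: edge_similarity(query_edges, item[1]),
                    reverse=True)
    return [(text, round(edge_similarity(query_edges, edges), 2))
            for text, edges in ranked]

if __name__ == "__main__":
    query = {("raises", "nsubj", "bank"), ("raises", "dobj", "rates")}
    corpus = [
        ("The bank raises interest rates",
         {("raises", "nsubj", "bank"), ("raises", "dobj", "rates"),
          ("rates", "amod", "interest")}),
        ("The river bank eroded quickly",
         {("eroded", "nsubj", "bank"), ("eroded", "advmod", "quickly")}),
    ]
    for text, score in search(query, corpus):
        print(score, text)
```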

Information Technology in Teaching and Learning

Frontmatter
Heuristic Method to Improve Systematic Collection of Terminology
Abstract
In this paper, we propose an experimental tool for the analysis and graphical representation of glossaries. The original heuristic algorithms and analysis methods incorporated into the tool proved useful for improving the quality of glossaries. The tool was used to analyse the ISTQB Standard Glossary of Terms Used in Software Testing. The paper describes instances of problems found in the ISTQB glossary related to its consistency, completeness, and correctness.
Vineta Arnicane, Guntis Arnicans, Juris Borzovs
The Application of Optimal Topic Sequence in Adaptive e-Learning Systems
Abstract
In an adaptive e-learning system, an opportunity to choose the course topic sequence is given to ensure personalization. The topic sequence can be obtained from three sources: a teacher-offered topic sequence based on the teacher’s pedagogical experience; the learner’s free choice based on indicated links between topics; and, finally, the optimal topic sequence acquisition method described in this article. The optimal topic sequence is based on previous learners’ experience. The method collects data about previous learners’ course topic sequences and course results. After the data analysis, the optimal topic sequence for the specific course is derived based on the links between course topics. This article describes an experimental test of the method.
Vija Vagale, Laila Niedrite
Initial Steps Towards the Development of Formal Method for Evaluation of Concept Map Complexity from the Systems Viewpoint
Abstract
An advantage of intelligent tutoring systems (ITS) is their ability to adapt to the current knowledge level of each learner by offering the most suitable tasks for him/her at the current phase of the learning process. The main problem is how to evaluate the degree of task difficulty, which, as a rule, is done subjectively. The paper presents the first results of ongoing research, the final goal of which is to develop a formal method for evaluating the degree of difficulty of concept map-based tasks. The basic idea is to evaluate concept map (CM) complexity using the four aspects applied to the estimation of system complexity: the number of a system’s elements and the relationships between them; the attributes of systems, their elements and relationships; and the organizational degree of systems. The proposed approach is illustrated on relatively simple hierarchical CMs whose complexity as systems is estimated.
Janis Grundspenkis
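A first-cut illustration of counting-based complexity indicators for a concept map given as propositions; how the four aspects are weighted into a single difficulty score is left open in the abstract, so the weights below are placeholders:

```python
# Illustrative sketch: simple structural indicators for a concept map given as
# (concept, relation, concept) triples. The combination into a difficulty score
# uses hypothetical placeholder weights, not the paper's measure.

def cm_indicators(propositions):
    concepts = {c for a, _, b in propositions for c in (a, b)}
    labels = {r for _, r, _ in propositions}
    out_degree = {}
    for a, _, _ in propositions:
        out_degree[a] = out_degree.get(a, 0) + 1
    return {"concepts": len(concepts),
            "relationships": len(propositions),
            "relation_labels": len(labels),
            "max_branching": max(out_degree.values(), default=0)}

def difficulty_score(ind, weights=(1.0, 1.5, 0.5, 2.0)):
    w1, w2, w3, w4 = weights          # placeholder weights
    return (w1 * ind["concepts"] + w2 * ind["relationships"]
            + w3 * ind["relation_labels"] + w4 * ind["max_branching"])

if __name__ == "__main__":
    cm = [("plant", "has part", "root"), ("plant", "has part", "leaf"),
          ("leaf", "performs", "photosynthesis")]
    ind = cm_indicators(cm)
    print(ind, difficulty_score(ind))
```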
Backmatter
Metadata
Title
Databases and Information Systems
edited by
Guntis Arnicans
Vineta Arnicane
Juris Borzovs
Laila Niedrite
Copyright Year
2016
Electronic ISBN
978-3-319-40180-5
Print ISBN
978-3-319-40179-9
DOI
https://doi.org/10.1007/978-3-319-40180-5