
2013 | Book

Information and Software Technologies

19th International Conference, ICIST 2013, Kaunas, Lithuania, October 2013. Proceedings

Edited by: Tomas Skersys, Rimantas Butleris, Rita Butkiene

Publisher: Springer Berlin Heidelberg

Book Series: Communications in Computer and Information Science


About this Book

This book constitutes the refereed proceedings of the 19th International Conference on Information and Software Technologies, ICIST 2013, held in Kaunas, Lithuania, in October 2013. The 34 papers presented were carefully reviewed and selected from 60 submissions. The papers focus on the following topics: information systems, business intelligence, software engineering, and IT applications.

Table of Contents

Frontmatter

Information Systems

Analysis of Control System with Delay Using the Lambert Function

The mathematical model of a mutual synchronization system with a ring structure, composed of n (n ∈ N) oscillators, is investigated. The mathematical model of the system is a matrix differential equation with delayed argument, whose solution is obtained by applying the Lambert function method. Using the obtained solution, the transients in the system are examined. The results calculated with the Lambert function method are compared with those obtained by the exact method of consequent integration.

Irma Ivanovienė, Jonas Rimas
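
For the scalar special case of such a delay model, x′(t) = a·x(t − τ), the Lambert function method gives the dominant characteristic root in closed form: from s = a·e^(−sτ) it follows that sτ·e^(sτ) = aτ, so s = W(aτ)/τ. The sketch below illustrates this (a minimal illustration only; the paper treats the full matrix case, and the pure-Python `lambert_w` Newton iteration here is an assumption, not the authors' implementation):

```python
import math

def lambert_w(x, tol=1e-12):
    """Principal branch W(x) via Newton iteration (real x >= -1/e)."""
    if x < -1.0 / math.e:
        raise ValueError("W(x) is not real for x < -1/e")
    w = 0.0 if x < 1 else math.log(x)  # starting guess
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def dominant_root(a, tau):
    """Dominant characteristic root s of x'(t) = a*x(t - tau).

    From s = a*exp(-s*tau):  s*tau*exp(s*tau) = a*tau,
    hence s = W(a*tau)/tau on the principal branch.
    """
    return lambert_w(a * tau) / tau
```

A negative real part of the dominant root indicates that transients decay, which is the kind of behavior the paper examines for the ring of oscillators.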
The Quality Management Metamodel in the Enterprise Architecture

The paper presents a methodology for determining, managing, simulating and optimizing the quality of an enterprise architecture, based on two metamodels defined by the author: a class metamodel and a process metamodel for quality management of this architecture. The second of them (the process metamodel), developed in BPMN, has undergone simulation and optimization using the ARIS Business Process Simulator. The results of this simulation and optimization are presented in the article. The research method developed by the author, and the results, are related to and creatively extend the following ISO standards: ISO/IEC 24744 [1], ISO/IEC 12207 [2], ISO/IEC 42010 [3], as well as the well-known methodologies "A Framework for Information Systems Architecture" (Zachman, J. A.) and TOGAF, "The Open Group Architecture Framework" [12].

Jerzy Roszkowski, Agata Roszkowska
Ontology Matching Using TF/IDF Measure with Synonym Recognition

Ontology matching is an important process for the integration of heterogeneous data sources. A large number of different matchers for comparing ontologies exist. They can be classified into element-level and structure-level matchers. The element-level matchers compare entities while ignoring their relations with other entities, whereas the structure-level matchers consider these relations. The TF/IDF (term frequency / inverse document frequency) measure is useful for determining the weights of key terms in documents. In our matching system we use the TF/IDF measure for comparing documents that store data about ontology entities. However, TF/IDF does not take synonyms into account, and it may happen that the terms that best describe two entities are synonyms. In this paper we propose a matcher that combines the TF/IDF measure with synonym recognition when determining key term weights, in order to improve the results of ontology matching. Evaluation of the matcher is performed on case study examples.

Marko Gulić, Ivan Magdalenić, Boris Vrdoljak
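
The idea of folding synonyms into TF/IDF weighting can be sketched as follows (a hypothetical minimal implementation; the synonym map, tokenization and cosine comparison are illustrative assumptions, not the authors' matcher):

```python
import math
from collections import Counter

SYNONYMS = {"automobile": "car", "vehicle": "car"}  # illustrative synonym map

def canon(term):
    """Map a term to its canonical synonym-group representative."""
    return SYNONYMS.get(term, term)

def tfidf_vectors(docs):
    """TF-IDF weight vectors per document, with synonyms folded together."""
    docs = [[canon(t) for t in d.lower().split()] for d in docs]
    n = len(docs)
    df = Counter()                      # document frequency per canonical term
    for d in docs:
        df.update(set(d))
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append({t: (tf[t] / len(d)) * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(u, v):
    """Cosine similarity of two sparse term-weight dictionaries."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0
```

With the synonym map applied, "car engine" and "automobile engine" receive identical vectors and match perfectly, which plain TF/IDF would miss.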
Moving Averages for Financial Data Smoothing

Moving averages have long been used for financial data smoothing and are among the first indicators in technical analysis trading. Traders have long debated which moving average is better, and as a result many moving averages have been created. In this empirical study we review the 19 most popular moving averages, create a taxonomy, and compare them using the two most important factors: smoothness and lag. Smoothness indicates how much the indicator changes (its angle), while lag indicates how far the moving average trails the current price. The aim is to obtain values that are as smooth as possible, to avoid erroneous trades, with minimal lag, to increase trend detection speed. This large-scale empirical study, performed on 1850 real-world time series including stocks, ETF, Forex and futures daily data, demonstrates that the best smoothness/lag ratio is achieved by the Exponential Hull Moving Average (with price correction) and the Triple Exponential Moving Average (without correction).

Aistis Raudys, Vaidotas Lenčiauskas, Edmundas Malčius
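
The smoothness and lag factors described above can be sketched with a simple moving-average benchmark (an illustrative sketch; the exact metric definitions and the 19 averages used in the study may differ):

```python
def ema(prices, n):
    """Exponential moving average with smoothing factor 2/(n+1)."""
    alpha = 2.0 / (n + 1)
    out = [prices[0]]
    for p in prices[1:]:
        out.append(alpha * p + (1 - alpha) * out[-1])
    return out

def sma(prices, n):
    """Simple moving average (leading values use the partial window)."""
    out = []
    for i in range(len(prices)):
        window = prices[max(0, i - n + 1): i + 1]
        out.append(sum(window) / len(window))
    return out

def smoothness(series):
    """Mean absolute change between consecutive values (lower = smoother)."""
    return sum(abs(b - a) for a, b in zip(series, series[1:])) / (len(series) - 1)

def lag(prices, ma):
    """Mean absolute distance between the indicator and the current price."""
    return sum(abs(p - m) for p, m in zip(prices, ma)) / len(prices)
```

Computing both metrics for each candidate average over many series gives the smoothness/lag ratio the paper uses to rank the averages.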
Knowledge Transfer in Management Support System Implementation

Knowledge transfer during management support system implementations is one of the key elements of project success. The scope of this article is to present the results of research on the methods of completing knowledge transfer in specific implementation phases in selected groups of IT projects involving the implementation of ERP, CRM, BI and DMS class IT systems. The results may be interesting for researchers specialising in the subject of IT project implementation and for practitioners completing IT projects.

Bartosz Wachnik
Collective Intelligence Utilization Method Based on Implicit Social Network Composition and Evolution in the Scope of Personal Learning Environment

Personal Learning Environment (PLE) is an emerging concept in the learning technology field. A PLE allows users to aggregate content from distributed Web 2.0 services in one place and arrange it in a way that is convenient for the learner. Although a PLE operates with social software and is a type of social media, its social networking component is used very poorly. A new model of "networked knowledge" utilization in the scope of a PLE is presented in this paper. The exclusive feature of this model is the analysis of learners' aggregated and generated data, with digital identity development based on both sources. Another exclusive feature is the constant updating of the digital identity, depending on the learner's constantly evolving implicit network. Such evolution ensures a continuous implicit network update along with changing learners' interests.

Genadijus Kulvietis, Andrej Afonin, Danguole Rutkauskiene
Automation of Upgrade Process for Enterprise Resource Planning Systems

This paper presents a framework for a semi-automatic enterprise resource planning (ERP) system upgrade process. We suggest changing the currently accepted practice of a manual upgrade process, in which a domain expert-programmer works through all localizations and transforms them manually to the new version of the ERP system. The core idea of this framework is to induce software code transformation patterns from completed upgrade projects and then to refine these patterns using the knowledge of an ERP upgrade expert. These patterns let us increase the productivity of the upgrade process by improving automatic code alignment and annotation and by providing code transformation to the new version of the ERP system. The price for these improvements is the requirement for the upgrade expert to move from a traditional 4GL ERP programming language to a stochastic meta-programming language, which is used to describe code alignment and code transformation patterns.

Algirdas Laukaitis
Business Process Flow Verification Using Knowledge Based System

The analysis of business process flows presented in this paper comprises three main activities: representation of business flows by AND/OR graphs, their transformation to Prolog clauses, and verification in a Prolog environment using a created knowledge-based system in which deadlock and endless-loop properties are defined. An ordering process verification example is used to illustrate the proposed approach.

Regina Miseviciene, Germanas Budnikas, Dalius Makackas
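
A rough illustration of the verification idea, deadlock and endless-loop detection over a process flow graph, can be given in plain Python instead of Prolog (a hypothetical sketch; the paper works with AND/OR graphs and Prolog clauses, while this simplification uses an ordinary directed graph):

```python
def reachable(graph, start):
    """All nodes reachable from start by depth-first search."""
    seen, stack = set(), [start]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(graph.get(n, []))
    return seen

def deadlocks(graph, start, finals):
    """Reachable non-final nodes with no outgoing transition."""
    return {n for n in reachable(graph, start)
            if n not in finals and not graph.get(n)}

def endless_loops(graph, start, finals):
    """Reachable nodes from which no final node can be reached."""
    return {n for n in reachable(graph, start)
            if not (reachable(graph, n) & finals)}
```

For an ordering process, a "stuck" activity with no outgoing flow is reported as a deadlock, and a cycle with no path to completion as an endless loop.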
Web-Based Analytical Information System for Spatial Data Processing

The creation of an analytical information system for spatial data processing is considered. Currently, research organisations, government agencies and local (municipal) governments have accumulated, and continue to update and use, large amounts of scientific spatial data. As a rule, these data are presented in different formats that do not allow effective processing. We propose the development of an analytical information system for spatial data processing, represented as a special web-based resource, a GeoPortal. This kind of analytical information resource helps ensure interoperability between different actors in information exchange and improves the quality of research.

Viacheslav Paramonov, Roman Fedorov, Gennagy Ruzhnikov, Alexandr Shumilov
System Architecture Model Based on Service-Oriented Architecture Technology

This paper involves research into the tools that have a positive effect on the quality, effectiveness and value of eLearning. Due to the complexity of modern software systems, including web portals and networks, it is becoming more and more difficult to develop software systems suitable for their intended usage. To tackle this problem, we can develop an integrated system for curriculum planning and delivery by using new technologies with a range of individualized constructive learning strategies and social skills acquired through constant communication, active sharing of knowledge and experience, joint activities in various groups, teamwork and training (learning) environments and social networks, with development and evaluation of the work performance.

Tarkan Gurbuz, Daina Gudoniene, Danguole Rutkauskiene
Towards the Combination of BPMN Process Models with SBVR Business Vocabularies and Rules

Combination capabilities of BPMN and SBVR are analyzed in this paper. In order to combine these two standards, current proposals have to be analyzed. Process modeling focuses on the visualization of a process with a specific notation. However, today there is a need to keep business rules separate from the process in order to reduce the size of the process model and avoid misunderstandings and miscommunication between analysts and domain experts or between organizations. Therefore, there is a need to combine business process management and business rule management in one user-friendly environment.

Eglė Mickevičiūtė, Rimantas Butleris
Process for Applying Derived Property Based Traceability Framework in Software and Systems Development Life Cycle

For implementing the idea of applying derived properties for tracing project artifacts, the Derived Property Based Traceability Framework was created. It consists of a Model-Driven Domain Specific Language (DSL) engine for extending UML with derived property specifications, traceability schemas, and traceability analysis means. Traceability schemas may be generic and suitable for every purpose, but they are often characteristic of a development method, a modeling language or a particular project. The paper presents a process for applying the Derived Property Based Traceability Framework consisting of three parts: a process for adapting the Derived Property Based Traceability solution for a development method or Domain Specific Language; a process for applying the solution in a development process; and a process for automating the maintenance of traceability relations. The process is illustrated with examples from several case studies.

Saulius Pavalkis, Lina Nemuraite
Developing SBVR Vocabularies and Business Rules from OWL2 Ontologies

Semantics of Business Vocabulary and Business Rules (SBVR) is an OMG-adopted metamodel that allows defining noun concepts, verb concepts and business rules of a problem domain in structured natural language based on formal logics. SBVR business vocabularies and business rules are capable of representing ontologies. There are some research works devoted to transforming SBVR into the Web Ontology Language OWL2. The reverse direction, representing ontology concepts with the SBVR structured language, has not been investigated, although there are far more ontologies than SBVR vocabularies. Our research concentrates on a methodology for creating SBVR vocabularies and rules from OWL2 ontologies without losing the expressive power characteristic of ontologies, as some ontology-specific concepts have no direct representation in SBVR. Particular attention is devoted to applying SBVR vocabularies in semantic search.

Gintare Bernotaityte, Lina Nemuraite, Rita Butkiene, Bronius Paradauskas
Exploring Key Factors of Pilot Projects in Agile Transformation Process Using a Grounded Theory Study

Changing the development approach from disciplined to agile methods is an organizational mutation that requires many issues to be considered to increase its chance of success. Selecting an appropriate pilot project as the initial project to be carried out with an Agile method is a critical task. Due to the impact of such a pilot project on successful Agile transformation, understanding its critical factors helps organizations choose the most suitable project to start the Agile transition. A Grounded Theory study showed that organizations should consider several key factors of a pilot: criticality, duration, size and required resources. Besides these factors, the results showed that organizations should be aware of the risk that a successful pilot project poses for their subsequent Agile projects. The study also showed that pilot selection is mostly done by Agile coaches or is forced by the customer.

Taghi Javdani Gandomani, Hazura Zulzalil, Abdul Azim Abd Ghani, Abu Bakar Md. Sultan, Khaironi Yatim Sharif
Incompleteness in Conceptual Data Modelling

Although conceptual data modelers can "get creative" when designing entities and relationships to meet business requirements, they are highly constrained by the business rules which determine the details of how the entities and relationships combine. Typically, there is a delay in realising which business rules might be relevant and a further delay in obtaining an authoritative statement of these rules. We identify circumstances under which viable database designs can be constructed from conceptual data models which are incomplete in the sense that they lack this "infrastructural" detail normally obtained from the business rules. As such detail becomes available, our approach allows the conceptual model to be incrementally refined so that each refinement can be associated with standard database refactorings, minimising the impact on database operations. Our incremental approach facilitates the implementation of the database earlier in the development cycle.

Peter Thanisch, Tapio Niemi, Jyrki Nummenmaa, Zheying Zhang, Marko Niinimäki, Pertti Saariluoma
Semi-supervised Learning of Action Ontology from Domain-Specific Corpora

The paper presents research results showing how unsupervised and supervised ontology learning methods can be combined in an action ontology building approach. A framework for action ontology building from domain-specific corpus texts is suggested, using different natural language processing techniques, such as collocation extraction, frequency lists, the word space model, etc. The suggested framework employs the additional knowledge sources WordNet and VerbNet, with structured linguistic and semantic information. Results from experiments with crawled chemical laboratory corpus texts are given.

Irena Markievicz, Daiva Vitkute-Adzgauskiene, Minija Tamosiunaite

Business Intelligence for Information and Software Systems

Speech Keyword Spotting with Rule Based Segmentation

Speech keyword spotting is the retrieval of all instances of a given keyword in utterances. This paper presents an improved template-based keyword spotting algorithm. It addresses speaker-dependent speech segment detection in continuous speech with a small vocabulary. The rule-based segmentation algorithm allows the extraction of quasi-syllables. We evaluated the algorithm in experiments with synthetic signals, where it outperforms the classical keyword spotting algorithm on the experimental data.

Mindaugas Greibus, Laimutis Telksnys
Business Intelligence Maturity Models: Information Management Perspective

While Business Intelligence (BI) plays a critical role for businesses in terms of organizational development and creating competitive advantages, many BI projects fail to fully deliver the features and benefits that could help organizations in their decision-making. Rather than depending on software alone, BI success relies on the capability to sense appropriate information and on the collection, extraction, organization, analysis, and retention of information, given the large volume of information that exists.

Therefore, this paper presents a comprehensive review of existing BI maturity models and elaborates their methodical and conceptual characteristics to determine their gaps in addressing the information life-cycle concept in terms of sensing, collecting, organizing, processing, and maintaining activities. As a result, a conceptual framework is proposed from the literature analysis. The intention is to build a BI maturity model that can be used to increase the success of BI implementation by basing it on Information Management Practice (IMP), which is a model built on the information life-cycle concept.

Alaskar Thamir, Babis Theodoulidis
Modified Stochastic Algorithm for Mining Frequent Subsequences

The task of market basket analysis is one of the oldest areas of data mining, but it remains very relevant in today's market. Supermarkets have enormous amounts of data about purchases, and it is always important to know what items the market basket contains, how it fluctuates, whether it depends on a particular season, etc. Various data mining methods and algorithms are applied to solve these tasks; one of them is the discovery of association rules. The article introduces a modified stochastic algorithm for mining frequent subsequences, together with computer modeling results and conclusions. The essence of the modified stochastic algorithm is to quickly discover frequent subsequences based on the 1-element subsequences discovered by the Apriori algorithm. The algorithm scans the database once and discovers frequent subsequences and association rules. The confidence of the algorithm is estimated by applying probability-statistical methods.

Loreta Savulioniene, Leonidas Sakalauskas
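
The Apriori-style starting point of the algorithm, finding frequent 1-element subsequences and extending them in a single additional scan, can be sketched as follows (an illustrative simplification, not the authors' stochastic algorithm):

```python
from collections import Counter

def frequent_items(sequences, min_support):
    """Frequent 1-element subsequences (the Apriori first pass)."""
    counts = Counter()
    for seq in sequences:
        counts.update(set(seq))          # count each item once per sequence
    return {i for i, c in counts.items() if c >= min_support}

def frequent_pairs(sequences, min_support):
    """Extend frequent items to ordered 2-element subsequences in one scan."""
    items = frequent_items(sequences, min_support)
    counts = Counter()
    for seq in sequences:
        seen = set()                     # count each pair once per sequence
        for i, a in enumerate(seq):
            if a not in items:
                continue
            for b in seq[i + 1:]:
                if b in items and (a, b) not in seen:
                    seen.add((a, b))
        counts.update(seen)
    return {p for p, c in counts.items() if c >= min_support}
```

Only pairs built from already-frequent items are counted, which is the Apriori pruning idea the modified algorithm builds on.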
On Two Approaches to Constructing Optimal Algorithms for Multi-objective Optimization

Multi-objective optimization problems with expensive, black-box objectives are difficult to tackle. For such problems in the single-objective case, algorithms that are in some sense optimal have proved well suited. Two concepts of optimality substantiate the construction of algorithms: worst-case optimality and average-case optimality. In the present paper the extension of these concepts to multi-objective optimization is discussed. Two algorithms representing both concepts are implemented and experimentally compared.

Antanas Žilinskas
Recognition of Voice Commands Using Hybrid Approach

Computerized systems with voice user interfaces could save time and ease the work of healthcare practitioners. To achieve this goal, a voice user interface should be reliable (recognize commands with sufficiently high accuracy) and properly designed (convenient for the user). The paper deals with implementation issues of a hybrid approach to voice command recognition. By the hybrid approach we mean the combination of several different recognition methods to achieve higher recognition accuracy. The experimental results show that most voice commands are recognized well enough, but there is a set of voice commands whose recognition is more complicated. In this paper a novel method is proposed for combining several recognition methods based on the Ripper algorithm. Experimental evaluation showed that this method achieves higher recognition accuracy than the application of a blind combination rule.

Vytautas Rudžionis, Kastytis Ratkevičius, Algimantas Rudžionis, Gailius Raškinis, Rytis Maskeliunas
Estimation of the Environmental Impact on the Accuracy of Signal Recognition

The problem of adapting a random-signal recognition system to variable environmental conditions is discussed. A constructive method is presented that demonstrates the possibility of creating recognition systems able to adapt to changing working conditions. The efficiency of the method is demonstrated by experiments analyzing the recognition of random signals in environments with different characteristics. The results demonstrate that the suggested method can also be useful in the development of speech recognition devices operating in various environments.

Gintarė Čeidaitė, Laimutis Telksnys

Software Engineering

Automated Method for Software Integration Testing Based on UML Behavioral Models

Nowadays, testing is often considered more important than the code itself. Therefore, in order to test large and complex systems, test automation methods are needed that help evaluate whether the software is working properly. The main goal of the research is to improve the effectiveness of integration testing by creating an automated method based on UML behavioral models. Test input data generation using symbolic execution was applied; it gave full structural coverage, which increased the quality of integration testing. The testing method allowed automating the testing process and increased the effectiveness of tests in comparison with other methods. Experiments showed that 96% of all mutants were successfully detected, and automated test data generation based on symbolic execution increased mutant detection by 6-19% in comparison to other model-based testing methods.

Dominykas Barisas, Eduardas Bareiša, Šarūnas Packevičius
Computational Algorithmic Generation of High-Quality Colour Patterns

The purpose of this paper is to describe the computational algorithmic generation of high-quality colour patterns (digital halftones). At the beginning, the formal model for the generation of digital halftones, the so-called grey pattern problem (GPP), is introduced. Then, a heuristic algorithm for the solution of the grey pattern problem is discussed. Although the algorithm employed does not guarantee the optimality of the solutions found, near-optimal (and in some cases probably optimal) solutions of excellent quality can still be achieved within reasonable computation time. Further, we provide preliminary results of extensive computational experiments with an extra-large instance (data set) of the GPP. As a confirmation of the quality of the analytical solutions produced, we also give visual representations of fine-looking graphic halftone patterns.

Alfonsas Misevičius, Evaldas Guogis, Evelina Stanevičienė
Design of Visual Language Syntax for Robot Programming Domain

The paper discusses the development of the visual language syntax based on the application of sound methodological principles, a visual communication model, a visual syntax model, a formal description of syntax based on visual grammar metalanguage (an extension of BNF) and ontology of visual signs (graphemes). The syntax of an illustrative visual language VisuRobo for the mobile robot programming domain is presented.

Ignas Plauska, Robertas Damaševičius
Testing Stochastic Systems Using MoVoS Tool: Case Studies

The MoVoS tool is an implementation of a testing theory based on stochastic refusal graphs. It allows the automatic extraction of test cases from specifications of stochastic systems. These systems are modeled by Maximality-based Labeled Stochastic Transition Systems (MLSTS).

In this paper, we present the application of the MoVoS tool to two case studies in order to validate it. These case studies allow us to illustrate the functionality of the tool and to show that it can deal with large systems.

Kenza Bouaroudj, Djamel-Eddine Saidouni, Ilham Kitouni
Two Scale Modeling of Heterogeneous Solid Body by Use of Thick Shell Finite Elements

The evaluation of elasticity parameters for a homogenized material is considered, given that the parameters of the constituent materials are known at the micro scale. The thick shell formulation for a homogeneous orthotropic material is discussed, and the total Lagrangian formulation for the 4-node thick shell element in implicit and explicit analysis is described. The results of the thick shell model are compared with the results of a 3D model and an LS-Dyna shell model under the same loading.

Dalia Čalnerytė, Rimantas Barauskas
Development in Authentication of AODV Protocols to Resist the Attacks

Mobile ad hoc networks (MANETs) are wireless networks that are self-configuring and self-maintaining, characterized by a dynamic topology. This work discusses the existing approaches that intend to create a defense against various attacks at different layers. Routing is a very important function in MANETs. Making routing protocols efficient often increases the security risk of the protocol and allows a single node to significantly impact the operation of the protocol because of the lack of protocol redundancy. Our scheme focuses on authentication between nodes, and we apply it to an on-demand routing protocol, the Ad hoc On-Demand Distance Vector (AODV) protocol. The scheme depends on a hash function, hash lock, random number generation, a secret value and a time stamp. It is used to provide a secure and authenticated environment between the nodes in a mobile ad hoc network.

Ahmad Alomari
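
A generic sketch of authentication built from the ingredients listed above, a keyed hash, a random nonce and a time stamp, might look as follows (a hypothetical illustration using Python's standard library, not the paper's exact scheme):

```python
import hashlib
import hmac
import os
import time

def make_auth_token(secret, node_id, now=None):
    """Build a token: random nonce, time stamp, keyed hash over both."""
    nonce = os.urandom(16)
    ts = str(int(now if now is not None else time.time())).encode()
    mac = hmac.new(secret, nonce + ts + node_id, hashlib.sha256).digest()
    return nonce, ts, mac

def verify_auth_token(secret, node_id, nonce, ts, mac, now=None, max_age=30):
    """Accept only fresh, correctly keyed tokens (replay window max_age s)."""
    current = now if now is not None else time.time()
    if abs(current - int(ts)) > max_age:
        return False                      # stale time stamp: possible replay
    expected = hmac.new(secret, nonce + ts + node_id, hashlib.sha256).digest()
    return hmac.compare_digest(expected, mac)
```

The time-stamp check limits replay of captured route messages, while the shared secret keeps a single malicious node from forging tokens for others.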
Evaluation of Open Source Server-Side XSS Protection Solutions

Web protection against XSS attacks is an indispensable tool for implementing reliable online systems. XSS attacks can be used for various malicious actions and for stealing important information. Protection may be implemented both on the user's computer and on the server side. In this work we have analyzed server-side protection solutions. These solutions must ensure an appropriate level of security and at the same time should not considerably increase page response time. The aim of this paper is to determine the most effective and safe free tools for protecting against XSS attacks web pages created using PHP, ASP.NET and Java technologies.

Jonas Ceponis, Lina Ceponiene, Algimantas Venckauskas, Dainius Mockus
Minimization of Numerical Dispersion Errors in Finite Element Models of Non-homogeneous Waveguides

The paper presents an approach for the reduction of numerical errors, which are inherent in simulations based on wave propagation models in discrete meshes. Discrete computational models always tend to generate errors in harmonic wave propagation velocities in higher frequency ranges, which can be treated as numerically induced errors of dispersion curves. The result of these errors is the deterioration of the shapes of simulated waves as the simulation time increases. The presented approach is based on improving the element matrices of the finite element model by correcting the modal frequencies and modal shapes of an individual element. The approach, developed by the authors earlier and proved to work in the case of a uniform waveguide, has now been demonstrated to be valid for simulations of waves in networks of waveguides. Non-reflecting boundary conditions can be implemented by combining synthesized and lumped-mass elements in the same model. Propagating wave pulses can be satisfactorily simulated in comparatively rough meshes, where only 6-7 finite elements per wavelength are used.

Andrius Krisciunas, Rimantas Barauskas
Novel Method to Generate Tests for VHDL

Verification is the most crucial part of the chip design process. Test benches, which are used to test VHDL code, need to perform efficiently and effectively. We present an algorithm that achieves high code coverage by analyzing the finite state machine (FSM) and control flow graph (CFG) that are constructed from the source code. The symbolic execution of VHDL (VHSIC Hardware Description Language) code is used as well. These three elements are combined into a framework (TestBenchGen) written in the Python programming language and evaluated against the ITC'99 benchmark suite.

Vacius Jusas, Tomas Neverdauskas
Empirical Analysis of the Test Maturity Model Integration (TMMi)

Testing is an essential part of the software development lifecycle for gaining trust in the quality of the delivered product. Concerns have been raised over the maturity of existing test processes and techniques, and the desire to improve has been acknowledged. Even though test process improvement models are available on the market, the guidelines on how to use them are unsatisfactory. This paper describes an empirical analysis of Test Maturity Model integration (TMMi) through a single-object case study of test process assessment and improvement planning conducted in Playtech Estonia's Casino unit. An evaluation of the performance of TMMi is presented, also raising some concerns over its applicability in agile environments. Improvement possibilities for TMMi are described, which contribute to the potential enhancement of the framework.

Kerli Rungi, Raimundas Matulevičius
Behavior Analysis of Real-Time Systems Using PLA Method

The paper deals with the behavior analysis task for a real-time system specified by the PLA method. An algorithm for creating a reachable-state graph is used to solve the task. The algorithm evaluates the intervals of time in which the defined system events occur. An approach based on this algorithm for reachable-state graph generation is presented within the paper. The suggested approach is illustrated by an example.

Dalius Makackas, Regina Miseviciene, Henrikas Pranevicius
Measuring the Performance of Process Synchronization with the Model-Driven Approach

Concurrent and parallel computing can be used to effectively speed up computations. However, the overall performance gain depends on how concurrency has been introduced to the program. Considering process synchronization one of the most important aspects of concurrent programming, this paper describes the possible application of the model-driven approach in the concept of a software framework for measuring the performance of process synchronization algorithms. Such a framework could help determine and analyze the features of specific synchronization techniques in different cases, depending on different input parameters.

Vladislav Nazaruk, Pavel Rusakov
Backmatter
Metadata
Title
Information and Software Technologies
Edited by
Tomas Skersys
Rimantas Butleris
Rita Butkiene
Copyright Year
2013
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-41947-8
Print ISBN
978-3-642-41946-1
DOI
https://doi.org/10.1007/978-3-642-41947-8