
About this book

This three-volume set presents advances in the development of concepts and techniques in the area of new technologies and contemporary information system architectures. It guides readers through solving specific research and analytical problems to obtain useful knowledge and business value from the data. Each chapter provides an analysis of a specific technical problem, followed by numerical analysis, simulation and implementation of the solution to the problem. The books constitute the refereed proceedings of the 38th International Conference "Information Systems Architecture and Technology" (ISAT 2017), held on September 17–19, 2017 in Szklarska Poręba, Poland. The conference was organized by the Computer Science and Management Systems Departments, Faculty of Computer Science and Management, Wroclaw University of Technology, Poland. The papers have been organized into topical parts. Part I covers topics including, but not limited to, Artificial Intelligence Methods, Knowledge Discovery and Data Mining, Big Data, Knowledge Based Management, Internet of Things, Cloud Computing and High Performance Computing, Distributed Computer Systems, Content Delivery Networks, and Service Oriented Computing. Part II addresses topics including, but not limited to, System Modelling for Control, Recognition and Decision Support, Mathematical Modelling in Computer System Design, Service Oriented Systems and Cloud Computing, and Complex Process Modeling. Part III deals with topics including, but not limited to, Modeling of Manufacturing Processes, Modeling an Investment Decision Process, Management of Innovation, and Management of Organization.

Table of contents

Frontmatter

Artificial Intelligence Methods

Frontmatter

A Deep Learning Approach for Valve Defect Recognition in Heart Acoustic Signal

The analysis of the phonocardiogram (PCG), although well established in clinical practice, still constitutes a valuable source of diagnostic data. Currently, electronic auscultation provides digital signals which can be processed in order to automatically evaluate the condition of the heart or lungs. In this paper, we propose a novel approach to the classification of phonocardiographic signals. We extracted a set of time-frequency parameters which make it possible to effectively differentiate between normal and abnormal heart beats (with valve defects). These features constituted the input of a convolutional neural network, which we used for the classification of pathological signals. The Aalborg University heart sounds database from the PhysioNet/Computing in Cardiology Challenge 2016 was used for verification of the developed algorithms. We obtained 99.1% sensitivity and 91.6% specificity on the test data, which motivates further research.

Dariusz Kucharski, Dominik Grochala, Marcin Kajor, Eliasz Kańtoch

Population-Based Algorithm with Selectable Evolutionary Operators for Nonlinear Modeling

In this paper a new population-based algorithm for nonlinear modeling is proposed. Its advantage is the automatic selection of evolutionary operators and their parameters for individuals in the population. In this approach evolutionary operators are selected from a large set of operators; however, only the solutions that use a low number of operators are promoted in the population. Moreover, assigned operators can be changed during the evolution of the population. Such an approach: (a) eliminates the need for determining the detailed mechanism of the population-based algorithm, and (b) reduces the complexity of the algorithm. Typical nonlinear modeling benchmarks are used for the simulations.

Krystian Łapa

Evaluation of Gated Recurrent Neural Networks in Music Classification Tasks

In this paper, we evaluate two popular Recurrent Neural Network (RNN) architectures employing the mechanism of gating, Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU), in music classification tasks. We examine their performance on four datasets concerning genre, emotion and dance style recognition. Our key result is a significant improvement of classification accuracy achieved by training the recurrent network on random short subsequences of the vector sequences in the training set. We examine the effect of this training approach on both architectures and discuss the implications for the potential use of RNNs in music information retrieval.
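
The key training trick described above, drawing random short windows instead of full feature-vector sequences, can be sketched as follows; the function name and toy data are illustrative, not taken from the paper:

```python
import random

def sample_subsequences(sequences, length, n, seed=0):
    """Draw n random fixed-length windows from a list of
    variable-length feature-vector sequences."""
    rng = random.Random(seed)
    windows = []
    for _ in range(n):
        seq = rng.choice(sequences)
        # fall back to the whole sequence if it is shorter than the window
        if len(seq) <= length:
            windows.append(seq)
            continue
        start = rng.randrange(len(seq) - length + 1)
        windows.append(seq[start:start + length])
    return windows

# toy "songs": sequences of scalar frame features
songs = [list(range(100)), list(range(50))]
batch = sample_subsequences(songs, length=10, n=4)
```

Each training step then feeds such a mini-batch of windows to the RNN instead of whole songs, which acts as a form of data augmentation.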

Jan Jakubik

Neuro-Fuzzy System for Medium-Term Electric Energy Demand Forecasting

Medium-term electric energy demand forecasting plays an important role in power system planning and operation as well as in negotiating forward contracts. This paper proposes a solution to medium-term energy demand forecasting that covers the definition of input and output variables and a forecasting model based on a neuro-fuzzy system. Patterns of the yearly periods of the time series are defined as predictors, which unify input data and filter out the trend. The output variable is encoded in three ways using coding variables describing the process. For the prediction of the coding variables, which are necessary for postprocessing, ARIMA and exponential smoothing models are applied. The simplified relationship between the preprocessed input and output variables is modeled using an Adaptive-Network-Based Fuzzy Inference System. As an illustration, we apply the proposed time series forecasting methodology to historical monthly energy demand data in four European countries and compare its performance to that of alternative models such as ARIMA, exponential smoothing and kernel regression. The results are encouraging and confirm the high accuracy of the model and its competitiveness with other forecasting models.
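
As a rough illustration of the pattern preprocessing idea, unifying yearly periods and filtering out the trend, one plausible normalization maps each year's 12 monthly demands onto a zero-mean, unit-norm pattern (the exact pattern definition used by the authors may differ):

```python
from statistics import mean

def yearly_pattern(months):
    """Map 12 monthly demands onto a trend-free pattern by
    removing the yearly mean and scaling by the yearly spread."""
    m = mean(months)
    spread = sum((v - m) ** 2 for v in months) ** 0.5
    return [(v - m) / spread for v in months]

year_a = [100 + v for v in range(12)]        # low-demand year
year_b = [200 + 2 * v for v in range(12)]    # higher level, steeper trend
pa, pb = yearly_pattern(year_a), yearly_pattern(year_b)
```

Both toy years map to the same pattern despite their different demand levels and trend slopes, which is precisely the unification the preprocessing aims at.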

Paweł Pełka, Grzegorz Dudek

An Evolutionary Optimization Method Based on Scalarization for Multi-objective Problems

In this paper, we perform computational experiments on a new global scalarization method for multi-objective optimization problems. Its main idea is to construct, for a given multi-objective optimization problem, a global scalarization function whose values are non-negative real numbers. The points where the scalarization function attains the value zero are exactly the weak Pareto stationary points of the original multi-objective problem. We apply two different evolutionary algorithms to minimize the scalarization function; both of them are designed for solving scalar optimization problems. The first one is the classical Genetic Algorithm (GA). The second one is a new algorithm called Dissimilarity and Similarity of Chromosomes (DSC), which has been designed by the authors. The computational results presented in this paper show that the DSC algorithm can find more minimizers of the scalarization function than the classical GA.
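
The authors' scalarization function is not reproduced here, but the general idea of replacing a multi-objective problem by a single scalar objective can be illustrated with a standard weighted Chebyshev achievement function, minimized below by a crude grid search standing in for the GA/DSC evolutionary search:

```python
def chebyshev(fs, x, z, w):
    """Weighted Chebyshev achievement function over objectives fs,
    with reference point z and weights w (a standard scalarization,
    used here only to illustrate the scalar reduction idea)."""
    return max(wi * (f(x) - zi) for f, zi, wi in zip(fs, z, w))

# two convex objectives on the real line; the Pareto set is [0, 1]
f1 = lambda x: x ** 2
f2 = lambda x: (x - 1) ** 2
fs, z, w = [f1, f2], [0.0, 0.0], [1.0, 1.0]

# grid minimization in place of an evolutionary algorithm
xs = [i / 1000 for i in range(-500, 1501)]
best = min(xs, key=lambda x: chebyshev(fs, x, z, w))
```

With equal weights the minimizer lands at x = 0.5, a Pareto-optimal compromise between the two objectives.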

Marcin Studniarski, Radhwan Al-Jawadi, Aisha Younus

Measuring Cognitive Workload in Arithmetic Tasks Based on Response Time and EEG Features

The aim of the present paper is to verify whether cognitive load can be evaluated through the analysis of the examined person's response time and extracted EEG signal features. The research was based on an experiment consisting of six intervals providing various cognitive load levels of arithmetic tasks. The paper describes the analysis process in detail, including signal pre-processing with artifact correction, feature extraction and outlier detection. Statistical verification of EEG band differences, response time and error rate across intervals was realised. Statistical correlations were found between EEG features and response time; moreover, the correlation strength increased inside groups of intervals of similar cognitive workload level. Event-related potentials were also analysed and their results confirmed the statistical outcomes.
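
The core statistical step, correlating response time with an extracted EEG band feature, reduces to a sample Pearson correlation; the per-trial numbers below are hypothetical:

```python
from statistics import mean

def pearson(xs, ys):
    """Sample Pearson correlation between two equally long feature lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# hypothetical per-trial values: a band-power feature vs. response time (s)
band_power = [1.0, 1.4, 1.1, 1.9, 2.2]
resp_time  = [0.61, 0.75, 0.66, 0.94, 1.02]
r = pearson(band_power, resp_time)
```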

Małgorzata Plechawska-Wójcik, Magdalena Borys, Mikhail Tokovarov, Monika Kaczorowska

A Method for Genetic Selection of the Dynamic Signature Global Features’ Subset

The dynamic signature is a biometric attribute which can be used to perform identity verification. It consists of waveforms describing the dynamics of the signing process. The waveforms are acquired using a digital input device, e.g. a graphic tablet or touchscreen. During the verification process the signature is usually represented by descriptors, which can be so-called global features. In this paper, we propose a new genetic approach to select a specified number of the most characteristic global features for each signer, which are used in the identity verification process. The proposed method was tested using the known dynamic signature database MCYT-100.

Marcin Zalasiński, Krzysztof Cpałka

Knowledge Discovery and Data Mining

Frontmatter

Multivariate Regression Tree for Pattern-Based Forecasting Time Series with Multiple Seasonal Cycles

Multivariate regression tree methodology is used for forecasting time series with multiple seasonal cycles. Unlike typical regression trees, which generate only one output, the multivariate approach generates many outputs at the same time, which represent the forecasts for subsequent time-points. In the proposed approach a time series is represented by patterns of seasonal cycles, which simplifies the forecasting problem and allows the forecasting model to capture multiple seasonal cycles, trend and nonstationarity. In the application example the proposed model is applied to forecasting the electrical load of a power system. Its performance is compared with alternative models such as CART, ARIMA and exponential smoothing. The application examples confirm the good properties of the model and its high accuracy.

Grzegorz Dudek

Active Protocol Discoverer Based on Grammatical Evolution

The paper presents a proposal for a protocol discovery system (Protocol Discoverer) developed on the basis of Grammatical Evolution techniques. Unlike numerous other solutions based solely on observing messages between participants of a conversation, our Protocol Discoverer is an active participant which generates messages and sends them to the system whose protocol is to be identified. This solution allows not only for identifying typical behaviors of participants within a protocol, but also for finding anomalous behaviors (ones which normally do not occur between participants using a defined protocol). In order to generate the description of a protocol in the form of a context-free grammar, the solution presented in the article is based on an evolutionary approach using Grammatical Evolution by Grammatical Evolution. A universal grammar is the basis for creating evolutionary solution grammars which describe particular request-response pairs appearing in the protocol.

Dariusz Pałka, Marek Zachara, Krzysztof Wójcik

Speaker Diarization Using Deep Recurrent Convolutional Neural Networks for Speaker Embeddings

In this paper we propose a new method of speaker diarization that employs a deep learning architecture to learn speaker embeddings. In contrast to traditional approaches that build their speaker embeddings using manually hand-crafted spectral features, we propose to train for this purpose a recurrent convolutional neural network applied directly to magnitude spectrograms. To compare our approach with the state of the art, we collect and release to the public an additional dataset of over 6 h of fully annotated broadcast material. The results of our evaluation on the new dataset and three other benchmark datasets show that our proposed method significantly outperforms the competitors and reduces the diarization error rate by a large margin of over 30% with respect to the baseline.

Pawel Cyrta, Tomasz Trzciński, Wojciech Stokowiec

Classification Tree for Material Defect Detection Using Active Thermography

Active thermography is a highly efficient and powerful technique that enables us to detect subsurface defects by heating the investigated material sample and recording the thermal response using an infrared camera. In this work a simple variant of the time-resolved infrared radiometry method was used. The study was conducted for a sample made of a low thermal diffusivity material with artificially produced aerial defects. As a result of the experiment, a sequence of thermograms was obtained. Heating and cooling curves for each thermogram pixel were determined and treated as patterns describing local features of the material. These patterns are recognized by a classification tree and classified into two categories: “defect” or “non-defect”. The advantages of the classification tree are automatic feature selection and a strong reduction of the pattern dimensionality. On the basis of the simulation study, it can be concluded that the classification tree is a useful tool for the characterisation and detection of material defects.

Grzegorz Dudek, Sebastian Dudzik

Interactive Visualization of Query Results Set from Information Retrieval Using Concept Lattices

This paper is devoted to the interactive visualization of search results obtained by a search engine using concept lattices. We provide a tool in which the whole process is realized, from the query input through the creation of the concept lattice to its visualization. The concept lattices are created using Formal Concept Analysis, which hierarchically organizes the results in the form of clusters of objects composed of documents with shared attributes. The resulting concept lattice is able to provide the user with a structured view of the query domain. This work uses the Generalized One-Sided Concept Lattice (GOSCL) for building a hierarchy of concepts. This model is able to create a concept lattice from input data tables that contain different types of attributes representing fuzzy sets. The concept lattice is then shown as an interactive graph to which reductions can be applied, increasing the clarity of the output visualization.

Peter Butka, Miroslav Smatana, Veronika Novotná

PID-Fuzzy Controllers with Dynamic Structure and Evolutionary Method for Their Construction

Controllers are an interesting type of information system. Their structure and parameters for atypical applications are usually selected by trial and error, which can be time-consuming. In this paper a controller structure, which is an ensemble of a PID controller and a fuzzy system, and an automatic evolutionary method for its construction are presented. A significant feature of the proposed method is that controller elements which increase its complexity but do not improve the controller precision, in the sense of the adopted evaluation function, can be reduced. Moreover, this method allows the use of knowledge about the controlled object. A typical control problem has been used to test the authors' approach.
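
For reference, the PID part of such an ensemble is a simple discrete controller; the fuzzy component and the evolutionary construction are beyond this sketch, and the gains below are arbitrary illustrative values:

```python
class PID:
    """Minimal discrete PID controller (positional form)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# drive a first-order plant y' = -y + u toward setpoint 1.0 (Euler steps)
pid, y, dt = PID(kp=2.0, ki=1.0, kd=0.05, dt=0.01), 0.0, 0.01
for _ in range(2000):
    u = pid.step(1.0, y)
    y += dt * (-y + u)
```

The integral term removes the steady-state error, so after 20 simulated seconds the plant output settles at the setpoint.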

Krystian Łapa, Krzysztof Cpałka

Community Detection in Bibsonomy Using Data Clustering

Community detection aims to extract related groups of nodes from complex networks by exploiting the network topology. Different approaches have been proposed for community detection, most of them based on clustering algorithms. In this paper we investigate how clustering can be used for community detection in the academic social bookmarking website Bibsonomy. Our goal is to determine the most suitable clustering algorithm for similar-user detection in Bibsonomy. To this end, we have compared three clustering algorithms: k-means, k-medoids and agglomerative clustering. Experimental results demonstrate that k-means performs better than the other algorithms for community detection in Bibsonomy.
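
A minimal version of the k-means baseline used in such comparisons, run on 2-D toy "user" points (the actual study clusters Bibsonomy users by their similarity features, not 2-D coordinates):

```python
import random

def kmeans(points, k, iters=20, seed=1):
    """Plain k-means on 2-D points; returns the final centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centroid (squared distance)
            i = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2
                                + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # keep the old centroid if its cluster emptied
                centroids[i] = (sum(p[0] for p in cl) / len(cl),
                                sum(p[1] for p in cl) / len(cl))
    return centroids

users = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),
         (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
cents = kmeans(users, k=2)
```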

Zakaria Saoud, Jan Platoš

Big Data, Knowledge Discovery and Data Mining, Knowledge Based Management

Frontmatter

Using Predictive Data Mining Models for Data Analysis in a Logistics Company

The aim of this paper is to apply predictive data mining (DM) techniques in order to predict the average fuel consumption for trucks and drivers respectively, to identify the key factors that affect the fuel consumption of vehicles, and to identify best practices and driving styles of drivers. For this purpose different models have been proposed to provide an overview of the key factors affecting fuel consumption for individual vehicles and their drivers. The predictive models enabled us to identify the main influencing factors and provide recommendations for a logistics company to reduce fuel consumption. Data were collected from the Dynafleet information system of a small transport company. The company deals with freight traffic, particularly trucks. We first describe selected projects dealing with similar tasks in this area. Next, we explore and analyze the data using the CRISP-DM methodology with appropriate data mining methods, and then evaluate the results of the experiments.

Miroslava Muchová, Ján Paralič, Michael Nemčík

Twitter Sentiment Analysis Using a Modified Naïve Bayes Algorithm

Microblogging has emerged as a popular platform and a powerful communication tool among people nowadays. A clear majority of people share their opinions about various aspects of their lives online every day. Thus, microblogging websites offer rich sources of data for sentiment analysis and opinion mining. Because microblogging has emerged relatively recently, only some research works have been devoted to this field. In this paper, the focus is on performing sentiment analysis on Twitter, one of the most popular microblogging platforms, where users write status messages called tweets to express themselves. These status updates mostly express their opinions about various topics. The objective of this paper is to build a system that can classify these Twitter status updates as positive, negative, or neutral with respect to any query term, thereby giving an idea of the overall sentiment of the people towards that topic. This type of sentiment analysis is useful for advertisers, consumers researching a service or product, companies, governments, marketers, or any organization researching public opinion.

Manav Masrani, Poornalatha G.

Cost-Sensitive Feature Selection for Class Imbalance Problem

The class imbalance problem is encountered in real-world applications of machine learning and results in suboptimal performance during data classification. This is especially true when data is not only imbalanced but also high dimensional. Class imbalance is very often accompanied by high dimensionality of datasets, and in such a case these problems should be considered together. Traditional feature selection methods usually assign the same weighting to samples from different classes when the samples are used to evaluate each feature. Therefore, they do not work well enough with imbalanced data. In situations where the costs of misclassification of different classes are diverse, cost-sensitive learning methods are often applied. These methods are usually used in the classification phase, but we propose to take the cost factors into consideration during feature selection. In this study we analyse whether the use of cost-sensitive feature selection followed by resampling can give good results for the mentioned problems. To evaluate the tested methods, three imbalanced and multidimensional datasets are considered and the performance of the chosen feature selection methods and classifiers is analysed.
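
A toy illustration of making feature scoring cost-sensitive: weight each class's contribution by its misclassification cost, so that a feature separating the expensive minority class scores higher. The scoring rule below is a deliberately simple two-class stand-in, not the authors' method:

```python
def cost_weighted_score(values, labels, costs):
    """Score a single two-class feature by the cost-weighted distance
    between class means; costs[c] is the misclassification cost of
    class c, so expensive (minority) classes count more."""
    groups = {}
    for v, c in zip(values, labels):
        groups.setdefault(c, []).append(v)
    means = {c: sum(vs) / len(vs) for c, vs in groups.items()}
    a, b = sorted(means)
    return abs(means[a] - means[b]) * (costs[a] + costs[b])

feature_good = [0.1, 0.2, 0.1, 0.9, 1.0]   # separates the classes
feature_poor = [0.5, 0.4, 0.6, 0.5, 0.5]   # does not
labels = [0, 0, 0, 1, 1]
costs = {0: 1.0, 1: 5.0}   # minority-class errors are expensive
```

Ranking features by such a score and keeping the top ones is the feature selection step; resampling would then follow, as in the study.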

Małgorzata Bach, Aleksandra Werner

Constraint-Based Method for Mining Colossal Patterns in High Dimensional Databases

Constraint-based methods for mining patterns have been developed in recent years. They prune candidate patterns in a top-down manner. However, for colossal pattern mining, bottom-up approaches are the efficient ones, so the previous top-down approaches for pruning candidate patterns cannot be applied to constrained colossal pattern mining when a bottom-up manner is used. In this paper, we state the problem of mining colossal patterns with pattern constraints. Next, we develop a theorem for efficiently pruning candidate patterns in a bottom-up manner. Finally, we propose an efficient algorithm for mining colossal patterns with pattern constraints based on this theorem.

Thanh-Long Nguyen, Bay Vo, Bao Huynh, Vaclav Snasel, Loan T. T. Nguyen

A Dynamic Packed Approach for Analytic Data Warehouse in Ad-Hoc Queries

Brighthouse is a column-oriented data warehouse that supports compressed databases as well as analytic querying. For faster query processing, Brighthouse creates packages from the data rows. While resolving a query, it decompresses only those packages that partially satisfy the condition of the query, to avoid accessing the whole database. However, Brighthouse uses a constant parameter to create packages, which may create incompact packages and lead to a large number of packages being processed in each query. In this paper, we first define the task of partitioning a data table into blocks as an optimization problem, then discuss the time complexity of the problem and propose an efficient algorithm which dynamically creates data packages for efficient queries in databases. The experimental results show the advantage of the proposed approach in package range reduction.
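
The pack-based query idea, storing min/max metadata per package and opening only packages that can satisfy the predicate, can be sketched as below (fixed-size packs here; the paper's contribution is precisely to make the partitioning dynamic):

```python
def make_packs(rows, pack_size):
    """Split a column into fixed-size packs with min/max metadata."""
    packs = []
    for i in range(0, len(rows), pack_size):
        chunk = rows[i:i + pack_size]
        packs.append({"rows": chunk, "min": min(chunk), "max": max(chunk)})
    return packs

def query_greater(packs, threshold):
    """Return values > threshold, scanning only relevant packs."""
    hits, opened = [], 0
    for p in packs:
        if p["max"] <= threshold:      # irrelevant pack: skip entirely
            continue
        opened += 1                    # relevant/suspect pack: scan rows
        hits.extend(v for v in p["rows"] if v > threshold)
    return hits, opened

col = [1, 2, 3, 4, 10, 11, 12, 13, 2, 3, 1, 2]
packs = make_packs(col, pack_size=4)
hits, opened = query_greater(packs, 9)
```

Only one of the three packs is opened for this query; the more compact the packs, the fewer of them any query has to decompress.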

Loan T. T. Nguyen, Hung Son Nguyen, Sinh Hoa Nguyen

Internet of Things, Cloud Computing and High Performance Computing, Distributed Computer Systems, Content Delivery Networks

Frontmatter

How Is Server Software Configured? Examining the Structure of Configuration Files

Contemporary software on servers is usually configured by editing configuration files. Surprisingly, to the extent of our knowledge, the diversity of configuration file formats remains unstudied. Our goal in this work is to determine what data structures are used in configuration files and what formats are being used to encode them, in order to have a foundation for semantic analysis of their contents in the future. We examine 14133 files in 3409 packages comprising the whole set of software available in Debian stable repositories that has configuration files in the /etc directory. After eliminating files that are not configuration files (such as init scripts), we assign them to categories using various criteria, such as the data structure they express or whether the order of statements matters. In this examination we find that even custom configuration formats can usually be expressed using one of several commonly used data structures. Some software packages, however, are configured in a Turing-complete programming language, usually the same one that the configured program was written in, and are therefore unsuitable for static analysis. Regardless, we provide a taxonomy of configuration formats, and highlight common themes in custom formats used by various packages. Ultimately, we describe a data structure that is able to hold information about all of these configuration files for further analysis.

Błażej Święcicki, Leszek Borzemski

Risk-Based Decision Making in IoT Systems

We propose a general framework for systems of the Internet of Things based on ideas of reliability trust, transaction gain, and risk. The major contribution of this work is the integration of the reliability trust estimated at the perception layer and risk estimation based on global control information provided by the cloud layer. We propose establishing a decision-making process on a random variable-based gain-loss model and methods of choice under risk. The proposed approach is general – it can be used for a wide range of trust-based decisions undertaken in the perception layer, such as routing, sensor selection, data aggregation, intrusion detection, and others.

Grzegorz J. Blinowski

On Implementation of Energy-Aware MPTCP Scheduler

The majority of networking devices currently available are equipped with a few communication interfaces. However, in most cases, only one of them is used for data transfer as dictated by the design premises of the fundamental transport protocol – TCP. Recently, a variant of TCP – Multipath TCP (MPTCP) – that allows for simultaneous transmission over different paths has been developed. While it allows for spreading the data stream among different interfaces and paths, the way it is actually performed has been left for further investigation. The stream splitting can be optimized according to different quality indicators. The paper discusses the implementation properties of an energy-aware load balancing method tested in a physical networking environment. A comparison with the reference solution is also provided.

Michał Morawski, Przemysław Ignaciuk

Mobile Monitoring System for Environment Parameters

Problems related to the monitoring of environmental parameters, and in particular air pollution in urban agglomerations, are among the most widely discussed public issues in recent times. This paper presents the concept, functionality, main hardware and software components, and test results of a low-cost prototype system enabling the monitoring of environmental parameters such as air pollution, temperature, humidity, pressure and UV radiation. The main features of this solution are its low cost, relatively high measurement precision and the ability to operate in a mobile version, without wired power supply and communication, including GPS coordinates as one of the measured parameters. The prototype can be an alternative to expensive professional solutions and a base for developing more advanced sensor networks. For home use it can serve as a home weather station with an air pollution sensor, an extension of the Personal Area Network of the recently popular Internet of Things solutions.

Gerard Żmuda, Andrzej Opaliński, Mirosław Głowacki

Swarm Based System for Management of Containerized Microservices in a Cloud Consisting of Heterogeneous Servers

Centralized and hybrid container orchestration systems are a bottleneck for the scalability of cloud applications. Due to data replication costs the cluster can consist of only a handful of servers. Decentralized peer-to-peer systems are needed. We propose such a system, whose architecture is the same as the microservice architecture of the cloud application it manages. It can potentially offer performance improvements with respect to the existing centralized container orchestration systems.

Waldemar Karwowski, Marian Rusek, Grzegorz Dwornicki, Arkadiusz Orłowski

Execution Management of Computational Services in Service-Oriented Systems

The aim of this work is to propose an effective solution to eliminate the problem of excessive resource locking by idle computational services. In order to achieve this, a dedicated execution scheme and software tools were developed and tested in several scenarios. The results show that resource locking may be significantly decreased and that overall resource consumption in a service system can be minimized as well. The approach is dedicated to computational services, which perform operations upon data delivery, returning the results after finishing their tasks.

Paulina Kwaśnicka, Łukasz Falas, Krzysztof Juszczyszyn

Content Delivery Network and External Resources Detection for Selected Hosts

Many methods are used to reduce the time required to completely download a web page. Most of them are based on techniques such as the compression of media elements and scripts, parallel resource download or caching. One such technique is based on resource sharing and CDN usage. Resource sharing is an effective design for web pages with a considerable amount of media elements. In this paper, the author introduces his own software to determine whether a selected web page uses a CDN or external resources. In addition, questions about the correlation between web-page size, category and CDN usage are taken into account.
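
The core host-comparison step of such detection can be sketched as below; real CDN detection additionally requires DNS CNAME inspection and lists of known providers, and all URLs here are made up:

```python
from urllib.parse import urlparse

def classify_resources(page_url, resource_urls):
    """Split a page's resource URLs into same-host and external ones;
    external hosts are candidates for CDN or shared-resource usage."""
    page_host = urlparse(page_url).hostname
    internal, external = [], []
    for url in resource_urls:
        host = urlparse(url).hostname
        (internal if host == page_host else external).append(url)
    return internal, external

page = "https://shop.example.com/index.html"
resources = [
    "https://shop.example.com/css/main.css",
    "https://cdn.example-cdn.net/jquery.min.js",
    "https://images.example-cdn.net/banner.jpg",
]
internal, external = classify_resources(page, resources)
```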

Vladyslav Deyneko, Ireneusz J. Jóźwiak

Parallelization of Selected Algorithms on Multi-core CPUs, a Cluster and in a Hybrid CPU+Xeon Phi Environment

In the paper we present parallel implementations as well as execution times and speed-ups of three different algorithms run in various environments such as on a workstation with multi-core CPUs and a cluster. The parallel codes, implementing the master-slave model in C+MPI, differ in computation to communication ratios. The considered problems include: a genetic algorithm with various ratios of master processing time to communication and fitness evaluation times, matrix multiplication and numerical integration. We present how the codes scale in the aforementioned systems. For the numerical integration code that scales very well we also show performance in a hybrid CPU+Xeon Phi environment.

Adam Krzywaniak, Paweł Czarnul

E-Business Systems, Web Design, Mobile and Multimedia Systems

Frontmatter

Verification of Web Traffic Burstiness and Self-similarity for Multiple Online Stores

Developing realistic Web traffic models is essential for reliable Web server performance evaluation. Highly significant Web traffic properties identified so far include burstiness and self-similarity. Very few relevant studies have been devoted to e-commerce traffic, however. In this paper, we investigate burstiness and self-similarity factors for seven different online stores using their access log data. Our findings show that both features are present in all the analyzed e-commerce datasets. Furthermore, a strong correlation of the Hurst parameter with the average request arrival rate was discovered (0.94). Estimates of the Hurst parameter for the Web traffic in the online stores range from 0.6 for low traffic to 0.85 for heavy traffic.
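
Self-similarity is commonly quantified by the Hurst parameter; one standard estimator (not necessarily the one used in the paper) is the aggregated-variance method, where the variance of m-block means scales as m^(2H-2):

```python
import math
import random

def hurst_aggvar(series, block_sizes=(2, 4, 8, 16)):
    """Aggregated-variance estimate of the Hurst parameter:
    Var of m-block means ~ m^(2H-2), so H = 1 + slope/2
    on a log-log least-squares fit."""
    xs, ys = [], []
    for m in block_sizes:
        means = [sum(series[i:i + m]) / m
                 for i in range(0, len(series) - m + 1, m)]
        mu = sum(means) / len(means)
        var = sum((v - mu) ** 2 for v in means) / len(means)
        xs.append(math.log(m))
        ys.append(math.log(var))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 1 + slope / 2

rng = random.Random(42)
iid = [rng.gauss(0, 1) for _ in range(4096)]
h = hurst_aggvar(iid)   # i.i.d. noise has H = 0.5 in theory
```

For genuinely self-similar (long-range dependent) traffic the estimate exceeds 0.5, matching the 0.6–0.85 range reported for the stores.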

Grażyna Suchacka, Alicja Dembczak

Frequent Scene Change in Wavelet Video Compression

Wavelet-based coders are a noteworthy alternative to popular hybrid coders due to their natural feature of scalability. By using a scalable wavelet coder it is possible to provide video content to a wide range of customers who use networks with different bitrates and devices with different screen sizes. In the paper, results of experimental research on compression efficiency in the case of frequent scene changes are presented. For the needs of the experiments a new parameter, SC, was defined. The parameter indicates how often the scene change is performed. The scene change was simulated by changing the order of frames in video sequences. The effectiveness of video compression was estimated by performing a series of experiments with a set of suitably prepared video sequences. Quality measurements were performed for the luminance component by calculating the mean value of the PSNR over all pictures in a given sequence. Table 1 presents the experiment results as PSNR values for the SC parameter varying from 32 to 0 for various bitrates. The results show differences in compression effectiveness depending on the frequency of scene changes.
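
The quality measure used above, luminance PSNR, is computed per picture as follows (the sequence score is then the mean over all pictures); the pixel values below are illustrative:

```python
import math

def psnr_luma(original, decoded, peak=255):
    """PSNR of the 8-bit luminance component for one picture."""
    mse = sum((a - b) ** 2 for a, b in zip(original, decoded)) / len(original)
    if mse == 0:
        return float("inf")  # identical pictures
    return 10 * math.log10(peak ** 2 / mse)

orig = [16, 50, 128, 200, 235]   # toy luminance samples
dec  = [18, 49, 126, 203, 233]   # after encode/decode
db = psnr_luma(orig, dec)
```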

Andrzej Popławski

A Versatile Hardware and Software Toolset for Computer Aided Inspection Planning of Machine Vision Applications

The digital workflow and the set of tools described in this article support machine vision engineering during specification, design and implementation in numerous ways. In addition to the core idea of CAD-based vision system development, a generic data container format, simulation tools for real-time and photorealistic rendering using measured material BRDF data, and a MATLAB/ROS-controlled setup with two lightweight robots for experimental verification of simulation results are presented as parts of a semi-automatic planning process for a laser triangulation system. The planning process includes the annotation of CAD data with GD&T tolerance information using the ISO/PWI 10303-238 (STEP-NC) standard, initial system configuration by a machine vision expert, image formation simulation, system performance evaluation based on metrics applied to synthetic images, and the capturing of real images with a camera and laser light source, each mounted on a 7-axis KUKA LWR IV lightweight robot. The economic benefits of time and cost reduction in machine vision system planning are discussed, as well as the drawbacks and limits of the suggested workflow and tools.

Stephan Irgenfried, Heinz Wörn, Stephan Bergmann, Mahsa Mohammadikajii, Jürgen Beyerer, Carsten Dachsbacher

The Impact of Web Pages’ Load Time on the Conversion Rate of an E-Commerce Platform

The conversion rate is a key determinant of the effectiveness of an e-commerce platform. The impact of web pages' load time on the conversion rate of an e-commerce platform was studied. During the experiment, the behavior of users of a real store running on the Magento platform was monitored. The popular Google Analytics reporting tool was used to collect data. The level of acceptance of page load times was determined by the Apdex index. The results of the research showed that average page loading times have a direct impact on the e-commerce conversion rate and customer satisfaction. It was found that the use of the Apdex index in different countries requires a recalculation of the limits of tolerance and satisfaction.
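
The Apdex index mentioned above has a standard definition: given a tolerance threshold t, samples at or below t count as satisfied, samples between t and 4t count as tolerating (half weight), and the rest as frustrated:

```python
def apdex(load_times, t):
    """Apdex index: satisfied (<= t) count fully, tolerating (<= 4t)
    count half, frustrated count zero."""
    satisfied = sum(1 for x in load_times if x <= t)
    tolerating = sum(1 for x in load_times if t < x <= 4 * t)
    return (satisfied + tolerating / 2) / len(load_times)

times = [0.8, 1.2, 2.5, 3.9, 9.0]  # page load times in seconds
score = apdex(times, t=1.0)
```

Recalibrating the index for a country, as the study suggests, amounts to choosing a different threshold t for the same formula.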

Wiktor Stadnik, Ziemowit Nowak

Nearest Neighborhood Determination Methods with Regard to Mobile Location-Based Application

The paper concerns nearest neighborhood determination issues. The main goal is to investigate and compare the characteristics of popular solutions with regard to applying them to location-based services for mobile devices. Mainly the impact of the number of objects and object mobility on performance is considered. The accuracy depending on the search range was also tested. The paper takes into account three groups of approaches: brute-force based solutions, spatial tree structures, and solutions that use Geohash for location encoding. The described simulation experiments were performed for eight popular algorithms and one original approach. The results show very diverse properties of all the algorithms and their usefulness for the considered area of application.
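
The brute-force group of approaches amounts to an exact linear scan with a geodesic distance; the coordinates below are illustrative points of interest:

```python
import math

def haversine_km(a, b):
    """Great-circle distance in km between (lat, lon) pairs in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def nearest(query, objects):
    """Brute-force nearest neighbour: O(n) per query, but exact."""
    return min(objects, key=lambda o: haversine_km(query, o))

pois = [(51.11, 17.03), (50.06, 19.94), (52.23, 21.01)]  # Wroclaw, Krakow, Warsaw
user = (51.0, 17.2)
best = nearest(user, pois)
```

Spatial trees and Geohash-based schemes trade this exactness for sub-linear query time, which is what the paper's experiments compare.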

Mariusz Fraś, Piotr Frącek, Krzysztof Waśko

Backmatter
