
About this Book

This book constitutes the refereed proceedings of the 32nd International Symposium on Computer and Information Sciences, ISCIS 2018, held in Poznan, Poland, in September 2018.

The 29 revised full papers presented were carefully reviewed and selected from 64 submissions. The papers deal with the following topics: smart algorithms; data classification and processing; stochastic modelling; performance evaluation; queuing systems; wireless networks and security; image processing and computer vision.



Systems and Networks


Tandem Networks with Intermittent Energy

Energy harvesting may be needed to operate digital devices in locations where connecting them to the power grid or changing batteries is difficult. However, energy harvesting is often intermittent, resulting in a random flow of energy into the device. It is then necessary to analyse systems where both the workload and the energy supply are represented by random processes. Thus, in this paper, we consider a multi-hop tandem network where each hop harvests energy locally as a random process, while packets arrive at each of the nodes and then flow through the multi-hop connection to the sink. We present a product-form solution for this N-hop tandem network when both energy and data are represented as discrete units: energy quanta and data packets.
Yasin Murat Kadioglu
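The energy-packet dynamics described above can be illustrated with a minimal discrete-time simulation of a single hop: a data packet is forwarded only when a harvested energy unit is available. This is a sketch of the qualitative behaviour only, not the paper's analytical product-form solution; all names and rates are illustrative.

```python
import random

def simulate_hop(p_energy, p_packet, steps, seed=0):
    """One node of an energy-harvesting tandem: per time slot, an energy
    unit arrives with probability p_energy, a data packet with p_packet,
    and a packet is forwarded only if stored energy is available."""
    rng = random.Random(seed)
    energy, queue, forwarded = 0, 0, 0
    for _ in range(steps):
        if rng.random() < p_energy:
            energy += 1
        if rng.random() < p_packet:
            queue += 1
        if energy > 0 and queue > 0:
            energy -= 1          # one energy unit spent per forwarded packet
            queue -= 1
            forwarded += 1
    return forwarded, queue, energy
```

When the energy arrival rate exceeds the packet arrival rate, throughput approaches the packet rate; otherwise the energy supply becomes the bottleneck.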

Adaptive Allocation of Multi-class Tasks in the Cloud

Cloud computing enables the accommodation of an increasing number of applications in shared infrastructures. Routing incoming jobs in the cloud has become a real challenge due to the heterogeneity of both workload and machine hardware, and to load conditions that change over time. The present paper designs and investigates adaptive dynamic allocation algorithms that take decisions based on online, up-to-date measurements and make fast online decisions to achieve both desirable QoS levels and high resource utilization. The Task Allocation Platform (TAP) is implemented as a practical system that accommodates the allocation algorithms and performs online measurement. The paper studies the potential of the proposed algorithms to deal with multi-class tasks in heterogeneous cloud environments, and experimental evaluations are also presented.
Lan Wang

The Mapping Between TFD and IEEE 802.11 User Priorities and Access Categories

The modern approach to quality of service (QoS) claims that problems of assuring the required QoS parameters may occur not only in core networks (as the classic approach assumes), but also in access networks (including local networks, or LANs), especially if the access network is of broadcast type, such as the IEEE 802.11 wireless LAN. In the case of broadcast LANs, QoS assurance in the network layer should be associated with QoS assurance in the data link layer. The aim of this paper is to propose such an association between 802.11 QoS assurance and QoS assurance based on the Traffic Flow Description (TFD) option of the IP protocol. The TFD option was specified by the authors in an IETF working document (Internet Draft) and was intended to describe Internet traffic for the purpose of dynamic resource reservations. TFD-based QoS assurance offers dynamic reservations for micro-flows (single real-time data streams or non-real-time flows). The proposed method associates the TFD and 802.11 QoS architectures in a way similar to the association of the DiffServ architecture with 802.11 QoS, i.e. by mapping the corresponding signalling. This mapping was designed and then implemented by the authors in a Linux kernel. Results of laboratory experiments carried out in a test network show that the proposed mappings sufficiently promote traffic described by the TFD option in an 802.11 WLAN and protect it against degradation. The proposed method includes the mapping of static signalling. Mapping of the dynamic information conveyed in the TFD option is also possible and will be the subject of further research. The mapping between TFD and IEEE 802.11 user priorities and access categories will allow an 802.11 network to preserve TFD-based QoS in wireless links.
Robert R. Chodorek, Agnieszka Chodorek

Design of a Multidomain IMS/NGN Service Stratum

The paper continues our research concerning the Next Generation Network (NGN), which is standardized for delivering multimedia services with strict quality and includes elements of the IP Multimedia Subsystem (IMS). A design algorithm for a multidomain IMS/NGN service stratum is proposed, which calculates the necessary CSCF servers' CPU message processing times and link bandwidths with respect to given maximum values of the mean call set-up delay E(CSD) and the mean call disengagement delay E(CDD) for all types of successful call scenarios. These delays are standardized call processing performance parameters, which are very important for the satisfaction of users and the overall success of the IMS/NGN concept. In the paper, a block diagram of the algorithm and details regarding its operation are described. Using the implemented algorithm, several multidomain IMS/NGN service stratum designs are produced for different data sets, and their results are discussed.
Sylwester Kaczmarek, Maciej Sac

Performance Evaluation


Dynamic Capping of Physical Register Files in Simultaneous Multi-threading Processors for Performance

Today, Simultaneous Multi-Threading (SMT) processors allow many datapath elements to be shared among applications. This type of resource sharing helps keep the area requirement of an SMT processor modest. However, a major performance problem arises from resource conflicts when multiple threads race for the same shared resource. In an earlier study, the authors proposed capping a shared resource, the Physical Register File (PRF), to improve processor performance by granting fewer PRF entries and, hence, also spending less power. For the sake of simplicity, the authors proposed a fixed PRF-capping amount, which they claimed to be sufficient for all workload combinations. However, we show that a fixed PRF-capping strategy may not always give optimum performance, since any thread's behavior may change at any time during execution. In this study, we extend that earlier work with an adaptive PRF-capping mechanism, which tracks the behavior of all running threads and moves the cap value to a near-optimal position with the help of a hill-climbing algorithm. As a result, we show that we can achieve up to 21% performance improvement over the fixed capping method, giving 7.2% better performance on average in a 4-threaded system.
Hasancan Güngörer, Gürhan Küçük
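The adaptive capping idea can be sketched as a hill-climbing step over candidate cap values. This is an illustrative model only: `measure_ipc` is a hypothetical callback standing in for the hardware performance counters the paper's mechanism would actually sample.

```python
def hill_climb_cap(measure_ipc, cap, lo, hi, step=4):
    """One hill-climbing epoch on the PRF cap: probe the two neighbouring
    cap values and move toward whichever yields the best measured IPC."""
    best_cap, best_ipc = cap, measure_ipc(cap)
    for cand in (cap - step, cap + step):
        if lo <= cand <= hi:
            ipc = measure_ipc(cand)
            if ipc > best_ipc:
                best_cap, best_ipc = cand, ipc
    return best_cap
```

Repeating the epoch tracks a moving optimum as thread behaviour changes, which is exactly why a fixed cap can be left behind.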

Minimizing Latency in Wireless Sensor Networks with Multiple Mobile Base Stations

Among the main issues in wireless sensor networks is the minimization of response time for delay-sensitive applications. Such applications require a fast response time because of their emergency nature; examples include battlefield surveillance and health monitoring applications. Despite the importance of latency minimization, we cannot neglect energy conservation, which is almost always present in work on wireless sensor networks. In this paper we study the impact of using multiple mobile base stations on latency minimization and network lifetime elongation, which together make a twin gain. Simulation results show the efficiency of the technique in comparison with the use of one base station or multiple fixed base stations.
Mehdi Achour, Jamel Belhadj Taher

Solving Large Markov Models Described with Standard Programming Language

Continuous time Markov chains (CTMC) are one of the formalisms for building models. This paper discusses OLYMP2, a system for solving large CTMC models (exceeding \(10^9\) states) described with a standard programming language, Java. OLYMP2 is primarily aimed at modelling computer networks, so its formalism comes from networking concepts such as queueing systems. Using Java as a model description allows for greater flexibility in comparison to model-checker-specific languages, which often do not offer the complete feature set of object-oriented programming. Using Java also makes the parsing of models relatively fast, due to the optimised Java run-time environment. A dedicated compression of transition matrices keeps memory usage at a reasonable level even for large models.
P. Pecka, M. P. Nowak, A. Rataj, S. Nowak
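The core numerical task behind such solvers is computing a stationary distribution \(\pi\) with \(\pi Q = 0\), \(\sum_i \pi_i = 1\). A dense sketch for a small model is shown below, using an illustrative M/M/1/K queue as the example; OLYMP2 itself targets models beyond \(10^9\) states and therefore relies on compressed sparse matrices and iterative methods rather than direct solves.

```python
import numpy as np

def ctmc_stationary(Q):
    """Stationary distribution of a CTMC: solve pi @ Q = 0 subject to
    sum(pi) = 1, by replacing one balance equation with normalisation."""
    n = Q.shape[0]
    A = Q.T.copy()
    A[-1, :] = 1.0           # replace last equation with sum(pi) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Illustrative model: M/M/1/K queue (birth-death chain on states 0..K)
lam, mu, K = 1.0, 2.0, 4
Q = np.zeros((K + 1, K + 1))
for i in range(K + 1):
    if i < K:
        Q[i, i + 1] = lam    # arrival
    if i > 0:
        Q[i, i - 1] = mu     # service completion
    Q[i, i] = -Q[i].sum()    # diagonal makes rows sum to zero
pi = ctmc_stationary(Q)
```

For this queue the closed form \(\pi_0 = (1-\rho)/(1-\rho^{K+1})\) with \(\rho = \lambda/\mu\) gives a direct check of the solver.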

Performance of a Buffer Between Electronic and All-Optical Networks, Diffusion Approximation Model

We present a model of an edge router between electronic and all-optical networks. Arriving electronic packets of variable sizes are stored in a buffer whose volume is equal to the fixed size of the optical packet. When the buffer is filled, its content becomes an optical packet and is dispatched to the optical network. To avoid excessive delays, the optical packet is also sent after a specified deadline. The model is based on diffusion approximation and validated by discrete event simulation. Its goal is to determine the probability distribution of the effective optical packet sizes and the distribution of their interdeparture times as a function of the interarrival time distribution, the distribution of electronic packet sizes, the value of the deadline and the size of the buffer. We use real traffic data from the CAIDA (Center for Applied Internet Data Analysis) repositories.
Godlove Suila Kuaban, Edelqueen Anyam, Tadeusz Czachórski, Artur Rataj

The Influence of the Traffic Self-similarity on the Choice of the Non-integer Order PI Controller Parameters

The article discusses the problem of choosing the best parameters of the non-integer order \(PI^{\alpha }\) controller used in the Active Queue Management of IP routers for TCP/IP traffic flow control. The impact of the self-similarity of the traffic on the controller parameters is investigated with the use of discrete event simulation. We analyze the influence of these parameters on the length of the queue, the number of rejected packets and the waiting times. The results indicate that the controller parameters strongly depend on the value of the Hurst parameter.
Adam Domański, Joanna Domańska, Tadeusz Czachórski, Jerzy Klamka, Dariusz Marek, Jakub Szyguła
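A non-integer order \(PI^{\alpha}\) controller can be discretized with the Grünwald-Letnikov approximation of the fractional integral. The sketch below is illustrative (not the authors' implementation); a useful sanity check is that for \(\alpha = 1\) it reduces to the ordinary discrete PI controller.

```python
def gl_coefficients(order, n):
    """Grünwald-Letnikov coefficients (-1)^j * C(order, j), computed by
    the standard recurrence c_j = c_{j-1} * (1 - (order + 1) / j)."""
    c = [1.0]
    for j in range(1, n):
        c.append(c[-1] * (1 - (order + 1) / j))
    return c

def pi_alpha(errors, kp, ki, alpha, h=1.0):
    """Discrete PI^alpha output for an error history:
    u = kp * e[-1] + ki * h^alpha * (fractional integral of e, order alpha)."""
    c = gl_coefficients(-alpha, len(errors))   # negative order = integral
    integ = sum(cj * e for cj, e in zip(c, reversed(errors)))
    return kp * errors[-1] + ki * (h ** alpha) * integ
```

With \(\alpha = 1\) all coefficients equal 1, so the fractional integral collapses to a plain running sum of the error, recovering classical PI behaviour.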

Data Analysis and Algorithms


Modified Graph-Based Algorithm for Efficient Hyperspectral Feature Extraction

Since the Laplacian Eigenmaps (LE) algorithm suffers from spectral uncertainty in the construction of the adjacency weight matrix, it may not be adequate for hyperspectral dimension reduction (DR), classification or detection. Moreover, only the properties of local neighboring data points are preserved in the LE method. To address these limitations, an improved feature extraction technique called Modified Laplacian Eigenmaps (MLE) for hyperspectral images is suggested in this paper. The proposed approach determines the similarity between pixel and endmember in order to build a more precise weight matrix. Then, the DR data are derived as the eigenvectors of the Laplacian matrix constructed from that weight matrix. Furthermore, the proposed approach maximizes the distance between points that are not nearby neighbors, which raises the separability among ground objects. Experimental results for hyperspectral image classification and detection tasks show improved accuracy compared to the original LE method.
Asma Fejjari, Karim Saheb Ettabaa, Ouajdi Korbaa
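The baseline LE pipeline that MLE modifies — heat-kernel weight matrix, graph Laplacian, trailing eigenvectors — can be sketched in a dense, illustrative form (real hyperspectral data would need a sparse k-NN graph and the authors' endmember-based weights):

```python
import numpy as np

def laplacian_eigenmaps(X, dim=2, sigma=1.0):
    """Baseline LE: heat-kernel weights over all pairs, unnormalised
    Laplacian L = D - W, embedding from the eigenvectors of the
    smallest non-zero eigenvalues."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:dim + 1]     # skip the constant eigenvector
```

MLE's changes plug into this pipeline at the construction of `W`, replacing the purely local heat-kernel similarity with a pixel-endmember similarity.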

The Competitiveness of Randomized Strategies for Canadians via Systems of Linear Inequalities

The Canadian Traveller Problem is a PSPACE-complete optimization problem in which a traveller traverses an undirected weighted graph G from a source s to a target t, where some edges \(E_*\) are blocked. At the beginning, the traveller does not know which edges are blocked; he discovers them when arriving at one of their endpoints. The objective is to minimize the distance traversed to reach t.
Westphal proved that no randomized strategy has a competitive ratio smaller than \(\left| E_* \right| +1\). We show, using linear algebra techniques, that this bound cannot be attained, in particular on a specific class of graphs: apex trees. Indeed, no randomized strategy can be \(\left( \left| E_* \right| +1\right) \)-competitive, even on apex trees with only three simple \((s,t)\)-paths.
Pierre Bergé, Julien Hemery, Arpad Rimmel, Joanna Tomasik

Modelling and Designing Spatial and Temporal Big Data for Analytics

The main purpose of this paper is to introduce a new approach, with a new data model and architecture, that supports spatial and temporal data analytics for meteorological big data applications. The architecture is designed with the recent advances in the field of spatial data warehousing (SDW) and spatial and temporal big data analytics. Measured meteorological data is stored in a big database (NoSQL database) and analyzed using the Hadoop big data environment. SDW provides a structured approach for manipulating, analyzing and visualizing the huge volume of data. Therefore, the main focus of our study is to design a Spatial OLAP-based system to visualize the results of big data analytics for daily measured meteorological data by using the characteristic features of Spatial Online Analytical Processing (SOLAP), SDW, and the big data environment (Apache Hadoop). In this study we use daily collected real meteorological data from various stations distributed over the regions. Thus, we are able to perform spatial and temporal data analytics by employing spatial data-mining tasks including spatial classification and prediction, spatial association rule mining, and spatial cluster analysis. Furthermore, a fuzzy logic extension for data analytics is injected into the big data environment.
Sinan Keskin, Adnan Yazıcı

Adaptive, Hubness-Aware Nearest Neighbour Classifier with Application to Hyperspectral Data

We present an extension of the Nearest Neighbour classifier that can adapt to sample imbalances in local regions of the dataset. Our approach uses the hubness statistic as a measure of the relation between new samples and the existing training set. This allows us to estimate the upper limit of neighbours that vote for the label of the new instance. This estimation improves the classifier performance in situations where some classes are locally under-represented. The main focus of our method is to solve the problem of local undersampling that exists in hyperspectral data classification. Using several well-known Machine Learning and hyperspectral datasets, we show that our approach outperforms standard and distance-weighted kNN, especially for high values of k.
Michał Romaszewski, Przemysław Głomb, Michał Cholewa
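The hubness statistic underlying this abstract — the k-occurrence count \(N_k(x)\), i.e. how often a point appears among the k nearest neighbours of other points — can be computed with an illustrative brute-force sketch:

```python
import numpy as np

def k_occurrence(X, k):
    """Hubness statistic N_k(x): for each point, count how often it
    appears among the k nearest neighbours of the other points
    (self-neighbourhood excluded)."""
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbour
    nn = np.argsort(d, axis=1)[:, :k]    # k nearest neighbours per point
    counts = np.zeros(len(X), dtype=int)
    for row in nn:
        counts[row] += 1
    return counts
```

Points with unusually high \(N_k\) are "hubs"; points with \(N_k \approx 0\) are anti-hubs, and it is this skew that a hubness-aware voting limit exploits.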

Graph Representation and Semi-clustering Approach for Label Space Reduction in Multi-label Classification of Documents

An increasing number of large online text repositories require effective techniques of document classification. In many cases, more than one class label should be assigned to documents. When the number of labels is big, it is difficult to obtain required multi-label classification accuracy. Efficient label space dimension reduction may significantly improve classification performance. In the paper, we consider applying graph-based semi-clustering algorithm, where documents are represented by vertices with edge weights calculated according to the similarity of associated texts. Semi-clusters are used for finding patterns of labels that occur together. Such approach enables reducing label dimensionality. The performance of the method is examined by experiments conducted on real medical documents. The assessment of classification results, in terms of Classification Accuracy, F-Measure and Hamming Loss, obtained for the most popular multi-label classifiers: Binary Relevance, Classifier Chains and Label Powerset showed good potential of the proposed methodology.
Rafał Woźniak, Danuta Zakrzewska

Online Principal Component Analysis for Evolving Data Streams

We consider an online version of Principal Component Analysis (PCA), where the goal is to keep track of a subspace of small dimension which captures most of the variance of the data arriving sequentially in a stream. We assume the data stream is evolving and hence the target subspace is changing over time. We cast this problem as a prediction problem, where the goal is to minimize the total compression loss on the data sequence. We review the most popular methods for online PCA and show that the state-of-the-art IPCA algorithm is unable to track the best subspace in this setting. We then propose two modifications of this algorithm, and show that they exhibit a much better predictive performance than the original version of IPCA. Our algorithms are compared against other popular methods for online PCA in a computational experiment on real data sets from computer vision.
Monika Grabowska, Wojciech Kotłowski
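The prediction-problem framing above can be sketched with a prequential (predict-then-update) loop: each arriving sample is first compressed with the current k-dimensional subspace, incurring a loss, and only then is the subspace updated. The update rule below is a naive refit-on-all-data baseline for illustration, not IPCA or the paper's modifications.

```python
import numpy as np

def prequential_pca_loss(stream, k):
    """Total online compression loss: for each sample, loss is the squared
    residual after projecting onto the current k-dim subspace; the
    subspace is then refit (here: full SVD of all data seen so far)."""
    U = None
    total = 0.0
    seen = []
    for x in stream:
        if U is not None:
            r = x - U @ (U.T @ x)    # residual after projection
            total += float(r @ r)
        else:
            total += float(x @ x)    # no subspace yet: full loss
        seen.append(x)
        if len(seen) > k:
            _, _, Vt = np.linalg.svd(np.array(seen), full_matrices=False)
            U = Vt[:k].T             # top-k right singular vectors
    return total
```

On an evolving stream the refit-on-everything baseline lags behind the moving subspace, which is precisely the failure mode the paper attributes to IPCA.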

Security in Cyber and Physical Systems


Intrusion Detection with Comparative Analysis of Supervised Learning Techniques and Fisher Score Feature Selection Algorithm

Rapid development of technologies not only makes life easier, but also reveals a lot of security problems. The development and evolution of attack types affect many people, organizations, companies, etc. Therefore, intrusion detection systems have been developed to avoid financial and emotional losses. In this paper, we used the CICIDS2017 dataset, which consists of benign traffic and the most cutting-edge common attacks. The best features are selected using the Fisher Score algorithm. Real-world data extracted from the dataset are classified as DDoS or benign using Support Vector Machine (SVM), K Nearest Neighbour (KNN) and Decision Tree (DT) algorithms. As a result of the study, success rates of 0.9997, 0.5776 and 0.99 were achieved, respectively.
Doğukan Aksu, Serpil Üstebay, Muhammed Ali Aydin, Tülin Atmaca
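The Fisher score used for feature selection can be computed per feature as the ratio of between-class variance of the class means to the mean within-class variance. The sketch below is an illustrative formulation; the paper's exact normalisation may differ.

```python
import numpy as np

def fisher_score(X, y):
    """Fisher score per feature: sum over classes of n_c * (mean_c - mean)^2
    divided by sum of n_c * var_c. Higher = more discriminative."""
    classes = np.unique(y)
    overall = X.mean(0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(0) - overall) ** 2   # between-class spread
        den += len(Xc) * Xc.var(0)                     # within-class spread
    return num / (den + 1e-12)   # epsilon guards zero within-class variance
```

Ranking features by this score and keeping the top ones gives the reduced feature set that is then fed to SVM, KNN or DT.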

A New Secure and Usable Captcha-Based Graphical Password Scheme

CaRP is a family of known graphical password schemes that use Captcha visual objects for password setting. CaRP contains four schemes with different alphabet symbols used for password specification. We generalize the CaRP schemes, ClickText (CT), ClickAnimal (CA), AnimalGrid (AG) and ClickPoint (CP), by introducing the Click Symbol-Alphanumeric (CS-A) scheme, which, like the CaRP schemes, selects symbols on the screen by clicking but does not specify a particular alphabet. In particular, we show that combining Alphanumeric (A) and Visual (V) symbols in one alphabet (CS-AV) improves usability, and users are more motivated to create strong passwords. For the security analysis, we applied segmentation techniques to identify the symbols of CT and of the proposed CS-AV. The segmentation and symbol identification of the CS-AV and CT schemes do not reveal sensitive information. This paper also studies usability: experiments on both schemes show that the memorability of CS-AV is greater by 3.75% than that of the CT scheme.
Altaf Khan, Alexander G. Chefranov

An IT Tool to Support Anti-crisis Training in WAZkA System: A Case Study

In the paper, the authors present the concept and elements of the implementation of a subsystem supporting the training of operators of an anti-crisis system. It takes the form of emulators of hazard monitoring systems for the needs of an IT system that supports the analysis of CBRN (Chemical, Biological, Radiation, Nuclear) contamination threats, forecasts their effects and alerts the population (WAZkA).
The motivation for building the emulator subsystem is the need to provide a tool supporting the training of people involved in the activities of the National System of Detection of Contamination and Alarming (KSWSiA) in Poland. However, the emulators designed in this way can also be used as a tool to support the development of emergency procedures.
The emulators were implemented as IT tools. Constructive simulation was chosen as the emulation method, but reduced to simple, predetermined data delivery in accordance with a previously prepared scenario.
Wojciech Kulas, Zbigniew Tarapata

European Cybersecurity Research and the SerIoT Project

This paper briefly reviews some recent research in Cybersecurity in Europe funded by the European Commission in areas such as mobile telephony, networked health systems, and the Internet of Things. We then outline the objectives of the SerIoT Project, which started in 2018 to address the security needs of fields such as Smart Cities, Smart Transportation Systems, Supply Chains and Industrial Informatics.
Joanna Domańska, Mateusz Nowak, Sławomir Nowak, Tadeusz Czachórski

Machine Learning and Applications


FTScMES: A New Mutation Execution Strategy Based on Failed Tests’ Mutation Score for Fault Localization

Fault localization is one of the most expensive activities in the whole debugging process. Spectrum-based fault localization (SBFL) is the most studied and evaluated technique. Another approach, mutation-based fault localization (MBFL), needs to execute the test suite on a large number of mutants, which increases its cost. The research community has worked on solutions that reduce this cost while keeping a minimum quality. Current mutation execution strategies are evaluated on artificial faults; however, recent research shows that some MBFL techniques exhibit low fault-localization efficacy when evaluated on real faults. In this paper, we propose a new mutation execution strategy based on the failed tests' mutation score, called FTScMES, aiming to increase fault-localization efficacy while reducing the cost of mutant execution. FTScMES uses only the set of failed test cases to execute mutants and relies on the mutation score concept to compute statement suspiciousness. The experiments were conducted on 221 real faults, comparing the localization efficacy of FTScMES against 5 baselines from the literature. We found that FTScMES outperforms the baselines, reducing the cost by 90% on average while ranking defective code with high efficacy.
André Assis Lôbo de Oliveira, Celso G. Camilo, Eduardo Noronha de Andrade Freitas, Auri Marcelo Rizzo Vincenzi

Use of Neural Networks in Q-Learning Algorithm

This article is devoted to reinforcement learning. It covers various modifications of the Q-Learning algorithm, along with techniques that can accelerate learning through the use of neural networks. We also discuss different ways of approximating the tables of this algorithm, consider its implementation in code and analyze its behavior in different environments. We set the optimal parameters for its implementation and evaluate its performance on two criteria: the number of necessary neural network weight corrections and the quality of training.
Nataliya Boyko, Volodymyr Korkishko, Bohdan Dohnyak, Olena Vovk
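Before approximating the Q-table with a neural network, the tabular Q-learning update the article builds on can be sketched on a toy chain MDP (all parameters here are illustrative): the agent walks left or right on a chain and earns a reward of 1 at the rightmost state.

```python
import random

def q_learning_chain(n_states=5, episodes=500, alpha=0.5, gamma=0.9,
                     eps=0.1, seed=0):
    """Tabular Q-learning on a chain MDP (actions 0=left, 1=right).
    The table Q[s][a] is the object that neural-network variants of
    the algorithm replace with a function approximator."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda i: Q[s][i])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning temporal-difference update
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

A neural-network variant keeps the same update target, `r + gamma * max(Q[s2])`, but regresses a network toward it instead of editing a table cell.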

The Random Neural Network with a BlockChain Configuration in Digital Documentation

This paper presents the Random Neural Network in a BlockChain configuration where neurons are gradually incremented as user information increases. The additional neurons codify both the new information to be added to the "neural block" and the potential of previous neurons to form the "neural chain". This configuration provides the proposed algorithm with the same properties as the BlockChain, security and decentralization, with the same validation process: mining the input neurons until the neural network solution is found. The Random Neural Network in BlockChain configuration is applied to a digital documentation application that avoids the use of physical papers. Experimental results are presented with optimistic conclusions; this paper provides a digital step forward to avoid physical currencies, documentation and contracts.
Will Serrano

A Reinforcement Learning Approach to Adaptive Forwarding in Named Data Networking

This paper addresses Information Centric Networks, and considers in-network caching for Named Data Networking (NDN) architectures. We depart from forwarding algorithms which primarily use links that have been selected by the routing protocol for probing and forwarding, and propose an adaptive forwarding strategy using reinforcement learning with the random neural network (NDNFS-RLRNN), to leverage the routing information and actively seek possible deliveries outside these paths in a controlled way. Our simulations show that NDNFS-RLRNN achieves more efficient delivery performance than a strategy that strictly follows the routing layer or a strategy that retrieves contents from the nearest caches by flooding requests.
Olumide Akinwande, Erol Gelenbe

Bidirectional Action Rule Learning

Action rules specify recommendations which should be followed in order to transfer objects to the desired decision class. This paper examines the influence of employing information contained in source and target class examples in a sequential-covering-based action rule induction method. Results show that using the source class to guide the induction process produces the best results.
Paweł Matyszok, Łukasz Wróbel, Marek Sikora

Applications to Linguistics, Biology and Computer Vision


A Possibilistic Approach for Arabic Domain Terminology Extraction and Translation

This paper proposes a hybrid possibilistic approach for bilingual terminology extraction using possibility and necessity measures. On the one hand, we extract domain-relevant terms from the source language, and on the other hand, we build a co-occurrence-based translation graph, which is mined to translate terms into the target language. We compare our approach with different state-of-the-art approaches. Experimental results show that the possibilistic approach reaches better results in terms of Recall, Precision and Mean Average Precision (MAP). The differences between the compared approaches show that our contribution is significant in terms of p-value.
Wiem Lahbib, Ibrahim Bounhas, Yahya Slimani

Matrix and Tensor-Based Approximation of 3D Face Animations from Low-Cost Range Sensors

Three-dimensional animation is often represented as a sequence of 3D meshes, also called dynamic animation or a Temporally Coherent Mesh Sequence (TCMS). The widespread availability of affordable range sensors makes capturing such data easy; however, its huge volume complicates both storage and further processing. One possible solution is to approximate the data using matrix or tensor decomposition; however, the quality of the animation may affect the two approaches differently. In this work we use the Microsoft Kinect™ to create sequences of human face models and compare the approximation error obtained from modelling animations using Principal Component Analysis (PCA) and Higher Order Singular Value Decomposition (HOSVD). We focus on the distortion introduced by reconstructing the data from its truncated factorization. We show that while HOSVD may outperform PCA in terms of approximation error, it may be significantly affected by distortion in the animation data.
Michał Romaszewski, Arkadiusz Sochan, Krzysztof Skabek
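The matrix side of such a comparison reduces to truncated SVD of a frames-by-coordinates matrix; the rank-r reconstruction error can be measured as in the illustrative sketch below (the tensor/HOSVD side truncates each mode of a 3-way tensor analogously).

```python
import numpy as np

def truncated_svd_error(A, r):
    """Relative Frobenius-norm error of the best rank-r approximation
    of A, obtained by truncating its SVD to r singular triplets."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Ar = (U[:, :r] * s[:r]) @ Vt[:r]     # rank-r reconstruction
    return np.linalg.norm(A - Ar) / np.linalg.norm(A)
```

Plotting this error against r for both decompositions is the basic experiment behind the approximation-error comparison.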

Enhancing Hybrid Indexing for Arabic Information Retrieval

Existing literature proposes several approaches to enhance Arabic document retrieval using different indexing units. In prior work [1, 2], we proposed combining multiple indexing units, which improved retrieval performance. This paper develops this approach and suggests enhancing term weighting through result aggregation and pseudo-relevance feedback techniques. We compare these approaches to three baselines to improve on the previous results, which showed the performance of hybrid indexing. To assess our hypothesis, we run four experimental setups based on a larger corpus with various query sets. Finally, we compare all these methods using standard information retrieval metrics.
Souheila Ben Guirat, Ibrahim Bounhas, Yahya Slimani

Methods of Tooth Equator Estimation

Full automation of the process of designing an occlusal splint requires an algorithm to determine the boundary of the splint. For this purpose, the idea of the tooth equator is frequently used. The task is to find the approximate level where the teeth are widest, and then cut off the shape of the splint there. The article presents methods for automatic estimation of the tooth equator, used to determine the splint boundary.
Agnieszka Anna Tomaka, Dariusz Pojda, Leszek Luchowski, Michał Tarnawski

Identification of Factors that Affect Reproducibility of Mutation Calling Methods in Data Originating from the Next-Generation Sequencing

Identification of somatic mutations, based on data from next-generation sequencing of the DNA, has become one of the fundamental research strategies in oncology, with the goal of uncovering mechanisms underlying carcinogenesis and resistance to commonly used therapies. Despite significant advances in the development of sequencing methods and data processing algorithms, the reproducibility of experiments is relatively low and depends significantly on the methods used to identify changes in the structure of the DNA. This is mainly due to three factors: (1) the high heterogeneity of tumors, due to which some mutations are characteristic of a small number of cells, (2) bias associated with the process of exome isolation, and (3) the specificity of data pre-processing strategies.
The aim of this work was to determine the impact of these factors on the identification of somatic mutations, allowing us to determine the reasons for the low reproducibility of such studies.
Roman Jaksik, Krzysztof Psiuk-Maksymowicz, Andrzej Swierniak

