
About this Book

This textbook collects a series of research papers in the area of Image Processing and Communications which not only present a summary of current technology but also give an outlook on potential future problems in this area.

The key objective of the book is to provide a collection of comprehensive references on recent theoretical developments as well as novel applications in image processing and communications.

The book is divided into two parts. Part I deals with image processing and also presents a comprehensive survey of different methods of image processing and computer vision. Part II deals with telecommunications networks and computer networks, and applications in these areas are considered.

In conclusion, the edited book comprises papers on diverse aspects of image processing and communications systems. There are theoretical aspects as well as application papers.

Table of Contents

Frontmatter

Image Processing

Frontmatter

Adaptive Windowed Threshold for Box Counting Algorithm in Cytoscreening Applications

Two threshold techniques are compared in this paper for the application of the box-counting algorithm. The single threshold is sensitive to the selection of the threshold value. The proposed adaptive windowed threshold allows selection of the threshold values using the standard deviation and mean value. The application of the windowed threshold allows preclassification of cell nuclei.

Dorota Oszutowska-Mazurek, Przemysław Mazurek, Kinga Sycz, Grażyna Waker-Wójciuk
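The per-window thresholding described above can be sketched as follows; the window size and the weighting of the standard deviation (`k`) are illustrative assumptions, not the authors' published parameters:

```python
import numpy as np

def windowed_threshold(image, win=16, k=0.5):
    """Binarize an image block by block, deriving each block's threshold
    from its local mean and standard deviation (illustrative parameters)."""
    h, w = image.shape
    out = np.zeros_like(image, dtype=bool)
    for y in range(0, h, win):
        for x in range(0, w, win):
            block = image[y:y+win, x:x+win]
            t = block.mean() + k * block.std()   # local threshold
            out[y:y+win, x:x+win] = block > t
    return out

# Example: a horizontal gradient with a brighter square embedded in it
img = np.tile(np.linspace(0, 100, 64), (64, 1))
img[20:40, 20:40] += 80
mask = windowed_threshold(img)
print(mask.shape)
```

Unlike a single global threshold, each window adapts to its own intensity statistics, which is what makes the method less sensitive to the choice of one global value.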

Corneal Endothelial Grid Structure Factor Based on Coefficient of Variation of the Cell Sides Lengths

Description of the corneal endothelial cell grid is a valuable diagnostic pointer used in ophthalmology. Until now, two quality factors have been used: hexagonality (H) and the relative standard deviation of the cell surface (CV). Neither factor takes the length measure of the grid cells into account, which is demonstrated in this article on sample images. The authors propose an additional factor, the average relative standard deviation of the cell side lengths (CVSL), which takes the non-uniformity of the cells into account.

Jolanta Gronkowska-Serafin, Adam Piórkowski
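A factor of this kind can be sketched as below: the coefficient of variation (std/mean) of each cell's side lengths, averaged over all cells. The polygon representation and the plain averaging are assumptions for illustration, not the paper's exact definition:

```python
import numpy as np

def cv_of_side_lengths(polygon):
    """Coefficient of variation (std/mean) of one cell's side lengths."""
    pts = np.asarray(polygon, dtype=float)
    sides = np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1)
    return sides.std() / sides.mean()

def cvsl(cells):
    """Average side-length CV over all cells in the grid (sketch)."""
    return float(np.mean([cv_of_side_lengths(c) for c in cells]))

# A perfectly regular hexagon has CV = 0; a stretched one does not.
regular = [(np.cos(a), np.sin(a)) for a in np.linspace(0, 2*np.pi, 7)[:-1]]
stretched = [(2*x, y) for x, y in regular]
print(cvsl([regular]), cvsl([stretched]))
```

A regular grid therefore scores near zero, while deformed cells raise the factor even when their surface areas stay similar.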

A Distributed Approach for Development of Deformable Model-Based Segmentation Methods

This paper presents a distributed solution for the development of deformable model-based medical image segmentation methods. The design and implementation stages of the segmentation methods usually require a lot of time and resources, since the variations of the tested algorithms have to be constantly evaluated on many different data sets. To address this problem, we extended our web platform for development of deformable model-based methods with an ability to distribute the computational workload. The solution was implemented on a computing cluster of multi-core nodes with the use of the Java Parallel Processing Framework. The experimental results show significant speedup of the computations, especially in the case of resource-demanding three-dimensional methods.

Daniel Reska, Cezary Boldak, Marek Kretowski

Enhancement of Low-Dose CT Brain Scans Using Graph-Based Anisotropic Interpolation

This paper considers the problem of enhancement of low-dose CT images. These images are usually distorted by an artifact similar to ‘film grain’, which affects image quality and hinders image segmentation. The method introduced in this paper reduces the influence of the distortion by retrieving the pixel intensities under the ‘grains’ using graph-based anisotropic interpolation. Results of applying the introduced method to low-dose CT scans of hydrocephalic brains are presented and discussed. The influence of the introduced method on the accuracy of image segmentation is also analysed.

Tomasz Węgliński, Anna Fabijańska

Using of EM Algorithm to Image Reconstruction Problem with Tomography Noises

This paper describes an analytical iterative approach to the problem of image reconstruction from parallel projections using the Expectation Maximization (EM) algorithm. The experiments with noisy measurements have shown that the EM algorithm can deblur the reconstructed image. The achieved results confirm that the designed reconstruction procedure is able to reconstruct an image with better quality than the image obtained using the traditional back-projection algorithm.

Piotr Dobosz, Robert Cierniak
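The iterative EM idea can be illustrated with the classic ML-EM update for tomography; this is a generic sketch of that textbook scheme on a toy system matrix, not the authors' exact analytical formulation:

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Classic ML-EM iteration for tomographic reconstruction (sketch):
        x <- x * A^T(y / Ax) / A^T 1
    A is the system (projection) matrix, y the measured projections."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])               # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                               # forward projection
        x = x * (A.T @ (y / np.maximum(proj, 1e-12))) / np.maximum(sens, 1e-12)
    return x

# Toy system: 4 "pixels" observed by 6 random nonnegative projection rays
rng = np.random.default_rng(0)
A = rng.random((6, 4))
x_true = np.array([1.0, 0.5, 2.0, 0.0])
y = A @ x_true
x_rec = mlem(A, y)
print(np.round(x_rec, 2))
```

The multiplicative update keeps the estimate nonnegative by construction, which is one reason EM-style schemes behave well on noisy projection data.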

Preprocessing Using Maximal Autocovariance for Spatio–Temporal Track–Before–Detect Algorithm

The detection of local regular patterns and dependent values in a heavily noised signal is proposed in this paper. The moving window approach allows computation of the maximal autocovariance of the signal. The differences are emphasized using the Spatio-Temporal Track-Before-Detect algorithm, so tracking such objects is possible. The possibilities of this technique are shown and discussed in a few illustrative examples, with emphasis on the detection of weak signals hidden in the background noise.

Przemysław Mazurek

Which Color Space Should Be Chosen for Robust Color Image Retrieval Based on Mixture Modeling

As the amount of multimedia data captured and published on the Internet constantly grows, it is essential to develop efficient tools for modeling visual data similarity for browsing and searching in voluminous image databases. Among these methods are those based on compact image representation, such as mixture modeling of the color information conveyed by the images. These methods can be efficient and robust to possible distortions of color information caused by lossy coding. Moreover, they produce a compact image representation in the form of a vector of model parameters. Thus, they are well suited to the task of color image retrieval in large, heterogeneous databases. This paper focuses on the proper choice of the color space in which the modeling of lossy coded color image information, based on the mixture approximation of the chromaticity histogram, is evaluated. Retrieval results obtained when the RGB, I1I2I3, YUV, CIE XYZ, CIE L*a*b*, HSx, LSLM and TSL color spaces were employed are presented and discussed.

Maria Łuszczkiewicz-Piątek

The Perception of Humanoid Robot by Human

The article presents the results of experiments testing the acceptance level of a humanoid robot. We state that in most cases people have a tendency to anthropomorphize machines, especially humanoid ones. Acting like this is a result of a social bias so characteristic of our species. We conducted two experiments in which participants completed a poll in a room with a robot observer with active face and sound tracking implemented. By analysing the post-review questionnaire and time parameters from the video recordings, we could point out that in some cases participants observed robot behaviours which had not taken place, but which are quite natural for a human.

Rafał Stęgierski, Karol Kuczyński

A Fast Histogram Estimation Based on the Monte Carlo Method for Image Binarization

In the paper the idea of fast histogram estimation based on the application of the Monte Carlo method is proposed. The presented method can be useful for fast image binarization, especially in solutions with low computational resources, e.g. autonomous mobile robots. The proposed method has been compared with full image analysis, and the obtained estimates have been used for threshold determination and binarization using the well-known Otsu method.

Piotr Lech, Krzysztof Okarma, Mateusz Tecław
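The pipeline above pairs two standard ingredients and can be sketched as follows; the sample count and bin count are illustrative assumptions, while the Otsu step follows the classic between-class-variance formulation:

```python
import numpy as np

def mc_histogram(image, n_samples=2000, rng=None):
    """Estimate a 256-bin histogram from randomly drawn pixels instead of
    scanning the full image (Monte Carlo sketch; n_samples is illustrative)."""
    rng = rng or np.random.default_rng(0)
    flat = image.reshape(-1)
    sample = flat[rng.integers(0, flat.size, n_samples)]
    return np.bincount(sample, minlength=256)

def otsu_threshold(hist):
    """Classic Otsu threshold: maximize between-class variance over t."""
    p = hist / hist.sum()
    cum, cum_mean = np.cumsum(p), np.cumsum(p * np.arange(256))
    mean_total = cum_mean[-1]
    best_t, best_var = 0, -1.0
    for t in range(1, 255):
        w0, w1 = cum[t], 1 - cum[t]
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = cum_mean[t] / w0, (mean_total - cum_mean[t]) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Bimodal test image: dark background with a bright square
img = np.full((128, 128), 40, dtype=np.uint8)
img[32:96, 32:96] = 200
t = otsu_threshold(mc_histogram(img))
print(t)
```

Because Otsu only needs the histogram's shape, a few thousand random samples are usually enough to land on (nearly) the same threshold as a full scan, at a fraction of the cost.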

Adaptation of the Combined Image Similarity Index for Video Sequences

One of the most relevant areas of research in the image analysis domain is the development of automatic image quality assessment methods which should be consistent with human perception of various distortions. In recent years several metrics have been proposed, as well as their combinations, which lead to a highly linear correlation with subjective opinions. One of the recently proposed ideas is the Combined Image Similarity Index, a nonlinear combination of three metrics which outperforms most currently known ones on major image datasets. In this paper the applicability and extension of this metric for video quality assessment purposes is analysed, and the obtained performance results are compared with some other metrics using the video quality assessment database recently developed at École Polytechnique Fédérale de Lausanne and Politecnico di Milano for quality monitoring over IP networks, known as the EPFL-PoliMI dataset.

Krzysztof Okarma

Time-Frequency Analysis of Image Based on Stockwell Transform

The time-frequency representation (TFR) provides a powerful method for identifying the non-stationarity of signals. The paper describes the basic principle of the Stockwell Transform and an approach to texture image feature extraction based on the 2D discrete orthonormal Stockwell transform.

Ryszard S. Choraś

Real-Time Recognition of Selected Karate Techniques Using GDL Approach

This paper presents a new approach to the recognition and interpretation of several karate techniques using a specially defined Gesture Description Language (GDL). The novel contribution of this paper is the validation of our new semantic Gesture Description Language classifier on several basic karate techniques recorded with a set of Kinect devices. We also present a calibration procedure that enables integration of skeleton data from a set of tracking devices into one skeleton, which eliminates many segmentation and tracking errors. The data set for our research contains 350 recorded sequences of a qualified professional sports (black belt) instructor and master of Okinawa Shorin-ryu Karate; 83% of the recordings were correctly classified. The whole solution runs in real time and enables online and offline classification.

Tomasz Hachaj, Marek R. Ogiela, Marcin Piekarczyk

2DKLT-Based Color Image Watermarking

The paper presents a digital image watermarking algorithm realized by means of the two-dimensional Karhunen-Loeve Transform (2DKLT). The information embedding is performed in the two-dimensional spectrum of the KLT. The employed two-dimensional approach is superior to the standard, one-dimensional KLT, since it represents images respecting their spatial properties, resulting in lower noise and better adaptation to the image characteristics. The principles of 2DKLT are presented, as well as sample implementations and experiments performed on benchmark images. We propose a measure to evaluate the quality and robustness of the watermarking process. Finally, we present a set of experiments related to the color space, embedding variants and their parameters. The results show that the 2DKLT employed in the above application gives obvious advantages in comparison to certain standard algorithms, such as the DCT, FFT and wavelets.

Paweł Forczmański

Knowledge Transformations Applied in Image Classification Task

The main goal of the article is a presentation of the usage of knowledge transformation methods in an iterative scheme of ontology building. The presented approach tries to overcome vital problems of automatic ontology building, concerning especially the initial assumptions and the problems of learning. The article concentrates on the knowledge joining operation, the details of which are illustrated by the example of building the knowledge structure applied in a simple image recognition system. The proposed scheme is compared with other well-known approaches. The article points to the conditions of a successful usage of the described methodology, i.e. developing effective search algorithms for a suitable concept structure and efficient methods of distributing the main task of knowledge building.

Krzysztof Wójcik

Problems of Infrared and Visible-Light Images Automatic Registration

In this paper the problem of infrared and visible-light image registration is analysed. The authors propose a registration procedure that can be used for a wide range of applications. It is based on a B-spline transformation and the mutual information similarity measure.

Karol Kuczyński, Rafał Stęgierski
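Mutual information is the standard similarity measure for such multi-modal registration because it rewards statistical dependence rather than identical intensities. A minimal histogram-based estimate can be sketched as below; the bin count is an illustrative assumption:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images, estimated from their
    joint intensity histogram (a common registration similarity measure)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0                       # avoid log(0) on empty bins
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

rng = np.random.default_rng(1)
img = rng.random((64, 64))
# An intensity-remapped copy (as with IR vs visible modalities) still
# shares information with the original, while an unrelated image does not.
remapped = 1.0 - img ** 2
unrelated = rng.random((64, 64))
print(mutual_information(img, remapped), mutual_information(img, unrelated))
```

During registration, the B-spline transformation parameters would be optimized to maximize this quantity between the warped infrared image and the visible-light image.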

Computational Complexity Analysis of Adaptive Arithmetic Coding in HEVC Video Compression Standard

This paper presents a computational complexity analysis of adaptive arithmetic coding (CABAC) in the emerging HEVC video compression technology. In particular, the computational complexity of individual parts of CABAC (and of the whole CABAC entropy codec in the HEVC video decoder) was measured from the point of view of the video decoder. Experiments were done using the publicly available HM reference software of the HEVC video codec and a set of test video sequences. The range of bitrates that can be processed in real time by the CABAC entropy decoder was also evaluated for the implementation of CABAC considered in the paper.

Damian Karwowski

Object Detection and Segmentation Using Adaptive MeanShift Blob Tracking Algorithm and Graph Cuts Theory

In this paper, we present a method for the real-time detection, segmentation and tracking of different objects in video sequences. We propose a new approach based on blob tracking: a hybrid combination of tracking and detection, where the blob tracking uses a detection model based on two pieces of information, brightness and color. Our approach adds new properties to these blobs based on shape feature extraction, where we define several properties for efficient detection. These blobs represent the detected objects; the motion is estimated by non-parametric kernel density estimation, using the MeanShift algorithm to track the blobs. Segmentation is performed by the GraphCuts approach, which generates and updates a set of blobs in the sequence. Experimental results demonstrate that our method is robust on challenging data and presents many advantages over other approaches.

Boudhane Mohcine, Nsiri Benayad

Semantics Driven Table Understanding in Born-Digital Documents

This paper presents a new approach to table understanding, suitable for born-digital PDF documents. The advance beyond the current state of the art in table understanding is provided by the proposed reverse MVC method, which takes advantage of the only partial loss of logical structure (degradation) in born-digital PDF documents, as opposed to the unrecoverable loss (deterioration) taking place in scan-based PDF documents.

Jacek Siciarek

Contextual Possibilistic Knowledge Diffusion for Images Classification

In this study, an iterative contextual approach for image classification is proposed. This approach is based on the use of possibilistic reasoning in order to diffuse possibilistic knowledge. The use of possibilistic concepts enables important flexibility in integrating a context-based additional semantic knowledge source, formed by pixels belonging with high certainty to different semantic classes (called possibilistic seeds), into the available knowledge encoded by possibility distributions. The possibilistic seed extraction and classification process is conducted through the application of a possibilistic contextual rule using the confidence index as an uncertainty measure. Once possibilistic seeds are extracted and classified, the possibility distributions are updated and refined in order to diffuse the possibilistic knowledge. Synthetic and real images are used in order to evaluate the performance of the proposed approach.

B. Alsahwa, S. Almouahed, D. Guériot, B. Solaiman

An Efficient 2-D Recursive Inverse Algorithm for Image De-noising

In this paper, we propose a revised version of the recently proposed two-dimensional Recursive Inverse (2-D RI) adaptive algorithm. Instead of updating the filter coefficients along both the horizontal and vertical directions on the 2-D plane, as in the old 2-D RI algorithm, our revised algorithm performs the update process simultaneously for every element in the 2-D plane. Simulation results show that the proposed 2-D RI algorithm leads to improved performance compared to that of the 2-D RLS algorithm, and to similar performance to the original 2-D RI algorithm with reduced computational complexity.

Mohammad Shukri Salman, Alaa Eleyan

Image Feature Extraction Using Compressive Sensing

In this paper a new approach to image feature extraction is presented. We use the Compressive Sensing (CS) concept to generate the measurement matrix. The new measurement matrix differs from the measurement matrices in the literature, as it is constructed using both zero-mean and nonzero-mean rows. The image is simply projected into a new space using the measurement matrix to obtain the feature vector. Another proposed measurement matrix is a random matrix constructed from binary entries. The face recognition problem was used as an example for testing the feature extraction capability of the proposed matrices. Experiments were carried out using two well-known face databases, namely the ORL and FERET databases. System performance is very promising and comparable with classical baseline feature extraction algorithms.

Alaa Eleyan, Kivanc Kose, A. Enis Cetin
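The projection step can be sketched as below. The mix of Gaussian (zero-mean) and binary (nonzero-mean) rows and the feature dimension are illustrative assumptions, not the paper's exact matrix construction:

```python
import numpy as np

def cs_features(image, n_features=64, rng=None):
    """Project a vectorized image onto random measurement rows to obtain a
    compact feature vector (compressive-sensing-style sketch)."""
    rng = rng or np.random.default_rng(0)
    d = image.size
    gaussian_rows = rng.standard_normal((n_features // 2, d))   # zero-mean rows
    binary_rows = rng.integers(0, 2, (n_features // 2, d))      # nonzero-mean rows
    phi = np.vstack([gaussian_rows, binary_rows]) / np.sqrt(d)  # measurement matrix
    return phi @ image.reshape(-1)

face = np.random.default_rng(2).random((32, 32))
vec = cs_features(face)
print(vec.shape)
```

A 32x32 image is thus reduced to a 64-element vector, which a downstream classifier (e.g. nearest neighbour on a face database) can consume directly.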

A Modification of the Parallel Spline Interpolation Algorithms

An extension of a standard interpolation algorithm for image zooming is proposed. In the presented method the spline function arguments are modified, which makes the images sharper. An implementation on the Compute Unified Device Architecture makes this re-sampling very fast.

Michał Knas, Robert Cierniak

Multi-object Tracking System

The article describes a multi-object tracking system based on a new approach to object management after preprocessing and background modeling. The object manager determines the correlation between objects in the previous and current frame by matching features; for feature matching the algorithm uses a color histogram with a small number of bins. Each moving object extracted from the scene is assigned an individual and independent Kalman filter. The system stores information about the real positions of the objects extracted directly from image processing and keeps information about the centroids predicted by the Kalman filters.

Jacek Zawistowski, Piotr Garbat, Paweł Ziubiński
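The per-object filter described above can be sketched as a constant-velocity Kalman filter on the blob centroid; the noise covariances and unit time step are illustrative assumptions:

```python
import numpy as np

class CentroidKalman:
    """Constant-velocity Kalman filter for one tracked blob centroid
    (generic sketch of the per-object filter; parameters are illustrative)."""
    def __init__(self, x, y):
        self.s = np.array([x, y, 0.0, 0.0])                     # state: x, y, vx, vy
        self.P = np.eye(4) * 10.0                               # state covariance
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = 1.0   # motion model, dt = 1
        self.H = np.eye(2, 4)                                   # we observe x, y only
        self.Q = np.eye(4) * 0.01                               # process noise
        self.R = np.eye(2) * 1.0                                # measurement noise

    def predict(self):
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.s[:2]

    def update(self, z):
        y = np.asarray(z, float) - self.H @ self.s              # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)                # Kalman gain
        self.s = self.s + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

kf = CentroidKalman(0.0, 0.0)
for t in range(1, 20):                  # object moving at (2, 1) pixels per frame
    kf.predict()
    kf.update((2.0 * t, 1.0 * t))
pred = kf.predict()
print(np.round(pred, 1))
```

The predicted centroid is what the object manager would compare against blobs detected in the next frame when matching histograms.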

Local Eigen Background Substraction

The article describes an extended background modeling approach based on the well-known Eigenbackground method. The idea presented in the article expands the Eigenbackground method by breaking the scene into many smaller ones, modeling the background separately for each of its sections. This approach allows for better separation of the foreground objects and better modeling of spots in which the light changes. Furthermore, it also enables an efficient implementation of the algorithm on CUDA graphics cards, separating particular local models into threads. In the future the approach should also enable models to be updated more efficiently when no movement occurs locally in a given section.

Paweł Ziubiński, Piotr Garbat, Jacek Zawistowski
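The per-section idea can be sketched as below: each block keeps its own eigenbasis, and foreground is flagged by the distance of a new block from that subspace. The number of eigenvectors and the frame counts are illustrative assumptions:

```python
import numpy as np

def eigen_background(frames, block, n_eig=3):
    """Eigenbackground for one scene block: model the block's history with
    its top principal components, then measure how far a new frame's block
    falls from that subspace (per-block sketch of the local model)."""
    X = np.stack([f[block].ravel() for f in frames])   # history of this block
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:n_eig]                                 # top eigen-vectors
    def distance(frame):
        v = frame[block].ravel() - mean
        return float(np.linalg.norm(v - basis.T @ (basis @ v)))
    return distance

rng = np.random.default_rng(3)
bg = rng.random((32, 32))
history = [bg + 0.01 * rng.standard_normal((32, 32)) for _ in range(20)]
blk = (slice(0, 16), slice(0, 16))
dist = eigen_background(history, blk)

foreground = bg.copy()
foreground[4:12, 4:12] = 1.5                           # an object enters the block
print(dist(bg), dist(foreground))
```

Because every block owns an independent model, the per-block `distance` closures map naturally onto separate CUDA threads, which is the parallelization the abstract mentions.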

Polarization Imaging in 3D Shape Reconstruction

We report a new polarimetric imaging setup based on tunable liquid crystal components and its application in computer vision. The analysis of polarization parameters (polarimetric imaging) has been used in photometric 3D shape reconstruction. This research describes a novel approach to a 3D shape measurement system supported by polarization image analysis. Enhancement of fringe image quality is realized using a detector unit with a special liquid crystal filter.

Piotr Garbat, Jerzy Woźnicki

Background Subtraction and Movement Detection Algorithms in Dynamic Environments – An Evaluation for Automated Threat Recognition Systems

Background subtraction and movement detection are very popular subjects of investigation in the video processing domain. Despite the number of already proposed algorithms and methods, the question of the suitability of such algorithms in dynamically changing, realistic environments (e.g. parking lots) remains open. In this paper the authors compare three different implementations of saliency-based algorithms and a Gaussian Mixture Model algorithm for different cameras on a parking lot. The authors show that the matching of algorithms to the scene can be improved by managing semantic knowledge about the scene.

Adam Flizikowski, Mirosław Maszewski, Marcin Wachowiak, Grzegorz Taberski

Layer Image Components Geometry Optimization

Digital image representation can significantly limit the geometry of the layer components when converting it from its analog form (GERBER). In this case a single pixel representation is fixed to a given gray level, or state in the case of the black and white format. This process can introduce a dimensional deviation of the image components of up to 100% of the pixel size. The proposed method aims to contain this deviation and reduce it to the minimum reachable level. Results of experimental research are presented.

Adam Marchewka, Jarosław Zdrojewski

A Hybrid Image Compression Technique Using Neural Network and Vector Quantization With DCT

Image and video transmissions require particularly large bandwidth and storage space. Image compression technology is therefore essential to overcome these problems. Practically efficient compression systems based on hybrid coding, which combines the advantages of different methods of image coding, have also been developed over the years. In this paper, different hybrid approaches to image compression are discussed. Hybrid coding of images, in this research, deals with combining three approaches to enhance the individual methods and achieve better-quality reconstructed images with a higher compression ratio. A new hybrid neural-network, vector quantization and discrete cosine transform compression method is presented. This scheme combines the high compression ratio of neural networks (NN) and Vector Quantization (VQ) with the good energy-compaction property of the Discrete Cosine Transform (DCT). In order to increase the compression ratio while preserving decent reconstructed image quality, the image is first compressed using a neural network, then the hidden layer outputs are re-compressed using vector quantization (VQ), while the DCT is applied to the codebook blocks. Simulation results show the effectiveness of the proposed method. The performance of this method is compared with the available JPEG compression technique over a large number of images, showing good performance.

Mohamed El Zorkany

Gabor Wavelet Recognition Approach for Off-Line Handwritten Arabic Using Explicit Segmentation

This article proposes an unconstrained recognition approach for handwritten Arabic script. The approach starts by explicitly segmenting each word image into its constituent letters; then a filter bank of Gabor wavelet transforms is used to extract feature vectors corresponding to different scales and orientations in the segmented image. Classification is carried out by employing a support vector machine algorithm, where the IESK-arDB and IFN/ENIT databases are used for testing and evaluation of the proposed approach. A leave-one-out estimation strategy is followed to assess performance, and the results confirm the approach's efficiency.

Moftah Elzobi, Ayoub Al-Hamadi, Zaher Al Aghbari, Laslo Dings, Anwar Saeed
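Feature extraction with a Gabor filter bank can be sketched as below. The kernel size, wavelengths, orientations, and the aggregation into mean absolute responses are illustrative assumptions, simplified from the scale/orientation vectors the paper describes:

```python
import numpy as np

def gabor_kernel(size=15, theta=0.0, lam=6.0, sigma=3.0):
    """Real part of a Gabor filter at orientation theta and wavelength lam."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(patch, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4), lams=(4.0, 8.0)):
    """Aggregate absolute filter response per (orientation, scale) pair --
    a simplified stand-in for the paper's feature vectors."""
    feats = []
    for th in thetas:
        for lam in lams:
            k = gabor_kernel(theta=th, lam=lam)
            # valid-mode correlation via explicit loops, kept small for clarity
            resp = sum(
                abs(np.sum(patch[i:i+15, j:j+15] * k))
                for i in range(patch.shape[0] - 14)
                for j in range(patch.shape[1] - 14)
            )
            feats.append(resp)
    return np.array(feats)

strokes = np.zeros((24, 24)); strokes[:, ::4] = 1.0   # vertical stroke pattern
f = gabor_features(strokes)
print(f.shape)
```

Vertical strokes (common in Arabic letter segments) excite the 0-radian, matching-wavelength filter far more than the orthogonal one, which is exactly the orientation selectivity the classifier relies on.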

Construction of Sequential Classifier Based on MacArthur’s Overlapping Niches Model

This paper presents the problem of building a sequential model of the classification task. In our approach the structure of the model is built in the learning phase of classification. A split criterion based on MacArthur's overlapping niches model is proposed: the MacArthur's overlapping niches distribution is created for each row of the confusion matrix, and the split criterion is associated with the analysis of the received distributions. The obtained results were verified on ten data sets; nine come from the UCI repository and one is a real-life data set.

Robert Burduk, Paweł Trajdos

Extraction of Data from Limnigraf Chart Images

This article presents a system for data extraction from limnigraph chart images. The proposed system is composed of five main modules: extraction of axes and grid, text detection, graph vectorization, calibration, and data reading using image content analysis algorithms. In the paper the fundamental characteristics of the system are presented, including a simplified scheme of the system modules.

Adam Marchewka, Rafał Pasela

A Simplified Visual Cortex Model for Efficient Image Coding and Object Recognition

In this article a simplified model of biologically inspired mechanisms for object recognition is presented. The proposed approach is based on the HMAX hierarchical cortex model proposed by Riesenhuber and Poggio [1] and later extended by Serre et al. [2]. The work described in this paper is an extension of previous research [3, 4, 5, 6] focused on computer vision software (named SMAS - Stereovision Mobility Aid System) dedicated to visually impaired persons. Therefore, the emphasis here is put on a one-class detection problem of dangerous objects, with the possibility of a future deployment of the proposed solution on a mobile device. The conducted experiments show that the introduced modifications of the hierarchical HMAX model allow for efficient feature extraction and visual information coding without decreasing the effectiveness of the object detection process.

Rafał Kozik

Communications

Frontmatter

Network Structures Constructed on Basis of Chordal Rings 4th Degree

In this paper, an analysis of the properties of modified chordal ring 4th degree topologies is presented. Two special types of these structures, namely optimal and ideal graphs, have been defined. The basic parameters (diameter and average path length) were calculated and described by approximate formulas, which make it possible to evaluate these parameters for any modified graph proposed by the authors. This gives the possibility of modeling the properties of these networks without requiring any specific path calculation between pairs of nodes, which may be time and resource consuming for large-scale systems. In the last part of the paper, a comparison of the analysed structures and a reference graph is carried out.

Damian Ledziński, Sławomir Bujnowski, Tomasz Marciniak, Jens Myrup Pedersen, José Gutierrez Lopez
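The two parameters in question can be computed exactly on small instances by breadth-first search, which is the baseline the approximate formulas replace. The sketch below uses one simple degree-4 chordal ring variant (ring edges plus a fixed-length chord from every node); the paper's modified topologies may differ:

```python
from collections import deque

def chordal_ring_4(n, chord):
    """Adjacency of a simple degree-4 chordal ring: ring edges plus a
    chord of fixed length from every node (illustrative variant)."""
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for u in ((v + 1) % n, (v - 1) % n, (v + chord) % n, (v - chord) % n):
            adj[v].add(u)
    return adj

def diameter_and_average_path(adj):
    """Exact diameter and average shortest-path length via BFS from every node."""
    total, diam, n = 0, 0, len(adj)
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            v = q.popleft()
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    q.append(u)
        total += sum(dist.values())
        diam = max(diam, max(dist.values()))
    return diam, total / (n * (n - 1))

d, avg = diameter_and_average_path(chordal_ring_4(16, 5))
print(d, avg)
```

The all-pairs BFS cost is what makes exact evaluation impractical at scale, motivating the closed-form approximations the paper derives.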

Fuzzy Rule-Based Systems for Optimizing Power Consumption in Data Centers

One of the most important aspects of cloud computing is infrastructure as a service (IaaS). In the basic cloud service model, providers offer virtual machines and solutions based on virtualization. A user pays for the consumption of resources (disk space, virtual local area networks, etc.). A data center is a facility used to house computer systems to provide IaaS. Large data centers consume a lot of electricity (high power consumption) and are a source of environmental pollution and costs, so it is important to improve their performance. In this paper a fuzzy rule-based system is proposed to schedule virtual machines in a data center based on Green Computing concepts: minimum power consumption is considered as the performance index. This approach is compared to classic scheduling algorithms from the literature.

Moad Seddiki, Rocío Pérez de Prado, José Enrique Munoz-Expósito, Sebastián García-Galán

A Scalable Distributed Architecture for Emulating Unreliable Networks for Testing Multimedia Systems

This paper presents a software-based approach to emulating unreliable WAN networks in a LAN environment, without interfering in the configuration of the latter. A program only has to be installed on the computers which host the multimedia system to be tested; it intercepts outgoing packets and forwards them to an emulation proxy, where, in accordance with a connection model, they are rejected or delayed before being passed on to the destination computer. The proxy collects packet header data, supplemented with timestamps, and sends them to a warehouse server which stores the report about the network traffic of the tested application. By analyzing such reports and observing how programs react to packet losses and delays, multimedia systems can be evaluated for correctness, performance, and tolerance to network failures. Using the Java and C programming languages, a prototype of such an emulation architecture has been implemented together with GUI-based tools for modeling connections, supervising experiments, and analyzing traffic reports.

Marek Parfieniuk, Tomasz Łukaszuk, Tomasz Grześ

An Algorithm for Finding Shortest Path Tree Using Ant Colony Optimization Metaheuristic

This paper introduces the ShortestPathTreeACO algorithm designed for finding near-optimal and optimal solutions to the shortest path tree problem. The algorithm is based on the Ant Colony Optimization metaheuristic, and it is therefore of significant importance to choose proper operation parameters that guarantee results of the required quality. The operation of the algorithm is explained in relation to the pseudocode introduced in the paper. An exemplary execution of the algorithm is depicted and discussed on a step-by-step basis. Experiments carried out within a custom-made experimental framework are the source of suggestions concerning the parameter values. The influence of the choice of the number of ants and the pheromone evaporation speed is investigated. The quality of generated solutions is addressed, as well as the issues of execution time.

Mariusz Głąbowski, Bartosz Musznicki, Przemysław Nowak, Piotr Zwierzykowski
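The role of the ant count and evaporation rate is easiest to see in code. The sketch below is a tiny generic ACO for a single shortest path (the paper builds a whole shortest path tree; one destination keeps the illustration short), with the usual pheromone/heuristic weighting; all parameter values are illustrative, not the paper's recommendations:

```python
import random

def aco_shortest_path(graph, src, dst, n_ants=20, n_iter=50,
                      alpha=1.0, beta=2.0, rho=0.5, q=1.0, seed=0):
    """Generic ACO sketch for one shortest path. graph: {u: {v: weight}}.
    alpha/beta weight pheromone vs inverse edge cost; rho is evaporation."""
    rng = random.Random(seed)
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}   # pheromone per edge
    best_path, best_cost = None, float("inf")
    for _ in range(n_iter):
        walks = []
        for _ in range(n_ants):
            node, path, cost = src, [src], 0.0
            while node != dst:
                choices = [v for v in graph[node] if v not in path]
                if not choices:
                    break                                   # dead end: abandon ant
                weights = [tau[(node, v)] ** alpha * (1.0 / graph[node][v]) ** beta
                           for v in choices]
                node = rng.choices(choices, weights)[0]
                cost += graph[path[-1]][node]
                path.append(node)
            if node == dst:
                walks.append((path, cost))
                if cost < best_cost:
                    best_path, best_cost = path, cost
        for key in tau:                                     # evaporation
            tau[key] *= (1 - rho)
        for path, cost in walks:                            # deposit on used edges
            for e in zip(path, path[1:]):
                tau[e] += q / cost
    return best_path, best_cost

g = {0: {1: 1, 2: 4}, 1: {0: 1, 2: 1, 3: 5}, 2: {0: 4, 1: 1, 3: 1},
     3: {1: 5, 2: 1, 4: 1}, 4: {3: 1}}
path, cost = aco_shortest_path(g, 0, 4)
print(path, cost)
```

Raising `rho` makes old trails fade faster (more exploration); more ants per iteration sample more paths before each pheromone update, the two trade-offs the paper's experiments quantify.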

A Sensor Network Gateway for IP Network and SQL Database

This article presents the concept of building an access gateway between a sensor network and the Internet. The sensor network can be built on any standard, wired or wireless (ZigBee, Bluetooth or Wi-Fi). Data transmission is handled by the higher layers of the ISO/OSI model, with local caching and data saving on an SQL server. The developed gateway has to ensure proper data transfer under different conditions concerning bitrate and latency. The developed solution supports a variety of operating systems.

Piotr Lech

Multi-stage Switching Networks with Overflow Links for a Single Call Class

This article proposes a new analytical model of a multi-stage switching network with a system of overflow links in the first stage of the network. The initial assumption in the study was that the system of overflow links would be used by one class of calls. The article presents the dependencies between the internal blocking probability and the capacity of the overflow link. The results of the analytical calculations are then compared with the results of simulations of multi-stage switching networks. The present study has confirmed the fair accuracy of the proposed method and proved the validity of implementing overflow links in switching networks.

Mariusz Głąbowski, Michał Dominik Stasiak
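Blocking analyses of this kind are typically built on top of the classical Erlang B formula; the sketch below shows that standard building block and how overflow traffic is commonly derived from it. This is textbook teletraffic material for illustration, not the authors' multi-stage model:

```python
def erlang_b(traffic, servers):
    """Recursive Erlang B blocking probability:
    B(0) = 1,  B(n) = a*B(n-1) / (n + a*B(n-1))."""
    b = 1.0
    for n in range(1, servers + 1):
        b = traffic * b / (n + traffic * b)
    return b

# Traffic blocked on a primary group overflows to an overflow link
# (numbers are illustrative only).
a = 8.0                                  # offered traffic in Erlangs
primary_blocking = erlang_b(a, 10)       # blocking on a 10-server primary group
overflow_traffic = a * primary_blocking  # mean traffic offered to the overflow link
print(round(primary_blocking, 3), round(overflow_traffic, 3))
```

Adding capacity to the overflow link then lowers the end-to-end blocking seen by the single call class, which is the dependency the article's model captures analytically.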

Comparison of Different Path Composition Approaches to Multicast Optimization

In this paper, different algorithms based on different interpretations of path composition are evaluated and compared. A new technique, the Aggregated MLARAC, is proposed and described. Two different ways of evaluating algorithm performance are utilized in order to present the different aspects of the considered algorithms.

Krzysztof Stachowiak, Piotr Zwierzykowski

A New Testing Method of Routing Protocols for Wireless Mesh Networks

The paper presents a new method of testing routing protocols for wireless mesh networks. This method combines the advantages of tests carried out on physical devices and in a simulation environment. The presented solution uses open source software. An important advantage of the proposed method is its scalability and the fact that two dedicated implementations, one for testbed experiments and one for simulation experiments, are not needed: it is enough to implement a single universal solution for performing both types of tests. This advantage makes the presented solution worth using when testing new systems.

Adam Kaliszan, Mariusz Głąbowski, Sławomir Hanczewski

Performance Evaluation of Multicast Routing Algorithms with Fuzzy Sets

The paper presents a proposal of a new methodology for evaluating multicast routing algorithms in packet-switched networks with an application of fuzzy sets. The proposed multicriteria mechanism evaluates representative multicast routing algorithms: KPP, CSPT and MLRA (Multicast Routing Algorithm with Lagrange Relaxation), which minimize the cost of the paths between the source and each destination node using Lagrange relaxation and, finally, minimize the total cost of the multicast tree. A wide range of simulation research carried out by the authors confirmed both the accuracy of the new methodology and the effectiveness of the proposed algorithm.

Maciej Piechowiak, Piotr Prokopowicz

Performance of IGMP Protocol on Embedded Systems with OpenWRT

Effective use of multicast technology can significantly reduce network load, especially in networks that support multimedia transmission. The technique has seen a number of implementations due to the increasing interest in audiovisual technology and the capabilities of modern networks. However, multicast is still a mystery to many programmers, network administrators and ordinary users who could make it more popular and utilize this promising transmission technique [1, 2, 11]. The article discusses IP Multicast technology and focuses on a study of IGMP on different hardware platforms: the Cisco 2611 router and the ASUS WL500g Premium (running the OpenWRT Linux operating system).
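The IGMP signalling the chapter measures is normally triggered by applications joining a group, not by crafting IGMP packets directly. The minimal sketch below (using the standard Python socket API; the group address and port are arbitrary examples) shows the join that makes the host's kernel emit an IGMP Membership Report, which IGMP-aware routers such as those studied use to build forwarding state:

```python
import socket
import struct

GROUP = "239.1.2.3"   # example administratively-scoped multicast group
PORT = 5007           # arbitrary example port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# group address + local interface (0.0.0.0 = let the kernel choose)
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
try:
    # Joining the group causes the kernel to send an IGMP Membership Report.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    joined = True
except OSError:
    joined = False  # e.g. no multicast-capable interface available
print("joined:", joined)
sock.close()
```

After this call, datagrams sent to 239.1.2.3:5007 on the local segment would be delivered to the socket via `sock.recvfrom()`.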

Maciej Piechowiak, Michał Chmielewski, Paweł Szyszkowski

The Threat of Digital Hacker Sabotage to Critical Infrastructures

In this paper, we analyze the threat of digital sabotage, specifically Denial of Service (DoS) attacks, to critical infrastructures such as power plants, Intelligent Transportation Systems, and airports. We compare the profile of critical infrastructure installations to known attacker profiles to establish which attackers are most likely to pose a threat, thereby creating a more precise threat picture that helps prioritize digital defence efforts in critical infrastructure. The main contribution of the paper is the identification of the hacker categories most likely to attack critical infrastructures. Together with the profiles of the hacker categories, this can be used to identify appropriate countermeasures against potential attacks.

Sara Ligaard Norgaard Hald, Jens Myrup Pedersen

Machine Learning Techniques for Cyber Attacks Detection

The increased usage of cloud services, the growing number of users, changes in the network infrastructure that connects devices running mobile operating systems, and constantly evolving network technology pose novel challenges for cyber security that were never foreseen before. As a result, to counter the arising threats, network security mechanisms, sensors and protection schemes also have to evolve in order to address the needs and problems of today's users.

Rafał Kozik, Michał Choraś

Genetic Algorithm for Scheduling Routes in Public Transport

In this paper a genetic algorithm for scheduling routes in public transport is presented. It combines bus, light rail and metro, with access to other sea and air communication nodes. The results are compared with Dijkstra's shortest path routing algorithm, optimizing both the distance and the generation of greenhouse gases such as CO2. The proposed algorithm has a computational cost advantage over shortest path algorithms.
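The Dijkstra baseline against which the genetic algorithm is compared can be sketched briefly. The graph, edge weights, and the combined distance/CO2 cost below are hypothetical illustrations, not the authors' data or weighting:

```python
import heapq

def dijkstra(graph, source):
    """Least-cost distances from source; graph: node -> [(neighbor, weight), ...]."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical transit graph; edge weight = distance_km + ALPHA * co2_kg,
# folding the two optimization criteria into a single scalar cost.
ALPHA = 2.0
edges = {
    "A": [("B", 1.0 + ALPHA * 0.1), ("C", 4.0 + ALPHA * 0.05)],
    "B": [("C", 1.0 + ALPHA * 0.1)],
}
print(dijkstra(edges, "A"))
```

A genetic algorithm explores whole route schedules at once rather than expanding one node at a time, which is the source of the computational advantage the abstract claims on larger multi-modal networks.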

Maria de los Angeles Sáez Blázquez, Sebastián García-Galán, José Enrique Munoz-Expósito, Rocío Pérez de Prado

Routing and Spectrum Assignment in Spectrum Flexible Transparent Optical Networks

In this paper a heuristic algorithm for the selection of lightpaths in flexible transparent optical networks is presented. The considered problem of routing and spectrum assignment (RSA) takes into consideration minimizing the transmission distance under spectrum continuity constraints and the relationship between the traffic bitrates and the spectrum bandwidth. The proposed algorithm constitutes a modification of the well-known Dijkstra's algorithm. The obtained results show that a significant reduction in the number of rejected requests can be achieved by an appropriate selection scheme for spectrum segments in the aggregated spectrum of the path.
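The chapter's Dijkstra modification is not detailed in the abstract. As a hedged sketch of the spectrum-assignment half of the RSA problem only, the toy first-fit search below (link names, slot grids, and demand sizes are invented for illustration) shows the continuity constraint: the same contiguous slot block must be free on every link of the chosen path, otherwise the request is rejected:

```python
def first_fit(link_slots, path, demand_slots):
    """Return the lowest start index of a slot block of size demand_slots
    that is free on every link of path (spectrum continuity), else None."""
    total = len(next(iter(link_slots.values())))
    for start in range(total - demand_slots + 1):
        block = range(start, start + demand_slots)
        if all(all(link_slots[link][s] == 0 for s in block) for link in path):
            return start
    return None  # request rejected: no common free block on the path

# Toy example: two links with 8 spectrum slots each (0 = free, 1 = occupied).
slots = {"L1": [1, 1, 0, 0, 0, 0, 1, 0],
         "L2": [0, 1, 1, 0, 0, 0, 0, 0]}
print(first_fit(slots, ["L1", "L2"], 2))
```

A two-slot demand lands in the first block free on both links at once; note that rejections caused by fragmentation like this are exactly what a good selection scheme for spectrum segments reduces.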

Ireneusz Olszewski

Backmatter
