
2014 | Book

Innovations in Bio-inspired Computing and Applications

Proceedings of the 4th International Conference on Innovations in Bio-Inspired Computing and Applications, IBICA 2013, August 22-24, 2013, Ostrava, Czech Republic


About this book

This volume of Advances in Intelligent Systems and Computing contains the accepted papers presented at IBICA 2013, the 4th International Conference on Innovations in Bio-inspired Computing and Applications. The aim of IBICA 2013 was to provide a platform for world research leaders and practitioners to discuss the full spectrum of current theoretical developments, emerging technologies, and innovative applications of bio-inspired computing. Bio-inspired computing is currently one of the most exciting research areas, and it continuously demonstrates exceptional strength in solving complex real-life problems. The main driving force of the conference is to further explore the intriguing potential of bio-inspired computing. IBICA 2013 was held in Ostrava, Czech Republic, and hosted by the VSB - Technical University of Ostrava.

Table of contents

Frontmatter
Power Output Models of Ordinary Differential Equations by Polynomial and Recurrent Neural Networks
Abstract
The production of renewable energy sources is unstable, as it is influenced by the weather. Photovoltaic power plant output depends primarily on the solar illuminance of a locality, which can be predicted from meteorological forecasts (Aladin). Wind charger power output is induced mainly by the current wind speed, which depends on several weather conditions. The presented time-series neural network models can describe otherwise incomputable functions of power output, or of quantities that directly influence it. The differential polynomial neural network is a new neural network type that makes use of relations between data, not only the absolute interval values of variables as conventional artificial neural networks do. Its output is formed by a sum of fractional derivative terms that substitute for a general differential equation defining the system model. When applied to time-series data, an ordinary differential equation with time derivatives is created. The recurrent neural network proved to form simple, solid time-series models that can replace the ordinary differential equation description.
Ladislav Zjavka, Václav Snášel
An Experimental Analysis of Reservoir Parameters of the Echo State Queueing Network Model
Abstract
In recent years, there has been a growing interest in the Reservoir Computing (RC) paradigm. Recently, a new RC model was presented under the name of Echo State Queueing Networks (ESQN). This model merges ideas from Queueing Theory and one of the two pioneering RC techniques, Echo State Networks. In an RC model there is a dynamical system called the reservoir, which serves to expand the input data into a larger space. This expansion can enhance the linear separability of the data. In the case of ESQN, the reservoir is a Recurrent Neural Network composed of spiking neurons which fire positive and negative signals. Unlike other RC models, an analysis of the dynamical behavior of the ESQN system is still to be done. In this work, we present an experimental analysis of these dynamics. In particular, we study the impact of the spectral radius of the reservoir on system stability. In our experiments, we use a range of benchmark time series data.
Sebastián Basterrech, Václav Snášel, Gerardo Rubino
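To make the role of the spectral radius concrete, the following minimal Python sketch scales a random reservoir matrix to a chosen spectral radius and runs a generic Echo State Network update. It illustrates the Reservoir Computing mechanism discussed above, not the ESQN's spiking queueing dynamics; all sizes and inputs are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_units, spectral_radius):
    W = rng.uniform(-1.0, 1.0, size=(n_units, n_units))
    rho = np.max(np.abs(np.linalg.eigvals(W)))  # current spectral radius
    return W * (spectral_radius / rho)          # rescale to the target value

def reservoir_step(W, W_in, x, u):
    # One state update: the reservoir expands input u into a larger space.
    return np.tanh(W @ x + W_in @ u)

W = make_reservoir(100, spectral_radius=0.9)    # < 1 favors stable dynamics
W_in = rng.uniform(-0.5, 0.5, size=(100, 1))
x = np.zeros(100)
for u in np.sin(np.linspace(0, 6, 50)):         # toy input time series
    x = reservoir_step(W, W_in, x, np.array([u]))
```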
Measuring Phenotypic Structural Complexity of Artificial Cellular Organisms
Approximation of Kolmogorov Complexity with Lempel-Ziv Compression
Abstract
Artificial multi-cellular organisms develop from a single zygote into different structures and shapes, some simple, some complex. Such phenotypic structural complexity is the result of morphogenesis, where cells grow and differentiate according to the information encoded in the genome. In this paper we investigate the structural complexity of artificial cellular organisms at the phenotypic level, in order to understand whether genome information could be used to predict the emergent structural complexity. Our measure of structural complexity is based on the theory of Kolmogorov complexity and its approximations. We relate the Lambda parameter, with its ability to detect different behavioral regimes, to the calculated structural complexity. It is shown that the easily computable Lempel-Ziv complexity approximation has a good ability to discriminate emergent structural complexity, thus providing a measurement that can be related to a genome parameter for estimating the developed organism's phenotypic complexity. The experimental model used herein is based on 1D, 2D and 3D Cellular Automata.
Stefano Nichele, Gunnar Tufte
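As a rough illustration of the complexity measure described above, the following Python sketch counts phrases in an LZ78-style parsing, one common way to approximate Kolmogorov complexity by Lempel-Ziv compressibility (the paper's exact LZ variant may differ).

```python
def lz_complexity(s):
    """Count the distinct phrases in an LZ78-style parsing of string s."""
    phrases, current = set(), ""
    for ch in s:
        current += ch
        if current not in phrases:   # new phrase discovered
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)

# A uniform CA state compresses far better than a disordered one.
print(lz_complexity("0" * 64))               # low complexity
print(lz_complexity("0110101100010111" * 4)) # higher complexity
```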
Fuzzy Rules and SVM Approach to the Estimation of Use Case Parameters
Abstract
Many decisions needed for planning a software development project are based on the previous experience and competency of the project manager. One of the most important questions is how much effort will be necessary to complete a task. In our case, the task is described by a use case and the manager has to estimate the effort to implement it. However, such estimations are not always correct, and unplanned extra work sometimes has to be done. Our intent is to support the manager's decision with an estimation tool that uses known parameters of the use cases to predict the other parameters that have to be estimated. This paper focuses on applying our method to real data and evaluates its results in real development. The method uses a parameterized use case model, trained on previously completed use cases, to predict the extra work parameter. Estimation of test use cases is performed several times during project execution, according to the manager's needs.
Svatopluk Štolfa, Jakub Štolfa, Pavel Krömer, Ondřej Koběrský, Martin Kopka, Václav Snášel
Multi Objective Optimization Strategy Suitable for Virtual Cells as a Service
Abstract
Performance guarantees and management complexity are critical issues in delivering the next generation infrastructure as a service (IaaS) cloud computing model. This is normally attributed to the current size of the datacenters built to enable cloud services. A promising approach to handling these issues is to offer IaaS from a subset of the datacenter as a biologically inspired virtual service cell. However, this approach requires effective strategies to ensure efficient use of datacenter resources while maintaining high performance and functionality of the service cells. We present a multi-objective and multi-constraint optimization (MOMCO) strategy based on a genetic algorithm for the problem of resource placement and utilization suitable for the virtual service cell model. We apply NSGA-II with various crossover strategies and population sizes to test our optimization strategy. Results obtained from our simulation experiments show a significant improvement in acceptance rate over non-optimized solutions.
Ibrahim Kabiru Musa, Walker Stuart
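The core of any NSGA-II-style strategy is the Pareto dominance test used to rank candidate resource placements. Below is a minimal sketch, assuming two minimized objectives (say, cost and a performance penalty, both invented here); NSGA-II additionally uses fast non-dominated sorting and crowding distance, which are not shown.

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): no worse anywhere, better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(front):
    """Keep only solutions not dominated by any other member of the front."""
    return [p for p in front if not any(dominates(q, p) for q in front if q != p)]

placements = [(3, 9), (4, 5), (6, 4), (7, 7)]   # (cost, performance penalty)
print(non_dominated(placements))                # -> [(3, 9), (4, 5), (6, 4)]
```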
An Application of Process Mining to Invoice Verification Process in SAP
Abstract
There are many processes in companies that are enacted many times every day. A key issue for company management is to control the company cash flow and to optimize the cost of everyday operations. There are many ways to support process enactment, but in the end, once data from process usage are available, an analysis of efficiency is needed. One way to analyze the process and its data effectively is to use process mining methods. In this paper, we present the application of process mining methods to a real invoicing process and show the possible impact of the results on process and organizational improvement.
Jakub Štolfa, Martin Kopka, Svatopluk Štolfa, Ondřej Koběrský, Václav Snášel
Emergent Induction of Deterministic Context-Free L-system Grammar
Abstract
An L-system is a bio-inspired computational model capturing the growth process of plants. This paper proposes a new noise-tolerant grammatical induction method, LGIC2, for deterministic context-free L-systems. LGIC2 induces L-system grammars from a transmuted string mY, employing an emergent approach in order to enforce noise tolerance. In the method, frequently appearing substrings are extracted from mY to form grammar candidates. A grammar candidate is used to generate a string Z; however, the number of grammar candidates gets huge, entailing an enormous computational cost. Thus, how to prune grammar candidates is vital. We introduce several techniques: pruning by frequency, pruning by goodness of fit, and pruning by contractive embedding. Finally, the candidates with the strongest similarity between mY and Z are selected as the final solutions. Our experiments using insertion-type transmutation showed that LGIC2 performed very well, much better than the enumerative method LGIC1.
Ryohei Nakano
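For readers unfamiliar with the model being induced, a deterministic context-free (D0L) L-system is simply a set of rewriting rules applied in parallel. A minimal sketch follows; the grammar induction itself (LGIC2) works in the opposite direction, recovering such rules from possibly transmuted strings.

```python
def derive(axiom, rules, steps):
    """Apply a deterministic context-free (D0L) L-system for `steps` rewrites."""
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)  # parallel rewriting
    return s

# Lindenmayer's classic algae system: A -> AB, B -> A.
print(derive("A", {"A": "AB", "B": "A"}, 5))        # -> "ABAABABAABAAB"
```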
Double Expert System for Monitoring and Re-adaptation of PID Controllers
Abstract
Monitoring systems for deciding whether or how to re-adapt a PID controller are not hard to find in the literature, and such systems are widely used in industry. However, a monitoring system based on non-conventional decision methods, which takes non-numeric terms into account and is open to the addition of further rules, is not so common. The presented monitoring is designed for second-order systems and is performed by a fuzzy expert system of Mamdani type with two inputs: the settling time compared with the previous settling time (relative settling time) and the overshoot. It is supplemented by a non-conventional method for designing the classical PID controller, so it can be called a double expert system for monitoring and subsequent re-adaptation of a classical PID controller. The efficiency of the proposed method is demonstrated by a numerical experiment simulated in the Matlab-Simulink software environment.
Jana Nowaková, Miroslav Pokorný
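A compact Python illustration of Mamdani-type inference with the paper's two inputs, relative settling time and overshoot. The membership functions and the single rule below are invented placeholders, not the authors' rule base; the sketch only shows the fuzzify / min-AND / centroid-defuzzify mechanics.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def infer(rel_settling, overshoot):
    # Fuzzification with hypothetical linguistic terms.
    st_worse = tri(rel_settling, 1.0, 1.5, 2.5)  # settling time got worse
    ov_high = tri(overshoot, 0.2, 0.5, 1.0)      # overshoot is high
    # Rule: IF settling worse AND overshoot high THEN re-adapt strongly.
    fire = min(st_worse, ov_high)                # Mamdani min for AND
    # Centroid defuzzification of the clipped output set on [0, 1].
    y = np.linspace(0, 1, 101)
    mu = np.minimum(fire, [tri(v, 0.4, 0.8, 1.2) for v in y])
    return np.sum(mu * y) / (np.sum(mu) + 1e-12)  # degree of re-adaptation

print(infer(rel_settling=1.6, overshoot=0.45))
```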
Multiplier System in the Tile Assembly Model with Reduced Tileset-Size
Abstract
Previously, a 28-tile multiplier system which computes the product of two numbers was proposed by Brun; however, its tileset size is not optimal. In this paper we prove that multiplication can be carried out using fewer tile types while maintaining the same time efficiency: we propose two new tile assembly systems, both of which can deterministically compute A*B for given A and B in constant time. Our first system requires 24 computational tile types and our second system requires 16 tile types, achieving smaller constants than Brun's 28-tile multiplier system.
Xiwen Fang, Xuejia Lai
A Bézier Curve-Based Approach for Path Planning in Robot Soccer
Abstract
This paper presents an efficient Bézier curve-based path planning approach for robot soccer, which combines path planning, obstacle avoidance, path smoothing, and posture adjustment. The locations of obstacles are treated as control points of a Bézier curve; then, according to the velocity and orientation at the end points, a smooth curvilinear path can be planned in real time. To reach the target rapidly, it is necessary to decrease the turning radius, so a new curve construction is proposed to optimize the shape of the Bézier path.
Jie Wu, Václav Snášel
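The planning step rests on ordinary Bézier curve evaluation. A minimal sketch using De Casteljau's algorithm, with hypothetical start, obstacle-derived, and goal control points:

```python
def bezier_point(control_points, t):
    """Evaluate a Bézier curve at t in [0, 1] via De Casteljau's algorithm."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        # Repeated linear interpolation between consecutive control points.
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Start, two obstacle-derived control points, goal (toy coordinates).
control = [(0, 0), (1, 2), (3, 2), (4, 0)]
path = [bezier_point(control, t / 20) for t in range(21)]  # sampled smooth path
```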
Adaptively Nearest Feature Point Classifier for Face Recognition
Abstract
In this paper, an improved classifier based on the concept of feature line space, called the adaptive nearest feature point (ANFP) classifier, is proposed for face recognition. The ANFP classifier uses a new metric, the adaptive feature point metric, which differs from the metrics of NFL and other classifiers. ANFP achieves better performance than the NFL classifier and several other classifiers based on feature line space, as demonstrated by experimental results on the Yale face database.
Qingxiang Feng, Jeng-Shyang Pan, Lijun Yan, Tien-Szu Pan
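For context, the classical nearest feature line (NFL) metric that ANFP refines measures the distance from a query to the line spanned by two prototypes of a class. Below is a sketch of that baseline metric only; the paper's adaptive variant (for instance, how the projection parameter is constrained) is not reproduced here.

```python
import numpy as np

def feature_line_distance(x, p1, p2):
    """Distance from query x to the feature line through prototypes p1, p2."""
    d = p2 - p1
    t = np.dot(x - p1, d) / np.dot(d, d)      # projection parameter on the line
    return np.linalg.norm(x - (p1 + t * d)), t

# Toy check: line along the x-axis, query one unit above its midpoint.
x, p1, p2 = np.array([1.0, 1.0]), np.zeros(2), np.array([2.0, 0.0])
dist, t = feature_line_distance(x, p1, p2)    # dist = 1.0, t = 0.5
```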
Comparison of Classification Algorithms for Physical Activity Recognition
Abstract
The main aim of this work is to compare different algorithms for human physical activity recognition from accelerometer and gyroscope data recorded by a smartphone. Three classification algorithms were compared: Linear Discriminant Analysis, Random Forest, and K-Nearest Neighbours. For better classification performance, two feature extraction methods were tested: the Correlation Subset Evaluation method and Principal Component Analysis. The results of the experiment are expressed by confusion matrices.
Tomáš Peterek, Marek Penhaker, Petr Gajdoš, Pavel Dohnálek
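A sketch of such a comparison protocol on placeholder data, producing confusion matrices as in the paper; the real smartphone accelerometer/gyroscope features would replace X and y.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.random((300, 20))                       # toy feature vectors
y = rng.integers(0, 6, size=300)                # 6 hypothetical activity classes

for clf in (LinearDiscriminantAnalysis(), RandomForestClassifier(),
            KNeighborsClassifier()):
    pred = cross_val_predict(clf, X, y, cv=5)   # cross-validated predictions
    print(type(clf).__name__)
    print(confusion_matrix(y, pred))            # results as a confusion matrix
```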
Modular Algorithm in Tile Self-assembly Model
Abstract
In this paper we propose a system computing A mod B for a given n_A-bit binary integer A and n_B-bit binary integer B, which is the first system to solve the modulus problem directly in the tile assembly model. The worst-case assembly time of our system is Θ(n_A(n_A − n_B)) and the best-case assembly time is Θ(n_A).
Although the pre-existing division system which computes A/B can also be used to compute A mod B, its assembly time is not ideal in some cases. Compared with the pre-existing division system, our system achieves improved time complexity. Our advantage is more significant when n_A is much greater than n_B.
Xiwen Fang, Xuejia Lai
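The arithmetic realized by the tiles can be pictured as binary shift-and-subtract, which also makes the stated bounds plausible: up to n_A − n_B aligned subtractions over n_A-bit words. The Python sketch below illustrates the arithmetic only, not the tile system itself.

```python
def binary_mod(A, B):
    """Compute A mod B by shift-and-subtract, mirroring binary long division."""
    assert B > 0
    shift = max(A.bit_length() - B.bit_length(), 0)
    for k in range(shift, -1, -1):
        if A >= (B << k):        # subtract the largest aligned multiple of B
            A -= (B << k)
    return A

assert binary_mod(0b101101, 0b101) == 0b101101 % 0b101   # 45 mod 5 == 0
```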
LLLA: New Efficient Channel Assignment Method in Wireless Mesh Networks
Abstract
Wireless mesh networks (WMNs) have emerged as a promising technology for providing ubiquitous access to mobile users and a quick and easy extension of local area networks into a wide area. The channel assignment problem in WMNs is proven to be NP-complete. This paper proposes a new method for solving the channel assignment problem in multi-radio, multi-channel wireless mesh networks in order to improve the quality of communications in the network. Here, a new hybrid-state channel assignment method is employed: a link-layer protocol with learning automata (LLLA) that achieves a smart method for suitable assignment. Simulation results show that the proposed algorithm outperforms the AODV method; e.g., it reduces packet drops considerably without degrading other network performance.
Mohammad Shojafar, Zahra Pooranian, Mahdi Shojafar, Ajith Abraham
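Learning automata of the kind named in the title typically update channel-selection probabilities with a linear reward-inaction (L_RI) rule. A sketch under that assumption follows; the LLLA protocol details are not reproduced, and the environment below is a toy stand-in.

```python
import random

def lri_step(probs, reward_fn, a=0.1):
    """One L_RI update: reinforce the chosen action only if it was rewarded."""
    channel = random.choices(range(len(probs)), weights=probs)[0]
    if reward_fn(channel):                        # environment rewards choice
        for i in range(len(probs)):
            probs[i] = probs[i] + a * (1 - probs[i]) if i == channel \
                       else probs[i] * (1 - a)
    return channel                                # on penalty: probs unchanged

probs = [1 / 3] * 3                               # three candidate channels
for _ in range(200):                              # channel 2 is usually best
    lri_step(probs, lambda ch: ch == 2 and random.random() < 0.8)
print(probs)                                      # mass concentrates on channel 2
```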
Binary Matrix Pseudo-division and Its Applications
Abstract
The benefit of each key algorithm also depends on the many supporting algorithms it uses. It turns out that a class of problems related to the dimensionality reduction of binary spaces used for the statistical analysis of binary data, e.g. binary (Boolean) factor analysis, depends on the possibility and ability of performing pseudo-division of binary matrices. The paper presents a novel computational approach to this problem, giving an algorithm for a reasonably fast exact solution.
Aleš Keprt
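One standard way to make binary matrix pseudo-division precise is Boolean residuation: given X and A, the largest B with A ∘ B ≤ X under the Boolean product has B[k, j] = AND over all i of (A[i, k] ⇒ X[i, j]). The sketch below illustrates that operation; Keprt's fast exact algorithm is not reproduced here.

```python
import numpy as np

def boolean_product(A, B):
    """Boolean matrix product of 0/1 matrices (OR of ANDs)."""
    return (A @ B > 0).astype(int)

def pseudo_divide(X, A):
    """Largest B with boolean_product(A, B) <= X, via residuation."""
    A, X = A.astype(bool), X.astype(bool)
    impl = ~A[:, :, None] | X[:, None, :]   # A[i,k] => X[i,j] for all i, k, j
    return impl.all(axis=0).astype(int)     # B[k,j]

A = np.array([[1, 0], [0, 1], [1, 1]])
B = np.array([[1, 0, 1], [0, 1, 1]])
X = boolean_product(A, B)
assert np.array_equal(boolean_product(A, pseudo_divide(X, A)), X)
```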
Nature Inspired Phenotype Analysis with 3D Model Representation Optimization
Abstract
In biology, 3D models are made to correspond with true nature so that they can be used for precise analysis. The visualization of these models helps in further understanding and conveying the research questions. Here we use 3D models to gain understanding of branched structures. To that end we make use of L-systems and attempt to use the results of our analysis to gain understanding of these L-systems. To perform our analysis we have to optimize the 3D models. There are many different methods to produce such 3D models; for the study of micro-anatomy, however, the possibilities are limited. In planar sampling, the resolution in the sampling plane is higher than in the planes perpendicular to it. Consequently, 3D models are undersampled along at least one axis. In this paper we present a pipeline for reconstruction from a stack of images. We devised a method to convert the undersampled stack of contours into a uniformly distributed point cloud. The point cloud as a whole is integrated in the construction of a surface that accurately represents the shape. In the pipeline, the 3D dataset is processed and its quality gradually upgraded so that accurate features can be extracted from the undersampled dataset.
The optimized 3D models are used in the analysis of phenotypical differences originating from experimental conditions, by extracting related shape features from the model. We use two different sets of 3D models. We investigate the lactiferous duct of newborn mice to gain understanding of environmentally directed branching. We consider that the lactiferous duct has an innate blueprint of its arborization and assume this blueprint is encoded in an innate L-system. We analyze the duct as it is exposed to different environmental conditions and reflect on the effect on the innate L-system. To make sure we can extract the branch structure correctly, we also analyze 3D models of the zebrafish embryo; these are simpler than the lactiferous duct and confirm that the measured features can separate different treatments on the basis of phenotypic differences.
Our system can deal with complex 3D models, and the extracted features separate the experimental conditions. The results provide a means to reflect on the manipulation of an L-system through external factors.
Lu Cao, Fons J. Verbeek
Multi-class SVM Based Classification Approach for Tomato Ripeness
Abstract
This article presents a content-based image classification system to monitor the ripening process of tomatoes by investigating and classifying the different maturity/ripeness stages. The proposed approach consists of three phases: pre-processing, feature extraction, and classification. Since tomato surface color is the most important characteristic for observing ripeness, this system uses color histograms for classifying the ripeness stage. It implements Principal Component Analysis (PCA) and Support Vector Machine (SVM) algorithms for feature extraction and classification of ripeness stages, respectively. The datasets used for the experiments were constructed from real sample images of tomatoes at different stages, collected from a farm in Minia city. Datasets of 175 and 55 images were used for training and testing, respectively. The training dataset is divided into 5 classes representing the different stages of tomato ripeness. Experimental results showed that the proposed classification approach obtained a ripeness classification accuracy of 92.72%, using the SVM linear kernel function with 35 training images per class.
Esraa Elhariri, Nashwa El-Bendary, Mohamed Mostafa M. Fouad, Jan Platoš, Aboul Ella Hassanien, Ahmed M. M. Hussein
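A sketch of the described histogram → PCA → linear-SVM pipeline on placeholder images; the bin count, component count, and labels are invented, and the paper's real tomato images would replace `images`.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def color_histogram(img, bins=8):
    """Concatenated per-channel histogram of an HxWx3 uint8 image."""
    return np.concatenate([np.histogram(img[..., c], bins=bins,
                                        range=(0, 256))[0] for c in range(3)])

rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(40, 32, 32, 3), dtype=np.uint8)  # toy data
labels = rng.integers(0, 5, size=40)                # 5 ripeness classes
X = np.array([color_histogram(im) for im in images])

model = make_pipeline(PCA(n_components=10), SVC(kernel="linear")).fit(X, labels)
```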
Solving Stochastic Vehicle Routing Problem with Real Simultaneous Pickup and Delivery Using Differential Evolution
Abstract
In this study, the Stochastic VRP with Real Simultaneous Pickup and Delivery (SVRPSPD) is attempted for the first time and fitted to the public transportation system of the Anbessa City Bus Service Enterprise (ACBSE), Addis Ababa, Ethiopia. It is modeled and fitted with real data obtained from the enterprise. Due to their complexity, large instances of the VRP and/or SVRPSPD are hard to solve using exact methods; instead, various heuristic and metaheuristic algorithms are used to find feasible VRP solutions. In this work, Differential Evolution (DE) is used to optimize the bus routes of ACBSE. The findings of the study show that the DE algorithm is stable and able to reduce the estimated number of vehicles significantly, and that it exhibited better fitness function values than the traditional and exact algorithms.
Eshetie Berhan, Pavel Krömer, Daniel Kitaw, Ajith Abraham, Václav Snášel
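The DE core behind such route optimization is the classic DE/rand/1/bin step. A sketch on a continuous toy objective; a real application additionally needs a decoder from these vectors to feasible bus routes, which is omitted here.

```python
import numpy as np

def de_step(pop, fitness, F=0.8, CR=0.9, rng=np.random.default_rng(0)):
    """One DE/rand/1/bin generation with greedy selection (minimization)."""
    n, d = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        a, b, c = pop[rng.choice([j for j in range(n) if j != i], 3,
                                 replace=False)]
        mutant = a + F * (b - c)                  # differential mutation
        cross = rng.random(d) < CR                # binomial crossover mask
        cross[rng.integers(d)] = True             # keep at least one mutant gene
        trial = np.where(cross, mutant, pop[i])
        if fitness(trial) <= fitness(pop[i]):     # greedy selection
            new_pop[i] = trial
    return new_pop

pop = np.random.default_rng(1).uniform(-5, 5, size=(20, 4))
for _ in range(100):
    pop = de_step(pop, lambda x: np.sum(x ** 2))  # toy objective
```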
An Intelligent Multi-agent Recommender System
Abstract
This article presents a multi-agent approach for handling the recommendation problem. The proposed system works via two main agents, namely the matching agent and the recommendation agent. Experimental results showed that the proposed rough mereology based multi-agent system for solving the recommendation problem is scalable and has possibilities for future modification and adaptability to other problem domains. Moreover, it succeeded in reducing information overload while recommending relevant decisions to users. The system achieved high ranking accuracy using user profiles and information system profiles. The resulting value of the Mean Absolute Error (MAE) is acceptable compared to other recommender systems that apply other computational intelligence approaches.
Mahmood A. Mahmood, Nashwa El-Bendary, Jan Platoš, Aboul Ella Hassanien, Hesham A. Hefny
Geodata Scale Restriction Using Genetic Algorithm
Abstract
With recent advances in computer science (including the geosciences) it is possible to combine various methods for geodata processing. Many methods are established for geodata scale restriction, but none of them take into account the concept of information entropy. Our research focuses on using a genetic algorithm that calculates information entropy in order to set an optimal number of intervals for originally non-restricted geodata. We used a fitness function minimizing the loss of information entropy, and we compared the results with a classification method commonly used in the geosciences. We propose an experimental method that provides a promising approach to geodata scale restriction and consequently to proper visualization, which is very important for the interpretation of geographical phenomena.
Jiří Dvorský, Vít Pászto, Lenka Skanderová
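The fitness idea can be illustrated by computing the Shannon entropy of a candidate interval classification, which a GA can use to minimize the entropy lost by binning. A sketch with an invented breakpoint encoding and toy skewed data:

```python
import numpy as np

def interval_entropy(values, breakpoints):
    """Shannon entropy of a classification of `values` into intervals."""
    edges = np.concatenate(([values.min()], np.sort(breakpoints),
                            [values.max()]))
    counts = np.histogram(values, bins=edges)[0]
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

values = np.random.default_rng(0).lognormal(size=1000)  # skewed toy geodata
candidate = np.quantile(values, [0.2, 0.4, 0.6, 0.8])   # one GA individual
print(interval_entropy(values, candidate))              # fitness ingredient
```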
Principal Component Analysis Neural Network Hybrid Classification Approach for Galaxies Images
Abstract
This article presents an automatic hybrid approach for the classification of galaxy images based on a principal component analysis (PCA) neural network and moment-based feature extraction algorithms. The proposed approach consists of four phases: image denoising, feature extraction, reduct generation, and classification. In the denoising phase, noise pixels are removed from the input images; the input galaxy image is then normalized to a uniform scale, and Hu's seven invariant moments are applied to reduce the dimensionality of the feature space during the feature extraction phase. Subsequently, in the reduct generation phase, the attributes in the information system table that are most important to the knowledge are generated as a subset of attributes; rough sets are used as the feature reduction approach. This subset of attributes, called a reduct, fully characterizes the knowledge in the database. Finally, during the classification phase, the PCA neural network algorithm is used to classify the input galaxy images into one of four source catalogue types. Experimental results showed that combining PCA and rough sets as feature reduction techniques, along with invariant moments for feature extraction, provides better classification results than omitting the rough set feature reduction. It is also concluded that a small set of features is sufficient to classify galaxy images and provides fast classification.
Mohamed Abd. Elfattah, Nashwa El-Bendary, Mohamed A. Abou Elsoud, Jan Platoš, Aboul Ella Hassanien
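The moment-based feature extraction step can be sketched with OpenCV's Hu moments on a synthetic silhouette; a denoised, scale-normalized galaxy image from the paper's pipeline would replace `img`.

```python
import cv2
import numpy as np

# Toy elliptical blob standing in for a galaxy silhouette.
img = np.zeros((64, 64), dtype=np.uint8)
cv2.ellipse(img, (32, 32), (20, 8), 30, 0, 360, 255, -1)

hu = cv2.HuMoments(cv2.moments(img)).ravel()      # 7 invariant moments
hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)  # common log scaling
print(hu)                                         # feature vector for a classifier
```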
Comparison of Crisp, Fuzzy and Possibilistic Threshold in Spatial Queries
Abstract
Decision making is one of the most important application areas of geoinformatics. Such support is mainly oriented towards the identification of locations that fulfil certain criteria. This contribution presents the suitability of various approaches to spatial queries using different types of fuzzy thresholds. The presented methods are based on classical logic (crisp queries), fuzzy logic (fuzzy queries), and possibility theory (possibilistic queries). All presented approaches are applied in a case study. Using these findings may contribute to a better understanding of the nature of the methods used and can help to obtain more accurate results, which have a determining influence on the subsequent decision-making process.
Jan Caha, Alena Vondráková, Jiří Dvorský
Visualizing Clusters in the Photovoltaic Power Station Data by Sammon’s Projection
Abstract
This paper presents the results of finding and visualizing clusters in hourly recorded power data from a small photovoltaic power station. Our main aim was to evaluate the use of Sammon's projection for visualizing clusters in the power data. The photovoltaic power station is sensitive to changes in sunlight: although one may think all sunny days are alike, the power of sunlight is very volatile during a day. To analyse the efficiency of the power station, it was necessary to use some kind of clustering method. We propose a clustering method based on social network algorithms, and the result is visualized by Sammon's projection for exploratory analysis.
Martin Radvanský, Miloš Kudělka, Václav Snášel
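Sammon's projection seeks low-dimensional coordinates minimizing the stress below, which weights each pairwise distance error by the inverse of the original distance. A sketch evaluating that stress for a toy projection of hourly power profiles:

```python
import numpy as np
from scipy.spatial.distance import pdist

def sammon_stress(X_high, X_low):
    """Sammon's stress between pairwise distances in the two spaces."""
    d_h, d_l = pdist(X_high), pdist(X_low)
    mask = d_h > 0                       # ignore coincident points
    d_h, d_l = d_h[mask], d_l[mask]
    return np.sum((d_h - d_l) ** 2 / d_h) / np.sum(d_h)

rng = np.random.default_rng(0)
profiles = rng.random((50, 24))          # toy 24-hour power profiles
projection = profiles @ rng.random((24, 2))  # a (bad) random 2D projection
print(sammon_stress(profiles, projection))   # value to be minimized
```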
Rough Power Set Tree for Feature Selection and Classification: Case Study on MRI Brain Tumor
Abstract
This article presents a feature selection and classification system for 2D brain tumors in magnetic resonance imaging (MRI) images. The proposed feature selection and classification approach consists of four main phases. First, the clustering phase applies the K-means clustering algorithm to 2D brain tumor slices. Second, the feature extraction phase extracts the optimum feature subset using brightness and circularity ratio. Third, the reduct generation phase uses rough sets based on a power set tree algorithm to choose the reduct. Finally, the classification phase applies a Multilayer Perceptron neural network to the reduct. Experimental results showed that the proposed classification approach achieves a high recognition rate compared to other classifiers, including Naive Bayes, AD-tree and BF-tree.
Waleed Yamany, Nashwa El-Bendary, Hossam M. Zawbaa, Aboul Ella Hassanien, Václav Snášel
The Nelder-Mead Simplex Method with Variables Partitioning for Solving Large Scale Optimization Problems
Abstract
This paper presents a novel method for solving unconstrained continuous optimization problems, called SVP (simplex variables partitioning). The SVP method uses three main processes to solve large scale optimization problems. The first is a variable partitioning process, which helps the method achieve high performance on large scale, high dimensional optimization problems. The second is an exploration process, which generates a trial solution around the current iterate by applying the Nelder-Mead method in randomly selected partitions. The last is an intensification process, which applies a local search method to refine the best solution found so far. The SVP method starts with a random initial solution, which is divided into partitions. To generate a trial solution, the Nelder-Mead simplex method is applied in each partition, exploring neighborhood regions around the current iterate. Finally, the intensification process is used to accelerate convergence in the final stage. The performance of the SVP method is tested on 38 benchmark functions and compared with 2 scatter search methods from the literature. The results show that the SVP method is promising, producing good solutions at low computational cost compared to competing methods.
Ahmed Fouad Ali, Aboul Ella Hassanien, Václav Snášel
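A rough sketch of the partition-wise Nelder-Mead idea, optimizing one randomly chosen block of variables at a time with SciPy while the rest stay fixed. The partition sizes and schedule are invented, and the paper's exploration/intensification details are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def svp_sketch(f, x0, n_parts=4, iters=20, rng=np.random.default_rng(0)):
    """Nelder-Mead applied to random variable partitions, one at a time."""
    x = x0.copy()
    parts = np.array_split(rng.permutation(len(x)), n_parts)
    for _ in range(iters):
        idx = parts[rng.integers(n_parts)]       # pick a random partition
        def sub(vars_sub, idx=idx):              # objective over that block only
            y = x.copy(); y[idx] = vars_sub; return f(y)
        x[idx] = minimize(sub, x[idx], method="Nelder-Mead").x
    return x

x = svp_sketch(lambda v: np.sum(v ** 2), np.full(12, 3.0))  # toy sphere function
```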
SVM-Based Classification for Identification of Ice Types in SAR Images Using Color Perception Phenomena
Abstract
With rising global temperatures, the formation of ice in freshwater bodies such as rivers and lakes must be closely monitored, which is important for forecasting and hydropower generation. For this research, Synthetic Aperture Radar (SAR) images give good support for mapping variation in remote sensing data analysis. This paper presents an approach to map the different target signatures present in a radar image using a support vector machine, given a limited amount of reference data. The proposed methodology includes a preprocessing step that transforms the grayscale image into a synthetic color image, which is often used with radar data to improve the display of subtle large-scale features. Hue-Saturation-Value based sharpened SAR images are used as input to the supervised classifier, and evaluation metrics are considered to assess both phases of the approach. Based on the evaluation, the Support Vector Machine classifier with a linear kernel strikes the right balance between the accuracy obtained on a given finite amount of training patterns and the ability to generalize to unseen data.
Parthasarty Subashini, Marimuthu Krishnaveni, Bernadetta Kwintiana Ane, Dieter Roller
Evaluation of Electrocardiographic Leads and Establishing Significance Intra-individuality
Abstract
The aim of this work was the design and software implementation of electrocardiogram processing for the Frank lead system. In particular, the real ECG data were preprocessed, which means the data were modified by filtration. The work focuses on the optimal detection of the ventricular complex in electrocardiographic signals. The main focus of the work is the calculation and graphical presentation of the vectorcardiogram from the orthogonal leads. The results are evaluated for each VCG plane using statistical analysis.
Marek Penhaker, Monika Darebnikova, Frantisek Jurek, Martin Augustynek
Backmatter
Metadata
Title
Innovations in Bio-inspired Computing and Applications
edited by
Ajith Abraham
Pavel Krömer
Václav Snášel
Copyright year
2014
Electronic ISBN
978-3-319-01781-5
Print ISBN
978-3-319-01780-8
DOI
https://doi.org/10.1007/978-3-319-01781-5