
2011 | Book

Innovative Computing Technology

First International Conference, INCT 2011, Tehran, Iran, December 13-15, 2011. Proceedings

Editors: Pit Pichappan, Hojat Ahmadi, Ezendu Ariwa

Publisher: Springer Berlin Heidelberg

Book Series: Communications in Computer and Information Science


About this book

This book constitutes the proceedings of the First International Conference on Innovative Computing Technology, INCT 2011, held in Tehran, Iran, in December 2011. The 40 revised papers included in this book were carefully reviewed and selected from 121 submissions. The contributions are organized in topical sections on software; Web services and service architecture; computational intelligence; data modeling; multimedia and image segmentation; natural language processing; networks; cluster computing; and discrete systems.

Table of Contents

Frontmatter

Software

Analysis of Quality Driven Software Architecture

This paper presents an analysis of quality-driven approaches that embody non-functional requirements in software architecture design. The analysis characterizes the vocabulary and concepts of the area and presents a comparison of the two main techniques. In the first technique, architectural tactics are represented and their semantics clearly defined in RBML, a UML-based pattern specification notation. Given a set of non-functional requirements, architectural tactics are selected and composed into an initial architecture for the application. The second technique designates attribute primitives, which are similar to architectural patterns. It then introduces a method called Attribute Driven Design that applies attribute primitives to satisfy a set of general scenarios. In this analysis, we intend to give a brief description of both approaches.

Ehsan Ataie, Marzieh Babaeian Jelodar, Fatemeh Aghaei
Using Power Spectral Density for Fault Diagnosis of Belt Conveyor Electromotor

This paper focuses on vibration-based condition monitoring and fault diagnosis of a belt conveyor electromotor using power spectral density (PSD). The objective of this research was to investigate the correlation between vibration analysis, PSD, and fault diagnosis. Vibration data were collected regularly. We calculated G_rms (root-mean-square acceleration) and PSD of the Drive End (DE) and Non-Drive End (NDE) of an electromotor in healthy and faulty conditions. The results showed that different conditions produce different PSD-versus-frequency profiles, and that by calculating PSD we could detect and diagnose faults of the belt conveyor electromotor early. Vibration analysis and power spectral density can provide quick and reliable information on the condition of the belt conveyor electromotor in different situations; integrating vibration condition monitoring with PSD analysis yields a deeper understanding of electromotor diagnosis.
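The abstract gives no code, but the two quantities it relies on are standard signal-processing computations. A minimal sketch, assuming SciPy, a synthetic vibration signal and an arbitrary sampling rate, might look like this:

```python
import numpy as np
from scipy.signal import welch

def grms_and_psd(accel, fs):
    """Estimate overall RMS acceleration and the PSD of a vibration signal.

    accel : 1-D array of acceleration samples (e.g. in g)
    fs    : sampling frequency in Hz
    """
    accel = accel - np.mean(accel)          # remove DC offset
    g_rms = np.sqrt(np.mean(accel ** 2))    # overall RMS level
    # Welch's method gives a smoothed PSD estimate (units: g^2/Hz).
    freqs, psd = welch(accel, fs=fs, nperseg=2048)
    return g_rms, freqs, psd

# Example: compare a healthy and a faulty recording by their spectra.
fs = 10_000  # Hz, assumed sampling rate
t = np.arange(0, 1.0, 1 / fs)
healthy = 0.1 * np.sin(2 * np.pi * 50 * t) + 0.01 * np.random.randn(t.size)
faulty = healthy + 0.2 * np.sin(2 * np.pi * 180 * t)  # added fault tone
for name, sig in [("healthy", healthy), ("faulty", faulty)]:
    g_rms, freqs, psd = grms_and_psd(sig, fs)
    peak = freqs[np.argmax(psd)]
    print(f"{name}: G_rms={g_rms:.3f} g, dominant frequency={peak:.0f} Hz")
```

A fault would typically show up as extra energy at characteristic frequencies in the faulty spectrum, which is the comparison the paper makes between healthy and faulty conditions.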

Hojjat Ahmadi, Zeinab Khaksar
Assessment of Watermelon Quality Using Vibration Spectra

Judging watermelon quality from apparent properties such as size or skin color is difficult, and traditional methods have various problems and limitations. In this paper, a nondestructive method for watermelon quality testing using laser Doppler vibrometry (LDV) is presented that avoids some of these limitations. First, the sample was excited by a vibration generator over a frequency range. The applied vibration was measured by an accelerometer attached to the resting place of the fruit, while the vibrational response of the top of the fruit was simultaneously detected by the LDV. Using a fast Fourier transform algorithm and the ratio of the response signal to the excitation signal, the vibration spectra of the fruit were analyzed and the first and second resonances extracted. After the nondestructive tests, the watermelons were evaluated sensorially: the samples were graded on a ripeness scale by panel members in terms of overall acceptability (the total of traits desired by consumers). Using the two mentioned resonances as well as watermelon weight, a multivariate linear regression model for watermelon quality scores was obtained. The correlation coefficient for the calibration model was 0.82. For validation, the leave-one-out cross validation method was applied and r = 0.78 was achieved. Stepwise discriminant analysis was also used to classify ripe and unripe watermelons; the results showed 87.5% classification accuracy for both the original and the cross-validated cases. This study demonstrates the utility of this technique for sorting watermelons by consumer acceptability.
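The resonance-extraction step described here is a generic frequency-response computation; a small sketch of the idea, assuming NumPy/SciPy and an arbitrary peak-prominence threshold, could be:

```python
import numpy as np
from scipy.fft import rfft, rfftfreq
from scipy.signal import find_peaks

def resonances(excitation, response, fs, n_peaks=2):
    """Estimate the lowest resonance frequencies from the magnitude of the
    response/excitation spectral ratio, as in vibration-response testing.

    excitation, response : equal-length 1-D signals sampled at fs (Hz)
    """
    h = np.abs(rfft(response)) / (np.abs(rfft(excitation)) + 1e-12)
    h = h / h.max()  # normalize so the prominence threshold is relative
    freqs = rfftfreq(len(response), 1 / fs)
    peaks, _ = find_peaks(h, prominence=0.1)  # assumed threshold
    return freqs[peaks][:n_peaks]
```

The two lowest returned peaks would correspond to the first and second resonances that feed the regression model, alongside fruit weight.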

R. Abbaszadeh, A. Rajabipour, H. Ahmadi, M. Delshad, M. Mahjoob
Fault Diagnosis of Journal-Bearing of Generator Using Power Spectral Density and Fault Probability Distribution Function

Developing a dedicated maintenance method for the equipment of an industrial company is necessary to improve maintenance quality and reduce operating costs. Because many vibration environments are not related to a specific driving frequency and may have input from multiple, not necessarily harmonically related sources, it is more accurate and informative to analyze and test using random vibration. In this paper, faults of a generator journal bearing are detected using two vibration analysis techniques, namely power spectral density (PSD) and the fault probability distribution function (PDF). To this end, we calculated G_rms, PSD and PDF of the generator journal bearing in healthy and faulty conditions. The results showed that by calculating PSD and PDF we could detect some engine faults and diagnose them.

Hojjat Ahmadi, Ashkan Moosavian

Web Services and Service Architecture

Design an Adaptive Competency-Based Learning Web Service According to IMS-LD Standard

Equal opportunities and the democratization of education, promoted by providing the same content to all learners, can stigmatize and widen differences and inequalities. Learner heterogeneity is thus inevitable and often regarded as unmanageable, so customizing the environment to learners improves the quality of the learning process. There are several adaptation approaches for e-learning environments, such as adaptive hypermedia systems and the semantic web. In our proposed service, we adopt the competency-based approach (CBA), and we consider that the relevance of the adaptation depends on the adequacy of the information collected through a personal diagnosis. This diagnosis takes place via an adaptive test using Item Response Theory (IRT) in a formative perspective, without trying to situate the learner in relation to others. This intelligent test administers items in an appropriate order within a short time and produces relevant results. The learning system can thus lead the learner to gradually acquire a competency, taking into account his or her needs and predispositions. The system will be implemented as an activity in a pedagogical scenario defined in response to the learner's needs, while aligning with norms and standards; some technical choices are therefore required as far as standards and norms are concerned.

Nour-eddine El Faddouli, Brahim El Falaki, Mohammed Khalidi Idrissi, Samir Bennani
Resolving Impassiveness in Service Oriented Architecture

In a Service Oriented Architecture, a service registry called UDDI (Universal Description, Discovery & Integration) is used as a database containing descriptions of published services. UDDI has an important defect called impassiveness: the lack of interaction between the consumer and UDDI after the consumer has found a desired service. This means that consumers are not notified when a service in UDDI is deleted or changed. This paper aims to resolve the problem of UDDI impassiveness by means of active database rule techniques. To this end, we present a new architecture for UDDI. In addition, the proposed architecture adds tolerance to Web service invocation and includes consumer classification.

Masoud Arabfard, Seyed Morteza Babamir

Computational Intelligence

A Neuro-IFS Intelligent System for Marketing Strategy Selection

Business intelligence (BI) provides businesses with computational and quantitative support for decision making using artificial intelligence and data mining techniques. In this paper, we propose a neuro-IFS inference system for marketing strategy selection. For this purpose, we first develop an IFS inference system which operates on rules whose antecedents, including industry attractiveness and enterprise strength, have membership and non-membership functions. Then, we use a radial basis function (RBF) neural network on the rules to enhance the proposed system with learning from previous experience. After 350 epochs, the 3-layer RBF neural network could identify the appropriate strategy with a 95% accuracy rate.
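The RBF stage described here is a standard construction. A minimal sketch, with entirely hypothetical rule data standing in for the industry-attractiveness/enterprise-strength antecedents, might pair Gaussian activations around k-means centers with a linear readout:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical rule base: each row is (industry attractiveness, enterprise
# strength) summarized as numbers in [0, 1]; the label is a strategy class.
X = rng.uniform(0, 1, size=(200, 2))
y = (X[:, 0] + X[:, 1] > 1).astype(int)  # toy "grow vs. hold" rule

# RBF layer: Gaussian activations around k-means cluster centers.
centers = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X).cluster_centers_
width = 0.3  # assumed RBF width
phi = np.exp(-np.square(X[:, None, :] - centers[None, :, :]).sum(-1)
             / (2 * width ** 2))

# A linear readout on the RBF features plays the role of the output layer.
clf = LogisticRegression(max_iter=1000).fit(phi, y)
print("training accuracy:", clf.score(phi, y))
```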

Vahid Khatibi, Hossein Iranmanesh, Abbas Keramati
Prediction of Moisture Content of Bergamot Fruit during Thin-Layer Drying Using Artificial Neural Networks

In this study, thin-layer drying of bergamot was modelled using artificial neural networks. An experimental dryer was used. Thin layers of bergamot slices were artificially dried at five air temperatures (40, 50, 60, 70 and 80 ºC), one thickness (6 mm) and three air velocities (0.5, 1 and 2 m/s). The initial moisture content (M.C.) in all experiments was between 5.2 and 5.8 (g/g) (d.b.). The mass of the samples was recorded every 5 s using a digital balance connected to a PC. MLPs with momentum and Levenberg-Marquardt (LM) training were used to train the ANNs. To develop the ANN models, temperature, air velocity and time were used as input vectors and moisture ratio as the output. Results showed that a 3-8-1 topology for a thickness of 6 mm, with the LM algorithm and the TANSIG activation function, was able to predict the moisture ratio with an R² of 0.99936. The corresponding MSE for this topology was 0.00006.
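As a rough illustration of the modeling setup (not the authors' code), the sketch below trains a 3-8-1 tanh network on a synthetic exponential drying curve; scikit-learn has no Levenberg-Marquardt trainer, so LBFGS stands in:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Synthetic stand-in for the drying data: inputs are (temperature in C,
# air velocity in m/s, time in s); output is the moisture ratio in [0, 1].
n = 500
X = np.column_stack([
    rng.choice([40, 50, 60, 70, 80], n),   # air temperature
    rng.choice([0.5, 1.0, 2.0], n),        # air velocity
    rng.uniform(0, 3600, n),               # drying time
])
# Toy exponential drying curve; the real data would come from the dryer.
k = 1e-4 * (X[:, 0] / 40) * (1 + 0.2 * X[:, 1])
y = np.exp(-k * X[:, 2])

# A 3-8-1 network with tanh hidden units, as in the paper's best topology.
model = MLPRegressor(hidden_layer_sizes=(8,), activation="tanh",
                     solver="lbfgs", max_iter=5000, random_state=0)
model.fit(X, y)
print("R^2 on training data:", model.score(X, y))
```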

Mohammad Sharifi, Shahin Rafiee, Hojjat Ahmadi, Masoud Rezaee
An Expert System for Construction Sites Best Management Practices

The construction industry can significantly impact our environment, and using Best Management Practices (BMPs) at construction sites is the most effective way to protect the environment and prevent pollution. In recent years, intelligent systems have been used extensively in many application areas, including environmental studies. As an aid to reducing environmental pollution originating from construction activities, an expert system called CSBMP was developed using Microsoft Visual Basic. CSBMP, intended for BMPs at construction sites, was designed based on the legal process. It primarily aims to provide an educational and support system for environmental engineers and decision-makers during construction activities, and it displays its recommendations in report form. When the use of CSBMP at construction sites becomes widespread, it is likely to yield more accurate and objective decisions on construction projects focused on reducing environmental pollution.

Leila Ooshaksaraie, Alireza Mardookhpour, Noor Ezlin Ahmad Basri, Azam Aghaee

Data Modeling

Target Tracking on Re-entry Based on an IWO Enhanced Particle Filter

Tracking a ballistic object on re-entry from radar observations is an extremely complex and intriguing problem for aerospace and signal processing experts. Since mathematical models for ballistic targets and sensors are subject to nonlinearity and uncertainty, conventional estimation methodologies cannot be utilized. In this study, a new meta-heuristic particle filtering (PF) strategy built upon the nature-inspired invasive weed optimization (IWO) is applied to the challenging re-entry target tracking problem. First, the sampling step of the PF is translated into a non-concave maximum likelihood optimization problem, into which the IWO algorithm is integrated. The PFIWO algorithm is then applied to a benchmark re-entry target tracking problem. The results demonstrate that the proposed scheme has superior tracking performance compared with other nonlinear estimation techniques.

Mohamadreza Ahmadi, Mehrnoosh Shafati, Hamed Mojallali
Knowledge Discovery in Discrete Event Simulation Output Analysis

Simulation is a popular methodology for analyzing complex manufacturing environments, but given the large volume of simulation output, interpreting it manually is practically impossible. In this paper we use an innovative methodology that combines simulation and data mining techniques to discover knowledge in simulation results. The data used in the simulation process are independent and identically distributed with a normal distribution, but the output data from simulations are often not i.i.d. normal; data mining techniques can therefore operate well by finding associations between the outputs. Analysts change the sequences and values of input data according to their importance, and these operations optimize the analysis of simulation output. The methods presented here will be of most interest to analysts wishing to extract more information from their simulation models. The proposed approach has been implemented and run on a supply chain system simulation, and the results show substantial improvement in the analysis of the simulation output of that system.

Safiye Ghasemi, Mania Ghasemi, Mehrta Ghasemi
From UML State Machines to Verifiable Lotos Specifications

The theory of formal specification languages has improved in recent years, and the pragmatic aspects of formal methods have been scrutinized, especially in safety-critical systems. However, there still remains a constant fear among industry practitioners of working with purely theoretical specification methods, even when their software operates in highly safety-critical applications.

We propose a hands-on approach to gradually transform popular UML 2.0 State Machines (SM) into verifiable Lotos specifications. In this algorithm, the partial logical view of the system, represented by the UML 2.0 SM, is converted to a verifiable Basic Lotos specification; the transformation may then be developed into an executable program to mechanize it.

Reza Babaee, Seyed Morteza Babamir
A Model Driven Approach for Personalizing Data Source Selection in Mediation Systems

Nowadays, there is a real trend toward personalizing mediation systems to improve user satisfaction: the mediator's answers should be adapted to the user's needs and preferences. In this paper, we propose a solution to this problem based on models of users and data sources. These models are used to perform content matching and quality matching between the user profile and the data source profiles. The objective of the matching is to rank sources according to user preferences and needs, and to select the most relevant ones. Our solution for personalizing data source selection provides the mediator with a set of relevant data sources, which are then involved in the rewriting process to give more satisfying responses. By reducing the number of integrated data sources, the mediator's performance is also improved.

Imane Zaoui, Faouzia Wadjinny, Dalila Chiadmi, Laila Benhlima
Towards a Framework for Conceptual Modeling of ETL Processes

Data warehousing involves many moves of data from several sources into a central repository. Extraction-Transformation-Loading (ETL) processes are responsible for extracting data, then cleaning, conforming and loading it into the target. It is widely recognized that building ETL processes in a data warehouse project is expensive in terms of time and money. During the building phase, the most important and complex task is the conceptual modeling of the ETL processes, and several solutions have been proposed for this issue. In this paper, we present our approach, which is based on a framework for modeling ETL processes. Compared with existing solutions, our approach has numerous strengths: besides extensibility and reusability, it offers support and guidance to the designer, and it has the advantage of using a short notation, consisting mainly of three components, to design an ETL process.

Ahmed Kabiri, Faouzia Wadjinny, Dalila Chiadmi
Dynamics Invariant Extension of Arrays in Daikon Like Tools

Software engineering comprises processes such as designing, implementing and modifying code, carried out to produce software quickly and to obtain high-quality, efficient and maintainable software. Invariants can be useful in performing these processes and can help programmers and testers. Arrays and pointers are frequently used data types that appear repeatedly in program code, and because of this widespread use they can be a source of faults. The first and last elements of arrays are prone to faults caused by careless index use in loops, and arrays of the same type often have relations that can also be faulty. Invariants that report array and pointer properties are therefore useful. This paper presents constructive extensions to Daikon-like tools so that they can produce more relevant invariants for arrays.

Hani Fouladgar, Hamid Parvin, Hosein Alizadeh, Behrouz Minaei
Theoretical Feasibility of Conditional Invariant Detection

All software engineering processes, including designing, implementing and modifying software, are carried out to develop software as fast as possible and to reach high-quality, efficient and maintainable software. Invariants, as properties of the program context that almost always hold, can help developers carry out some aspects of software engineering more easily; any improvement in extracting more relevant invariants therefore helps the software engineering process. A conditional invariant is a novel kind of invariant which holds when certain conditions arise during program execution, and it can describe program behavior much better. To extract this kind of invariant, data mining techniques such as association rule mining or decision trees can be used to obtain rules. This paper examines the feasibility of conditional invariant detection and the advantages of this kind of invariant over ordinary invariants.

Mohammad Hani Fouladgar, Hamid Parvin, Behrouz Minaei
Managing Network Dynamicity in a Vector Space Model for Semantic P2P Data Integration

P2P data integration has been one of the prominent research topics in recent years. It relies on two principal axes, data integration and P2P computing, and aims to combine their advantages to overcome the shortcomings of centralized solutions. However, dynamicity and large scale are the most difficult challenges for efficient solutions. In this paper, we review the fundamentals of P2P computing and data integration and detail the challenges facing the P2P data integration process. In addition, we present a vector space model based approach within our P2P semantic data integration framework. First, we detail the various modules of our framework and specify the functions of each one. Then, we present our vector space model for representing semantic knowledge, along with the components of the knowledge base that holds the semantics. Finally, we explain how we deal with network dynamicity and how the semantics should be adjusted accordingly.
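The abstract does not define its vectors concretely; as a generic illustration of vector-space matching (with an entirely hypothetical concept vocabulary and peer weights), ranking peers against a query could look like this:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two term/concept weight vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

# Hypothetical peer descriptions as weights over a shared concept vocabulary.
vocabulary = ["patient", "disease", "treatment", "billing"]
peers = {
    "A": np.array([0.9, 0.8, 0.7, 0.0]),
    "B": np.array([0.1, 0.0, 0.2, 0.9]),
}
query = np.array([1.0, 0.5, 0.0, 0.0])

# Rank peers by semantic closeness to the query; a peer joining or leaving
# the network only adds or removes one vector from this comparison set,
# which is one way such a model can absorb network dynamicity.
ranked = sorted(peers, key=lambda p: cosine(query, peers[p]), reverse=True)
print(ranked)
```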

Ahmed Moujane, Dalila Chiadmi, Laila Benhlima, Faouzia Wadjinny
Affinity and Coherency Aware Multi-core Scheduling

Reducing the cost of program memory access can improve program performance. In this paper, a scheduling approach based on coherency and thread affinity has been introduced which is able to estimate scheduling cost according to the number of common data blocks and their coherency cost. The estimated results are used to find the appropriate thread mapping to cores so that the number of common data blocks between cores and their coherence cost are reduced. In the proposed model, the effect of shared cache size on affinity and coherency is considered. Since the shared cache behavior on different architectures is not the same and changes according to the cache size, stack distance analysis is used to estimate the behavior of shared cache on different architectures. Finally, the model is evaluated by a synthetic application and SPLASH-2 benchmark.
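Stack distance analysis, which the paper uses to model shared-cache behavior across cache sizes, is a standard computation: for each access, count the distinct blocks touched since the previous access to the same block. A small sketch:

```python
def stack_distances(trace):
    """Reuse (stack) distances of a memory-block trace: for each access,
    the number of distinct blocks touched since the previous access to
    the same block (inf for first accesses). An LRU cache with capacity
    larger than the distance would hit on that access.
    """
    last_pos = {}
    distances = []
    for i, block in enumerate(trace):
        if block in last_pos:
            distances.append(len(set(trace[last_pos[block] + 1 : i])))
        else:
            distances.append(float("inf"))
        last_pos[block] = i
    return distances

print(stack_distances(["A", "B", "C", "A", "B", "B"]))
# [inf, inf, inf, 2, 2, 0]
```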

Hamid Reza Khaleghzadeh, Hossein Deldari
A New Clustering Ensemble Framework

In this paper a new criterion for cluster validation is proposed. This new cluster validation criterion is used to approximate the goodness of a cluster, and the clusters that satisfy a threshold of the proposed measure are selected to participate in the clustering ensemble. To combine the chosen clusters, several methods are employed as aggregators. Using this new cluster validation criterion, the resulting ensemble is evaluated on several well-known, standard datasets. The empirical studies show promising results for the ensemble obtained using the proposed criterion compared with the ensemble obtained using the standard cluster validation criterion. In addition, to reach the best results, the method provides an algorithm for selecting the best subset of clusters from a pool of clusters.
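The paper's specific validity criterion is not reproduced here, but the aggregation step of a clustering ensemble is commonly done via a co-association matrix; a sketch with scikit-learn and SciPy, using all base clusters rather than a validity-filtered subset, could be:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

X = load_iris().data
n = len(X)

# Build a pool of base clusterings (here: k-means with varying k and seeds).
co = np.zeros((n, n))
runs = 0
for k in (2, 3, 4, 5):
    for seed in range(5):
        labels = KMeans(n_clusters=k, n_init=5, random_state=seed).fit_predict(X)
        co += (labels[:, None] == labels[None, :])
        runs += 1
co /= runs  # co-association: fraction of runs in which two points co-cluster

# Aggregate by hierarchical clustering on the co-association "distance".
dist = squareform(1 - co, checks=False)
final = fcluster(linkage(dist, method="average"), t=3, criterion="maxclust")
print(np.bincount(final))
```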

Hosein Alizadeh, Hamid Parvin, Mohsen Moshki, Behrouz Minaei

Multimedia and Image Segmentation

3D Machine Vision Employing Optical Phase Shift Moiré

The aim of this paper is 3D machine vision of surfaces based on optical phase-shift Moiré. In the measurement process, we generate Moiré contours on a surface using the shadow Moiré technique. The present work uses phase-shift analysis to increase measurement accuracy and extract the 3D profile of the surface. In comparison with recent methods, this technique is simple and easy to implement. To show the validity and feasibility of the method, we apply it to a human face. The results indicate that this is a low-cost, simple and powerful method for 3D reconstruction of any surface without disturbance.

Fatemeh Mohammadi, Amir Hossein Rezaie, Khosro Madanipour
Farsi Font Recognition Using Holes of Letters and Horizontal Projection Profile

In spite of the important role of font recognition in document image analysis, only a few researchers have addressed the issue. This work presents a new approach for font recognition in Farsi document images, in which the font and font size are recognized using two types of features. The first feature relates to the holes of the letters in the document text; the second relates to the horizontal projection profile of the text lines. The approach has been applied to 7 widely used Farsi fonts and 7 font sizes. A dataset of 10*49 images and another dataset of 110 images were used for testing, and a recognition rate of more than 93.7% was obtained. The images were made using paint software and are noiseless and without skew. The approach is fast and is applicable to other languages similar to Farsi, such as Arabic.
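The horizontal projection profile is simple to compute; a tiny sketch with NumPy on a toy binary "text line" (real inputs would be binarized document images):

```python
import numpy as np

def horizontal_projection(binary_image):
    """Horizontal projection profile: count of ink pixels in each row.

    binary_image : 2-D array with 1 for text (ink) pixels, 0 for background.
    The shape of this profile over a text line varies with typeface and
    size, which is what makes it usable as a font feature.
    """
    return binary_image.sum(axis=1)

# Toy 6x8 "text line": two horizontal strokes of different weights.
img = np.zeros((6, 8), dtype=int)
img[1, 1:7] = 1
img[3:5, 2:6] = 1
print(horizontal_projection(img))  # [0 6 0 4 4 0]
```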

Yaghoub Pourasad, Houshang Hassibi, Azam Ghorbani
Gaze Interaction – A Challenge for Inclusive Design

For many people, gaze interaction is the only means of communication, owing to severely limiting conditions ranging from traumatic brain injury and cerebral palsy to multiple sclerosis. It undoubtedly holds great promise for disabled people, and the 'design for all' slogan is strongly supported by this feature. On the other hand, however, people who do not have such special needs are in effect excluded from using gaze technology, even though a lot of promising research is being done in this field. There are several limitations, and at present there is no model that can guide the design of a sustainable, stable eye-tracking system for the majority of people. This paper examines such limitations of gaze interaction and proposes an accessibility passport model to overcome the challenges, thereby opening the opportunity for better design of gaze interaction in pursuit of universal and inclusive design.

Moyen Mustaquim
Hierarchical Key-Frame Based Video Shot Clustering Using Generalized Trace Kernel

In this paper, we propose a new generalized trace kernel for measuring the similarity between data points in matrix form that have the same number of rows but different numbers of columns. We also propose a hierarchical clustering algorithm based on this kernel function. The clustering algorithm has been utilized in a video indexing system to cluster video shots. Experimental results on the TRECVID 2006 data set confirm the effectiveness of the proposed kernel function and clustering algorithm.

Ali Amiri, Neda Abdollahi, Mohammad Jafari, Mahmood Fathy
Improving Face Recognition Based on Characteristic Points of Face Using Fuzzy Interface System

Three main expressions are often considered when identifying facial states: happy, sad, and surprised. Facial states are created by changes at different points of the face. In this article, eight characteristic points of the face are first considered, and five different features are then extracted from them; these features form a feature vector for each facial state. We then build a rule database based on these features and, using a fuzzy inference system and suitable membership functions, present a method for identifying the happy, sad, and surprised states. Compared with other available methods, the approach has important advantages: it uses fewer feature points and features, and it achieves higher accuracy.

Mohammadreza Banan, Alireza Soleimany, Esmaeil Zeinali Kh, Akbar Talash
Modes Detection of Color Histogram and Merging Algorithm by Mode Adjacency Graph Analysis for Color Image Segmentation

In this work we present an approach for color image segmentation based on pixel classification. Such methods rest on the assumption that meaningful regions are defined by homogeneous colors and give rise to compact clusters in the color space, each cluster defining a class of pixels that share similar color properties. The pixel classes are constructed by detecting the modes of the color histogram of the image. To identify these modes, mathematical morphology techniques are used: applying watersheds to the color histogram leads to an over-partitioning of the color plane, which can then be processed by mode-merging algorithms based on mode adjacency graph analysis. Depending on the merging criterion, we present two merging algorithms in this paper: the first relies on the gravity centers of the modes as the merging criterion, and the second introduces a new merging criterion, the spatial-color compactness degree.

Halima Remmach, Aziza Mouradi, Abderrahmane Sbihi, Ludovic Macaire, Olivier Losson
Video Stabilization and Completion Using the Scale-Invariant Features and RANSAC Robust Estimator

Video stabilization is an important video enhancement process which attempts to remove unwanted vibrations from video frames. Software solutions to this problem consist of three main stages, namely "motion estimation", "motion smoothing and correction" and "frame completion". In motion estimation, a global motion model is determined by extracting a set of feature points within frames and matching them across neighboring frames; we use the Scale-Invariant Feature (SIFT) and the RANSAC robust estimator to acquire the motion parameters. The effect of the high-frequency components related to the unwanted vibrations is then removed using a spatio-temporal Gaussian lowpass filter. A modified mosaicing algorithm is finally applied to complete the undefined regions resulting from motion correction: considering the original unstabilized neighboring frames and their associated motion models, the value of an undefined pixel is determined by minimizing the distance between its nearest defined pixels and the corresponding pixels in the neighboring frames.
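A plausible shape for the motion-estimation stage (not the authors' code; OpenCV, with illustrative threshold values) is SIFT matching filtered by Lowe's ratio test, followed by RANSAC fitting of a global affine model:

```python
import cv2
import numpy as np

def estimate_global_motion(prev_gray, curr_gray):
    """Estimate inter-frame global motion from SIFT matches with RANSAC.

    Returns a 2x3 affine matrix mapping prev -> curr, or None if there
    are too few matches. Parameter values here are illustrative defaults.
    """
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    # Lowe's ratio test keeps only distinctive matches.
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]
    if len(good) < 4:
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in good])
    dst = np.float32([kp2[m.trainIdx].pt for m in good])
    # RANSAC rejects matches on moving foreground objects as outliers.
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC,
                                       ransacReprojThreshold=3.0)
    return M
```

Smoothing the resulting per-frame motion parameters over time (the paper's Gaussian lowpass stage) would then separate intentional camera motion from vibration.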

Moones Rasti, Mohammad Taghi Sadeghi

Natural Language Processing

Improvement in Automatic Classification of Persian Documents by Means of Support Vector Machine and Representative Vector

A Representative Vector is a vector that includes related words and the degrees of their relationships. In this paper, the effect of using this kind of vector on the automatic classification of Persian documents is examined. In this method, documents are first preprocessed and extra words as well as word stems are identified. Next, through one of the known methods, features are extracted for each category. The Representative Vector, built from the extracted features, then yields more precise words that better represent each category. The experimental findings show that precision and recall can be increased significantly by omitting extra words and adding a few words to the Representative Vectors, together with the use of a well-known classification model such as the Support Vector Machine (SVM).

Jafari Ashkan, Ezadi Hamed, Hossennejad Mihan, Noohi Taher
Recognition of Isolated Handwritten Persian Characters Using Hamming Network

In this paper we propose a system for the recognition of isolated handwritten Persian characters. A novel method based on derivatives is used for feature extraction, and a Hamming network is used for classification. A Hamming network is a neural network, fully connected from the input layer to every neuron in the output layer, which calculates the resemblance between input patterns and training patterns. The training and test patterns were drawn from a dataset of over 47,965 patterns. The 32 characters of the Persian alphabet were grouped into 9 classes such that the characters within each class are very similar to one another. The classification rate with this approach is about 95 percent, and the recognition rate within each class is about 90 percent. The results show an increase in recognition rates compared with our previous work.
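The matching layer of a Hamming network reduces to maximizing the inner product with stored bipolar prototypes (equivalent to minimizing Hamming distance); a toy sketch with hypothetical class names:

```python
import numpy as np

def hamming_net_classify(patterns, labels, x):
    """Nearest-prototype classification in the spirit of a Hamming network:
    the first layer scores resemblance to each stored bipolar prototype;
    a winner-take-all stage picks the best match.

    patterns : (n, d) array of stored prototypes with entries in {-1, +1}
    x        : (d,) bipolar input pattern
    """
    # Dot product with bipolar vectors = d - 2 * (Hamming distance),
    # so the largest score is the smallest Hamming distance.
    scores = patterns @ x
    return labels[int(np.argmax(scores))]

protos = np.array([[1, 1, 1, -1], [-1, -1, 1, 1], [1, -1, -1, -1]])
labels = ["alef-like", "beh-like", "dal-like"]  # hypothetical class names
print(hamming_net_classify(protos, labels, np.array([1, 1, -1, -1])))
```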

Masoud Arabfard, Meisam Askari, Milad Asadi, Hosein Ebrahimpour-Komleh
Dual Particle-Number RBPF for Speech Enhancement

In this paper, we propose a new single-channel dual particle-number Rao-Blackwellized particle filter (RBPF). Additive noise, i.e., white and colored noise, corrupts the speech signal and degrades its intelligibility and quality. Quality is measured with the ITU-T P.862.1 (PESQ) score, and the computational cost of the implementation is also important; particular emphasis is placed on the removal of colored noise, such as industrial noise. We first describe related methods such as the Kalman filter and the particle filter. The simulation results show that the proposed method provides a significant gain in the ITU-T P.862.1 score. Computational complexity is reduced by separating silence from speech frames and assigning a different number of particles to each type of frame.

Seyed Farid Mousavipour, Saeed Seyedtabaii

Networks

Exploring Congestion-Aware Methods for Distributing Traffic in On-Chip Networks

The performance of a network-on-chip (NoC) is highly affected by network congestion. Congestion can increase the delay of packets routed between sources and destinations, so it should be avoided. The routing decision can be based on local or non-local congestion information: methods based on local congestion information are generally simple but unable to balance the traffic load efficiently, while methods using non-local congestion information are more complex but provide better traffic distribution over the network. In this paper, we explore several proposed locally and non-locally congestion-aware methods, discuss their advantages and disadvantages, and compare them with respect to the latency metric.

Masoumeh Ebrahimi, Masoud Daneshtalab, Pasi Liljeberg, Juha Plosila, Hannu Tenhunen
A Model for Traffic Prediction in Wireless Ad-Hoc Networks

In recent years, wireless ad-hoc networks have been considered one of the most important technologies, and their application domains are gaining importance in many areas. One of these is controlling and managing packet traffic. In this paper, our goal is to monitor the performance of every section of a factory pipeline by checking the network periodically; along the factory, the traffic is modeled as a Poisson process. We show that, by obtaining the packet traffic at time t for each node in the wireless ad-hoc network, we can train a neural network and successfully predict the traffic at time t+1 for each node. In this way we can identify inefficient sections of the factory and try to fix them. Experimental results show that the proposed model has acceptable performance.
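As a rough illustration of the setup (synthetic data, not the paper's), one can generate Poisson traffic with a drifting rate and train a small network to predict the next count from a sliding window of past counts:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Synthetic per-node traffic: Poisson counts whose rate drifts over time.
T = 1000
rate = 5 + 2 * np.sin(np.linspace(0, 8 * np.pi, T))
traffic = rng.poisson(rate)

# Supervised pairs: a window of past counts -> the count at t+1.
w = 5
X = np.array([traffic[i:i + w] for i in range(T - w)])
y = traffic[w:]

split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("mean absolute error:", np.mean(np.abs(pred - y[split:])))
```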

Mahsa Torkamanian Afshar, M. T. Manzuri, Nasim Latifi
Predicting the Location of Nodes in Ad Hoc Network by Lazy Learning Method

Node position information is important in many ad-hoc network applications. In many ad-hoc networks, such as military or mobile sensor networks, one or more central nodes need to know the locations of all the nodes, and in some routing protocols, especially location-aware protocols (LAR), the nodes need each other's locations. Knowing node locations therefore matters in such applications, but propagating node position information through the network is a big challenge, since increasing the number of nodes increases network traffic exponentially. In this paper, we apply a lazy learning method to predict node locations, so nodes do not need to propagate their location information regularly. This method reduces the traffic overhead of the network, which increases the network's scalability.
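Lazy learning here suggests a nearest-neighbor style predictor that stores observed positions and answers queries by local interpolation; a minimal sketch on a synthetic trajectory (not the paper's data or exact method):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
# Hypothetical mobility history of one node: position (x, y) sampled at
# known times; the trajectory here is a noisy straight-line walk.
t = np.arange(0, 100.0).reshape(-1, 1)
pos = np.column_stack([0.8 * t.ravel(), 0.5 * t.ravel()]) \
      + rng.normal(0, 0.5, (100, 2))

# Lazy learning: no global model is fit; prediction interpolates from the
# stored samples nearest (in time) to the query.
knn = KNeighborsRegressor(n_neighbors=5, weights="distance").fit(t, pos)
print("predicted position at t=100.5:", knn.predict([[100.5]])[0])
```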

Mohammad Jafari, Neda Abdollahi, Hossein Mohammadi
Advanced Dynamic Bayesian Network Optimization Model Applied in Decomposition Grid Task Scheduling

This paper uses the Bayesian optimization algorithm and a decomposition approach to solve the task scheduling problem in probabilistic grid computing systems, overcoming the efficiency problem since scheduling belongs to the NP-complete problems. It introduces a Bayesian optimization model that combines dynamic Bayesian networks, to manage uncertainty, with evolutionary algorithms to solve the problem. Dynamic Bayesian networks are used because the performance, reliability and cost of resources vary simultaneously over time and their availability is uncertain. The method decomposes the global problem to make the scheduling process simpler and achieve the QoS objectives efficiently: instead of sending jobs to all resources, local areas of resources, each with a controller, are considered and jobs are sent to them. Using the GridSim toolkit, it is shown that this model achieves QoS objectives such as minimizing cost and time more efficiently.

Leily Mohammad Khanli, Sahar Namyar

Cluster Computing

An Efficient Live Process Migration Approach for High Performance Cluster Computing Systems

High performance cluster computing systems have used process migration to balance the workload on their constituent computers and thus improve their overall throughput and performance. They however fail to migrate processes lively, in the sense that moving processes are blocked (frozen) and are non-responsive to any requests sent to them while they are moving to their new destinations and have not yet resumed their work there. Previous efforts to prevent losing requests during process migration have been inefficient. We present a more efficient approach that keeps migrating processes live and responsive to requests during their journey to their new destinations. To achieve this, we add a new state, called the exile state, to the traditional process state model of operating systems. A migratory process changes its status to the exile state before starting to migrate, and all requests to it are executed locally on its old location until the process reaches its destination computer and resumes its work anew. We show that our approach improves the performance of clusters supporting process migration by decreasing freeze time.

Ehsan Mousavi Khaneghah, Najmeh Osouli Nezhad, Seyedeh Leili Mirtaheri, Mohsen Sharifi, Ashakan Shirpour
Local Robustness: A Process Migration Criterion in HPC Clusters

Cluster computing systems require managing their resources and running processes dynamically in an efficient manner. Preemptive process migration is such a mechanism that tries to improve the overall performance of a cluster system running independent processes. In this paper, we show that blind migration of processes at runtime by such a mechanism does not lead to better performance. Instead, the preemptive process migration mechanism requires a criterion to determine whether the migration of a process would enhance cluster performance. We introduce a criterion called local robustness to guide the mechanism in this respect. The results of our experiments on a real implementation of a mechanism using this criterion have shown improvements to the overall performance of a Mosix cluster in terms of system response time compared to when processes were migrated blindly.

Sina Mahmoodi Khorandi, Seyedeh Leili Mirtaheri, Ehsan Mousavi Khaneghah, Mohsen Sharifi, Siavash Ghiasvand
Data Clustering Using Big Bang–Big Crunch Algorithm

The Big Bang–Big Crunch (BB–BC) algorithm is a new optimization method based on one of the theories of the evolution of the universe, namely the Big Bang and Big Crunch theory. In the Big Bang phase, candidate solutions to the optimization problem are randomly generated and spread all over the search space. In the Big Crunch phase, the randomly distributed candidates are drawn into a single representative point via a center-of-population or minimal-cost approach. This paper presents a novel BB–BC based approach for data clustering. The simulation results indicate the applicability and potential of this algorithm for data clustering.
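A minimal sketch of BB–BC applied to picking k cluster centers (synthetic data; the minimal-cost variant of the Big Crunch step, with an assumed shrinking spread schedule):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: two Gaussian blobs; we search for k=2 cluster centers.
data = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
k, n_candidates, dim = 2, 30, data.shape[1]

def cost(centers):
    """Sum of squared distances from each point to its nearest center."""
    d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
    return np.sum(d.min(axis=1) ** 2)

lo, hi = data.min(0), data.max(0)
best = rng.uniform(lo, hi, (k, dim))
for it in range(1, 51):
    # Big Bang: scatter candidates around the current representative
    # point, with a spread that shrinks as iterations proceed.
    spread = (hi - lo) / it
    cands = best + rng.normal(0, 1, (n_candidates, k, dim)) * spread
    cands = np.concatenate([cands, best[None]])  # keep the elite
    # Big Crunch: collapse to the best candidate (minimal-cost variant).
    best = min(cands, key=cost)
print("centers:\n", best, "\ncost:", cost(best))
```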

Abdolreza Hatamlou, Salwani Abdullah, Masumeh Hatamlou

Discrete Systems

Improvement of the Performance of QEA Using the History of Search Process and Backbone Structure of Landscape

To improve the exploration ability of the Quantum Evolutionary Algorithm (QEA) and help the algorithm escape from local optima, this paper proposes a novel operator which uses the history of the search process during previous iterations to lead the q-individuals toward better parts of the search space. In the proposed method, the solutions found in each iteration are stored in a history set; this history contains information about the fitness landscape and about the structure of better and worse solutions. The proposed operator exploits this information to form a picture of the backbone structure of the fitness landscape and lead the q-individuals to search better parts of the search space. The proposed algorithm is tested on the Knapsack Problem, the Trap Problem, the Max-3-Sat Problem and 13 numerical benchmark functions. Experimental results show better performance for the proposed algorithm than for the original version of QEA.

M. H. Tayarani N., M. Beheshti, J. Sabet, M. Mobasher, H. Joneid
A New Initialization Method and a New Update Operator for Quantum Evolutionary Algorithms in Solving Fractal Image Compression

The Fractal Image Compression (FIC) problem is a combinatorial problem that has recently become one of the most promising encoding technologies in image compression. While the Quantum Evolutionary Algorithm (QEA) is a novel optimization algorithm proposed for a class of combinatorial optimization problems, it has not yet been widely used for FIC. Using statistical information about range and domain blocks, and a novel magnetic update operator, this paper proposes a new algorithm for solving FIC. The statistical information about domain and range blocks is used in the initialization step of QEA. In the proposed update operator, the q-individuals are magnetic particles applying attractive forces to each other; the force two particles apply to each other depends on their fitness and their distance. The proposed algorithm is tested on several images, and the experimental results show better performance for the proposed algorithm than for QEA and GA. In comparison with the full search algorithm, the proposed algorithm reaches comparable results with much less computational complexity.

M. H. Tayarani N., M. Beheshti, J. Sabet, M. Mobasher, H. Joneid
Partial and Random Updating Weights in Error Back Propagation Algorithm

One of the recurring discussions in the use of multilayer perceptron neural networks (MLPNNs) as tools for data classification concerns the error back-propagation algorithm used to train the network. It faces challenges on large-scale and heterogeneous data, such as lack of memory and slow convergence; moreover, the computational load is high. The method proposed in this paper updates a random subset of the weights, instead of all of them, in each iteration, which decreases the computational cost, alleviates the memory problem and somewhat increases convergence speed. Results of experiments on two standard datasets demonstrate the efficiency of the algorithm.
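The core idea (update only a random subset of weights per iteration) is easy to sketch; below is a toy NumPy back-propagation loop on XOR with a random gradient mask, purely illustrative of the scheme rather than the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
# XOR toy problem; the third input column is a constant bias feature.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1 = rng.normal(0, 1, (3, 8))   # input -> 8 hidden units
W2 = rng.normal(0, 1, (8, 1))   # hidden -> output
lr, frac = 0.5, 0.5             # learning rate; fraction of weights updated

sigmoid = lambda z: 1 / (1 + np.exp(-z))
for step in range(20000):
    h = sigmoid(X @ W1)                      # forward pass
    out = sigmoid(h @ W2)
    # Standard backprop gradients for squared error.
    d_out = (out - y) * out * (1 - out)
    g2 = h.T @ d_out
    g1 = X.T @ ((d_out @ W2.T) * h * (1 - h))
    # Partial, random update: a Bernoulli mask leaves roughly half of
    # the weights untouched in each iteration.
    W2 -= lr * g2 * (rng.random(W2.shape) < frac)
    W1 -= lr * g1 * (rng.random(W1.shape) < frac)
print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```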

Nasim Latifi, Ali Amiri
A Distributed Algorithm for γ-Quasi-Clique Extractions in Massive Graphs

In this paper, we investigate the challenge of increasing graph sizes when searching for γ-quasi-cliques. We propose an algorithm based on the MapReduce programming model. In the proposed solution, we use some known techniques to prune unnecessary and inefficient parts of the search space and divide the massive input graph into smaller parts; the data for processing each part is then sent to a single computer. The evaluation shows that we can substantially reduce the processing time for large graphs and, in addition, there is no limit on graph size in our algorithm.
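γ-quasi-clique definitions vary slightly; under the common degree-based one (every vertex adjacent to at least γ·(|S|−1) others in the candidate set S), a membership check, which a pruning step could reuse, is straightforward:

```python
def is_gamma_quasi_clique(adj, nodes, gamma):
    """Check the degree-based γ-quasi-clique condition: every vertex of
    the candidate set has at least gamma * (|S| - 1) neighbors inside it.

    adj   : dict mapping a vertex to the set of its neighbors
    nodes : candidate vertex set S
    """
    s = set(nodes)
    return all(len(adj[v] & s) >= gamma * (len(s) - 1) for v in s)

# Toy graph: a triangle {a, b, c} plus a pendant vertex d attached to c.
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(is_gamma_quasi_clique(adj, {"a", "b", "c"}, 1.0))       # True
print(is_gamma_quasi_clique(adj, {"a", "b", "c", "d"}, 0.5))  # False: d is too sparse
```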

Arash Khosraviani, Mohsen Sharifi
Backmatter
Metadata
Title
Innovative Computing Technology
Editors
Pit Pichappan
Hojat Ahmadi
Ezendu Ariwa
Copyright Year
2011
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-27337-7
Print ISBN
978-3-642-27336-0
DOI
https://doi.org/10.1007/978-3-642-27337-7
