
About this Book

This book contains cutting-edge research material presented by researchers, engineers, developers, and practitioners from academia and industry at the International Conference on Computational Intelligence, Cyber Security and Computational Models (ICC3) organized by PSG College of Technology, Coimbatore, India during December 19–21, 2013. The materials in the book include theory and applications covering the design, analysis, and modeling of the key areas. The book will be useful material for students, researchers, and professionals, as well as academicians, in understanding current research trends and findings and the future scope of research in computational intelligence, cyber security, and computational models.



Keynote Address


The Robber Strikes Back

We consider the new game of Cops and Attacking Robbers, which is identical to the usual Cops and Robbers game except that if the robber moves to a vertex containing a single cop, then that cop is removed from the game. We study the minimum number of cops needed to capture a robber on a graph G, written cc(G). We give bounds on cc(G) in terms of the cop number of G in the classes of bipartite graphs and diameter two, K_{1,m}-free graphs.

Anthony Bonato, Stephen Finbow, Przemysław Gordinowicz, Ali Haidar, William B. Kinnersley, Dieter Mitsche, Paweł Prałat, Ladislav Stacho

Some Applications of Collective Learning

Much of the real-world data have complex dependencies between the individual tuples. For example, the chance that a patient has a particular disease depends on the prevalence of the disease in the immediate neighborhood. One approach to handling such linked data is “collective learning.” In collective learning, one deals with a set of data points taken at a time. The dependencies between the data points are modeled as a graph, with the nodes representing the tuples and the edges between them representing the influence of the tuples on one another. A variety of domains lend themselves naturally to such graph-based modeling. There have been a variety of collective learning and inferencing approaches that have been proposed in the literature. In this talk, I will give a brief introduction to collective learning and describe two applications.
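
As an illustration of the neighborhood dependence described above, the sketch below (toy data and names, not from the talk) assigns each unlabeled node the majority label of its labeled neighbors and iterates until the assignment stabilizes, a minimal form of collective inference:

```python
from collections import Counter

def propagate_labels(adj, seeds, iterations=10):
    """Iteratively give each unlabeled node the majority label of its
    labeled neighbors (a minimal collective-inference sketch)."""
    labels = dict(seeds)
    for _ in range(iterations):
        updated = dict(labels)
        for node, neighbors in adj.items():
            if node in seeds:
                continue  # seed labels stay fixed
            votes = Counter(labels[n] for n in neighbors if n in labels)
            if votes:
                updated[node] = votes.most_common(1)[0][0]
        if updated == labels:
            break
        labels = updated
    return labels

# Toy "patient contact" graph: two communities joined by one edge.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
result = propagate_labels(adj, {0: "disease", 4: "healthy"})
```

Nodes 1 and 2 inherit the label of their community's seed, as the disease-prevalence example suggests.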

Balaraman Ravindran

Subconscious Social Computational Intelligence

The success of social network Web services mediating social interactions, as well as the increasing observation capabilities of human interactions in real life, has prompted the emergence of new computational paradigms, namely social computing, computational social science, and social intelligence. Subconscious social intelligence appears when the social network service is able to provide solutions, generated by a hidden intelligent layer, to problems posed by the social player. This paper discusses some features of subconscious social intelligence and ensuing challenges for machine learning systems implementing the hidden intelligent layer.

M. Graña

Modeling Heavy Tails in Traffic Sources for Network Performance Evaluation

Heavy tails in workloads (file sizes, flow lengths, service times, etc.) have a significant negative impact on the performance of queues and networks. In the context of the famous Internet file size data of Crovella and some very recent data sets from a wireless mobility network, we examine the new class of LogPH distributions introduced by Ramaswami for modeling heavy-tailed random variables. The fits obtained are validated using separate training and test data sets and also in terms of the ability of the model to predict performance measures accurately as compared with a trace-driven simulation, using NS-2, of a bottleneck Internet link running the TCP protocol. The use of the LogPH class is motivated by the fact that these distributions have a power law tail and can approximate any distribution arbitrarily closely not just in the tail but over its entire range. In many practical contexts, although the tail exerts a significant effect on performance measures, the bulk of the data is in the head of the distribution. Our results, based on a comparison of the LogPH fit with other classical model fits such as Pareto, Weibull, LogNormal, and Log-t, demonstrate the greater accuracy achievable by the use of LogPH distributions and also confirm the importance of modeling the distribution over its entire range and not just in the tail.
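
The power-law tail mentioned above can be checked empirically with a standard tail-index tool such as the Hill estimator; the sketch below (an illustrative stand-in, not the LogPH fitting procedure itself) recovers the tail index of synthetic Pareto data:

```python
import math
import random

def hill_estimator(data, k):
    """Hill estimate of the tail index alpha from the k largest samples."""
    xs = sorted(data, reverse=True)
    threshold = xs[k]  # the (k+1)-th largest order statistic
    return k / sum(math.log(xs[i] / threshold) for i in range(k))

random.seed(42)
# Pareto(alpha = 2) samples via inverse-CDF sampling: X = U**(-1/alpha)
samples = [random.random() ** -0.5 for _ in range(10000)]
alpha_hat = hill_estimator(samples, k=1000)
```

For exact Pareto data the estimate concentrates near the true index, here alpha = 2; for real workload data the choice of k trades bias against variance.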

Vaidyanathan Ramaswami, Kaustubh Jain, Rittwik Jana, Vaneet Aggarwal

The Future of Access Control: Attributes, Automation, and Adaptation

Access control has been and will always be one of the center pieces of cyber security. This talk will focus on three necessary characteristics of access control in future systems: attributes, automation, and adaptation. Future access control policies will be built around attributes, which are properties of relevant entities, so they can apply to large numbers of entities while being fine-grained at the same time. This transition to attribute-based access control has been in process for about two decades and is approaching a major inflection point. Automation and adaptation, however, are newer concepts. Automation seeks to break away from requiring human users to configure access control policies, by delegating more of the routine tasks to smart software. Adaptation recognizes that access control must adjust as circumstances change. This talk will speculate on a future built around these three synergistic elements and on the research and technology challenges in making this vision a reality.

Ravi Sandhu

Optimal Control for an M^X/G/1/N+1 Queue with Two Service Modes

A finite-buffer queueing model is considered with batch Poisson input and controllable service rate. A batch that upon arrival does not fit in the unoccupied places of the buffer is partially rejected. A decision to change the service mode can be made at service completion epochs only, and vacation (switch-over) times are involved in preparing the new mode. During a switch-over time, service is disabled. For the control of this model, three optimization criteria are considered: the average number of jobs in the buffer, the fraction of lost jobs, and the fraction of batches not fully accepted. Using Markov decision theory, the optimal switching policy can be determined for any of these criteria by the value-iteration algorithm. In the calculation of the expected one-step costs and the transition probabilities, an essential role is played by the discrete fast Fourier transform.

Rein D. Nobel, Adriaan A. N. Ridder

Computational Intelligence


A Novel Approach to Gene Selection of Leukemia Dataset Using Different Clustering Methods

Gene datasets from microarray comprise a large number of genes. Clustering is a widely used approach for grouping similar kinds of genes. The main objective of this paper is to identify the optimal subset of genes from the leukemia dataset in order to classify the leukemia cancer. Different clustering approaches such as K-means (KM) clustering, fuzzy C-means (FCM) clustering, and modified K-means (MKM) clustering have been adopted in this research. The clusters obtained from these methods are further clustered using K-means sample-wise (by omitting class values), and the results are compared with the ground truth value to evaluate the performance of the different clustering methods. The highly correlated genes are selected from the cluster that produces more accurate classification results. It is observed that FCM (gene-wise clustering) with K-means (sample-wise clustering) produces better accuracy, and the resultant genes have been identified.
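
The K-means step applied gene-wise and sample-wise above follows Lloyd's familiar assign-then-update loop; a minimal pure-Python sketch on toy data (not the paper's leukemia pipeline):

```python
import math

def kmeans(points, centers, iterations=50):
    """Plain Lloyd's K-means: assign each point to the nearest center,
    then recompute each center as the mean of its cluster."""
    for _ in range(iterations):
        clusters = [[] for _ in centers]
        for p in points:
            idx = min(range(len(centers)),
                      key=lambda i: math.dist(p, centers[i]))
            clusters[idx].append(p)
        new_centers = [
            tuple(sum(p[d] for p in cl) / len(cl)
                  for d in range(len(points[0])))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
        if new_centers == centers:  # converged
            break
        centers = new_centers
    return centers, clusters

# Two well-separated groups standing in for "expression profiles".
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, clusters = kmeans(points, centers=[(0, 0), (10, 10)])
```

With these seeds the loop converges in two passes, splitting the six points into two clusters of three.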

P. Prasath, K. Perumal, K. Thangavel, R. Manavalan

A Study on Enhancing Network Stability in VANET with Energy Conservation of Roadside Infrastructures Using LPMA Agent

Designing an intelligent transportation system (ITS) for vehicular ad hoc networks (VANETs) with seamless connectivity and appropriate energy conservation of roadside units (RSUs) that take part in communication between vehicular nodes is a challenging task because of the high mobility or dynamic nature of these networks. The nodes in a VANET, whether stationary or moving, are limited in terms of storage capacity, reliability, and energy. The main concern about these networks is whether the existing protocols meet the feasibility demands of these highly mobile VANETs and make them less dynamic, i.e., more stable. In this paper, we discuss a clustering-based architecture designed with mobile agents and a combination of the MH-LEACH and power-efficient gathering in sensor information systems (PEGASIS) protocols for building stability in VANETs for seamless communication, along with energy conservation of the other RSUs, to improve quality of service.

T. Karthikeyan, N. Sudha Bhuvaneswari

An Intuitionistic Fuzzy Approach to Fuzzy Clustering of Numerical Dataset

Fuzzy c-means (FCM) clustering is one of the most widely used fuzzy clustering algorithms. However, the main disadvantage of this algorithm is its sensitivity to noise and outliers. The intuitionistic fuzzy set is a suitable tool to cope with imperfectly defined facts and data, as well as with imprecise knowledge. So far, there has been little investigation of the FCM algorithm for clustering intuitionistic fuzzy data. This paper focuses mainly on two aspects. First, it proposes an intuitionistic fuzzy representation (IFR) scheme for numerical datasets and applies the modified FCM clustering to intuitionistic fuzzy (IF) data, comparing the results with those for crisp and fuzzy data. Second, in clustering IF data, different IF similarity measures are studied and a comparative analysis is carried out on the results. The experiments are conducted on numerical datasets from the UCI machine learning repository.

N. Karthikeyani Visalakshi, S. Parvathavarthini, K. Thangavel

ELM-Based Ensemble Classifier for Gas Sensor Array Drift Dataset

Much work has been done on classification over the past fifteen years to develop adapted techniques and robust algorithms. The problem of data correction in the presence of simultaneous sources of drift, other than sensor drift, should also be investigated, since this is often the case in practical situations. ELM is a competitive machine learning technique, which has been applied in different domains for classification. In this paper, ELM with different activation functions has been implemented for the gas sensor array drift dataset. The experimental results show that ELM with the bipolar activation function classifies the drift dataset with an average accuracy of 96 %, higher than the other functions. The proposed method is also compared with SVM.

D. Arul Pon Daniel, K. Thangavel, R. Manavalan, R. Subash Chandra Boss

Hippocampus Atrophy Detection Using Hybrid Semantic Categorization

Medical image analysis plays a vital role in the diagnosis and prognosis of brain-related diseases. MR images are often preferred for brain anatomy analysis for their high resolution. In this work, the components of the brain are analyzed to identify and locate the region of interest (hippocampus). The internal structures of the brain are segmented via a combination of the wavelet and watershed approaches. The segmented regions are categorized through semantic categorization. The region of interest is identified and cropped, and periodical volume analysis is performed to identify the atrophy. The atrophy detection of the proposed system is found to be more effective than traditional manual identification by radiologists. Performance measures such as sensitivity, specificity, and accuracy are used to evaluate the system.

K. Selva Bhuvaneswari, P. Geetha

Hyper-Quadtree-Based K-Means Algorithm for Software Fault Prediction

Software faults are recoverable errors in a program that occur due to programming errors. Software fault prediction is subject to problems like the non-availability of fault data, which makes the application of supervised techniques difficult. In such cases, unsupervised techniques are helpful. In this paper, a hyper-quadtree-based K-means algorithm has been applied for predicting the faults in a program module. This paper contains two parts. First, the hyper-quadtree is applied to the software fault prediction dataset for the initialization of the K-means clustering algorithm. An input parameter Δ governs the initial number of clusters and cluster centers. Second, the cluster centers and the number of cluster centers obtained from the initialization algorithm are used as the input for the K-means clustering algorithm for predicting the faults in the software modules. The overall error rate of this prediction approach is compared with the other existing algorithms.
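
In the spirit of quadtree-style seeding described above (a one-level sketch with made-up module metrics, not the paper's hyper-quadtree construction), initial centers can be taken from the centroids of the occupied quadrants of the data's bounding box:

```python
def quadrant_centers(points):
    """Seed cluster centers from the centroids of occupied quadrants
    of the bounding box (one level of a quadtree-style split)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    mx = (min(xs) + max(xs)) / 2  # split coordinates
    my = (min(ys) + max(ys)) / 2
    buckets = {}
    for x, y in points:
        key = (x > mx, y > my)  # which quadrant the point falls in
        buckets.setdefault(key, []).append((x, y))
    return [
        (sum(x for x, _ in b) / len(b), sum(y for _, y in b) / len(b))
        for b in buckets.values()
    ]

# Hypothetical normalized module metrics (e.g., complexity vs. churn).
metrics = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
seeds = quadrant_centers(metrics)
```

Only occupied quadrants yield a seed, so the data itself bounds the initial number of clusters, which is the role Δ plays in the paper's initialization.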

Rakhi Sasidharan, Padmamala Sriram

Measurement of Volume of Urinary Bladder by Implementing Localizing Region-Based Active Contour Segmentation Method

Ultrasound has become increasingly important in medicine and has now taken its place along with X-ray and nuclear medicine as a diagnostic tool. Its main attraction as an imaging modality lies in its non-invasive characteristic and its ability to distinguish interfaces between soft tissues. Diagnostic ultrasound can be used to detect cysts and tumors in the abdominal organs. Considering the importance of measuring the volume of the urinary bladder using diagnostic ultrasound imaging, an image processing technique of edge-based image segmentation has been employed. The technique discussed in this paper deals with a method for automatic edge-based segmentation of the urinary bladder using the localized region-based active contour method on a 2D ultrasound image, for finding the area and volume of the urinary bladder accurately. The study of area and volume provides valuable information about abnormalities of the bladder and also the extent of abnormality. Experimental results show good performance of the proposed model in segmenting the urinary bladder to measure its exact area and volume.

B. Padmapriya, T. Kesavamurthi, B. Abinaya, P. Hemanthini

Improved Bijective-Soft-Set-Based Classification for Gene Expression Data

One of the important problems in using gene expression profiles to forecast cancer is how to effectively select a few useful genes to build accurate models from a large number of genes. Classification is also a major issue in data mining. Classification problems in the medical area often involve classifying medical datasets based on the outcomes of medical analysis or reports of medical actions by the medical practitioner. In this study, a prediction model is proposed for the classification of cancer based on gene expression profiles. Feature selection also plays a vital role in cancer classification. Feature selection techniques can be used to extract the marker genes and improve classification accuracy efficiently by removing unwanted, noisy, and redundant genes. The proposed study discusses the bijective-soft-set-based classification method for gene expression data of three different cancers: breast cancer, lung cancer, and leukemia. The proposed algorithm is also compared with fuzzy-soft-set-based classification algorithms, fuzzy KNN, and the k-nearest neighbor approach. Comparative analysis shows that the proposed approach achieves good accuracy over the other methods.

S. Udhaya Kumar, H. Hannah Inbarani, S. Senthil Kumar

Mammogram Image Classification Using Rough Neural Network

Breast cancer is the second leading cause of cancer deaths in women, and it is the most common type of cancer prevalent among women. Detecting a tumor using a mammogram is a difficult task because of the complexity of the image. This brings the necessity of creating automatic tools to determine whether a tumor is present or not. In this paper, rough set theory (RST) is integrated with a back-propagation network (BPN) to classify digital mammogram images. Basically, RST is used to handle uncertain data. Mammogram images are acquired from the MIAS database. Artifacts and labels are removed using the vertical and horizontal sweeping method. RST has also been used for pectoral muscle removal and segmentation. Features are extracted from the segmented mammogram image using GLCM, GLDM, SRDM, NGLCM, and GLRM. The features are then normalized, discretized, and reduced using RST. After that, classification is performed using RNN. The experimental results show that the RNN performs better than the BPN in terms of classification accuracy.

K. T. Rajakeerthana, C. Velayutham, K. Thangavel

Performance Assessment of Kernel-Based Clustering

Kernel methods are methods that, by replacing the inner product with a positive definite function, implicitly perform a nonlinear mapping of input data into a high-dimensional feature space. Various types of kernel-based clustering methods have been studied by many researchers, and the Gaussian kernel, in particular, has been found to be useful. In this paper, we investigate the role of the kernel function in clustering and incorporate different kernel functions. We discuss numerical results in which different kernel functions are applied to kernel-based hybrid c-means clustering. Various synthetic data sets and a real-life data set are used for the analysis. Experimental results show that there exist other kernel functions that are as robust as the Gaussian kernel.

Meena Tushir, Smriti Srivastava

Registration of Ultrasound Liver Images Using Mutual Information Technique

Registration of medical images is done to investigate the disease process and understand normal development and aging. Image registration is the process of transforming different sets of medical image data into one coordinate system. Mutual information (MI) is a popular similarity measure for medical image registration. The preprocessing step for the registration process is done by means of two successive filters, namely the speckle reduction by anisotropic diffusion (SRAD) filter and the median filter, together called the S-mean filter. This work focuses on the registration of ultrasound liver images; a comparison of optimization techniques using MI is carried out to achieve the utmost accuracy with low computation time, making the approach well suited for clinical applications.

R. Suganya, R. Kirubakaran, S. Rajaram

Sentiment Mining Using SVM-Based Hybrid Classification Model

With the rapid growth of social networks, opinions expressed in social networks play an influential role in day-to-day life. A need for a sentiment mining model arises, so as to enable the retrieval of opinions for decision making. Though the support vector machine (SVM) has been proved to provide a good classification result in sentiment mining, the practically implemented SVM is often far from the theoretically expected level because its implementations are based on approximated algorithms due to the high complexity of time and space. To improve the limited classification performance of the real SVM, we propose a hybrid model of SVM and principal component analysis (PCA). In this paper, we apply the concept of reducing the data dimensionality using PCA to decrease the complexity of an SVM-based sentiment classification task. The experimental results for product reviews show that the proposed hybrid model of SVM with PCA outperforms a single SVM in terms of classification accuracy and the receiver-operating characteristic (ROC) curve.
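
The PCA step in the hybrid model extracts the leading eigenvector of the data's covariance matrix; a minimal power-iteration sketch on toy 2-D data (not the paper's review features) shows the idea:

```python
import math
import random

def first_principal_component(data, iterations=200):
    """Top principal direction via power iteration on the covariance
    matrix (a minimal stand-in for a full PCA library)."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(r[i] * r[j] for r in centered) / (n - 1)
            for j in range(d)] for i in range(d)]
    random.seed(0)
    v = [random.random() for _ in range(d)]  # random start vector
    for _ in range(iterations):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]  # renormalize each step
    return v

# Points lying roughly along the (1, 1) direction.
data = [(1, 1), (2, 2.1), (3, 2.9), (4, 4), (5, 5.2)]
direction = first_principal_component(data)
```

Projecting features onto the top few such directions is what reduces the dimensionality of the SVM's input.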

G. Vinodhini, R. M. Chandrasekaran

Shape Based Image Classification and Retrieval with Multiresolution Enhanced Orthogonal Polynomial Model

This paper proposes a simple edge-based shape representation with multiresolution enhanced orthogonal polynomial model and morphological operations for effective classification and retrieval. The proposed method consists of four phases: (1) orthogonal polynomial computation, (2) edge image construction, (3) approximate shape boundary extraction with morphological operation, and (4) invariant Hu’s moment computation. Initially, the orthogonal polynomials are computed and the obtained coefficients are reordered into one-level subband-like structure. Then, the edge image is obtained by utilizing gradient in horizontal and vertical directions from the detailed subbands of the reordered structure. The rough shape boundary is computed with morphological operation. The global invariant shape features such as Hu’s moment and eccentricity are extracted in the fourth phase. The obtained features are termed as global shape feature vector and are used for retrieving and classifying similar images with Canberra distance metric and Bayesian classification, respectively. The efficiency of the proposed method is experimented on a subset of standard Corel, Yale, and MPEG-7 databases. The results of the proposed method are compared with those of existing techniques, and the proposed method provides significant results.
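
The invariant Hu's moment computation in the fourth phase rests on normalized central moments; the sketch below computes the first Hu invariant, phi1 = eta20 + eta02 (with eta_pq = mu_pq / m00^2 for second-order moments), for a tiny binary image and checks its translation invariance:

```python
def hu_phi1(image):
    """First Hu moment invariant (eta20 + eta02) of a binary image,
    invariant under translation."""
    pts = [(x, y) for y, row in enumerate(image)
           for x, v in enumerate(row) if v]
    m00 = len(pts)  # zeroth moment: number of foreground pixels
    xbar = sum(x for x, _ in pts) / m00
    ybar = sum(y for _, y in pts) / m00
    mu20 = sum((x - xbar) ** 2 for x, _ in pts)  # central moments
    mu02 = sum((y - ybar) ** 2 for _, y in pts)
    return (mu20 + mu02) / m00 ** 2

# The same 2x2 square, with and without a translation offset.
square = [[1, 1], [1, 1]]
shifted = [[0, 0, 0], [0, 1, 1], [0, 1, 1]]
```

Because only central moments are used, translating the shape leaves phi1 unchanged, which is what makes such features usable as global shape descriptors.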

S. Sathiya Devi, R. Krishnamoorthi

Using Fuzzy Logic for Product Matching

Product matching is a special type of entity matching; it is used to identify similar products and merge products based on their attributes. Product attributes are not always crisp values and may take values from a fuzzy domain. Attributes with fuzzy data values are mapped to fuzzy sets by associating an appropriate membership degree with the attribute values. Crisp data values are fuzzified to fuzzy sets based on the linguistic terms associated with the attribute domain. Recently, matching dependencies (MDs) have been used to define matching rules for entity matching. In this study, MDs defined with fuzzy attributes are extracted from product offers and used as matching rules. Matching rules can aid product matching techniques in identifying the key attributes for matching. The proposed solution is applied to a specific product matching problem, and the results show that the matching rules improve matching accuracy.
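
A fuzzified attribute of the kind described above can be modeled with a simple membership function; the sketch below (hypothetical price attribute and tolerance, not from the paper) scores the degree to which two product prices "match":

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b on support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzy_match(price1, price2, tolerance=20.0):
    """Degree to which two product prices match: membership of their
    difference in a triangular 'about zero' fuzzy set."""
    return triangular(price2 - price1, -tolerance, 0.0, tolerance)
```

A matching rule can then require the membership degree of each key attribute to exceed a threshold rather than demand exact equality.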

K. Amshakala, R. Nedunchezhian

Cyber Security


A Novel Non-repudiate Scheme with Voice FeatureMarking

A digital watermark is a type of latent indicator secretly embedded in a noise-tolerant signal such as audio or image data. It is typically used to identify the ownership or copyright of material. "Watermarking" is the process of hiding digital information in a carrier signal in order to confirm the authenticity or integrity of the carrier signal as well as show the identity of its owners. Since a digital copy of data is the same as the original, digital watermarking is a passive protection tool: it simply marks the signal with the data, neither degrading it nor controlling access to it, thereby securing the communication. The proposed system introduces a novel non-repudiate scheme to ensure the ownership of every audio communication. This method embeds the prepared watermark in the transform domain of the audio signal using the fast Walsh transform. The watermark used in this technique is unique for each member and thus provides additional authenticity in every communication compared to the state of the art.

A. R. Remya, M. H. Supriya, A. Sreekumar

A Study of Spam Detection Algorithm on Social Media Networks

In the present situation, the issue of identifying spammers has received increasing attention because of its practical relevance in the field of social network analysis. The growing popularity of social networking sites has made them prime targets for spammers. By allowing users to publicize and share their independently generated content, online social networks become susceptible to different types of malicious and opportunistic user actions. Social network community users are fed with irrelevant information while surfing, due to spammers' activity. Spam pervades any information system, such as email, the Web, social, blog, or review platforms. Therefore, this paper attempts to review various spam detection frameworks that deal with the detection and elimination of spam from various sources.

Jacob Soman Saini

Botnets: A Study and Analysis

Botnets are an emerging phenomenon that is becoming one of the most significant threats to cyber security and cyber crime, as they provide a distributed platform for several illegal activities such as launching distributed denial of service (DDoS) attacks, malware dissemination, phishing, identity theft, spamming, and click fraud. The defining characteristic of botnets is the use of command and control (C&C) channels through which they can be updated and directed. We investigate the state-of-the-art research on recent botnet analysis. This paper aims to provide a concise overview of existing botnets from multiple views. The major contribution of this paper is to identify the nature of the botnet problem and find specific ways of detecting botnets.

G. Kirubavathi, R. Anitha

Comparative Study of Two- and Multi-Class-Classification-Based Detection of Malicious Executables Using Soft Computing Techniques on Exhaustive Feature Set

Detection of malware using soft computing methods has been explored extensively by many malware researchers to enable fast and infallible detection of newly released malware. In this work, we did a comparative study of two- and multi-class-classification-based detection of malicious executables using soft computing techniques on an exhaustive feature set. During this comparative study, a rigorous analysis of static features, extracted from benign and malicious files, was conducted. For the analysis, a generic framework was devised and is presented in this paper. The Reference Data Set (RDS) from the National Software Reference Library (NSRL) was explored in this study as a means of filtering out benign files during analysis. Finally, through well-corroborated experiments, it is shown that AdaBoost, when combined with algorithms such as C4.5 and random forest with two-class classification, outperforms many other soft-computing-based techniques.

Shina Sheen, R. Karthik, R. Anitha

CRHA: An Efficient HA Revitalization to Sustain Binding Information of Home Agent in MIPv6 Network

Home agents (HAs) maintain the binding information of mobile nodes (MNs). The binding cache of the HA stores the associated data of the MN. It represents a single point of failure in Mobile IPv6 networks. An efficient fault-tolerant method is essential to protect this information from any loss. This paper focuses on the revitalization of the HA at the time of failure. The standby HAs are formed into clusters within the redundant HA set. Every home agent synchronizes its bindings to the home agent with the next highest preference value. In this paper, we propose the clustered redundant home agent (CRHA) protocol to maintain the binding association within the cluster. The simulation results show that our approach is better than the existing recovery schemes.

A. Avelin Diana, V. Ragavinodhini, K. Sundarakantham, S. Mercy Shalinie

Interaction Coupling: A Modern Coupling Extractor

Interaction between methods, classes, and attributes plays a vital role in software development. Coupling is one of the most important internal quality attributes for measuring design performance. Many object-oriented metrics have been proposed to evaluate different aspects of object-oriented programs using coupling. This paper presents a new modern approach that depicts the concept of interaction coupling, and a prototype is developed to measure interaction coupling. Three metrics involving method invocation, namely response for class (RFC), message-passing coupling (MPC), and method invocation coupling (MIC), are analyzed, measured, and summarized.

S. Gomathi, P. Edith Linda

Secure Multicasting Protocols in Wireless Mesh Networks—A Survey

Security is considered one of the most significant constraints for the acceptance of any wireless networking technology. However, security in wireless mesh networks (WMNs) is still in its infancy, as little attention has been given to this topic by the research community. The WMN is a budding technology that provides low-cost, high-quality service to users as the "last mile" of the Internet. Multicasting is one of the major communication technologies primarily designed for bandwidth (BW) conservation and is an efficient way of transferring data to a group of receivers in a wireless mesh network. The goal of secured group communication is to ensure the group secrecy property such that it is computationally infeasible for an unauthorized member node to discover the group data. In this article, a comparative study of existing approaches has been carried out; in addition, the fundamental security requirements and the various security attacks in the field of secure multicasting in WMNs are also discussed.

Seetha Surlees, Sharmila Anand John Francis

Computational Models


A New Reversible SMG Gate and Its Application for Designing Two’s Complement Adder/Subtractor with Overflow Detection Logic for Quantum Computer-Based Systems

Reversible computation plays an important role in the synthesis of circuits having applications in quantum computing, low-power CMOS design, bioinformatics, and nanotechnology-based systems. Conventional logic circuits are not reversible. A reversible circuit maps each input vector into a unique output vector and vice versa. A new 4 × 4 reversible full-adder gate called the SMG gate is proposed in this paper. Three approaches to design a reversible two's complement adder/subtractor with overflow detection logic are also proposed. The first approach is based on the Toffoli and Feynman gates, the second approach is based on the Peres gate, and the third approach is based on the new SMG gate. The proposed reversible circuits are evaluated in terms of quantum cost.
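
The Toffoli and Feynman gates used in the first approach are easy to sketch and to check for reversibility, which requires the gate's mapping to be a bijection on input vectors:

```python
from itertools import product

def toffoli(a, b, c):
    """Toffoli (CCNOT) gate: flips target c iff both controls a, b are 1."""
    return (a, b, c ^ (a & b))

def feynman(a, b):
    """Feynman (CNOT) gate: flips target b iff control a is 1."""
    return (a, b ^ a)

# Reversibility check: the gate permutes the set of input vectors,
# and applying it twice restores the original input.
outputs = {toffoli(*bits) for bits in product((0, 1), repeat=3)}
```

Both gates are their own inverses, which is why cascades of them can always be "run backwards" to recover the inputs.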

S. Manjula Gandhi, J. Devishree, S. Sathish Mohan

An Algorithm for Constructing Graceful Tree from an Arbitrary Tree

A function f is called a graceful labeling of a graph G with m edges if f is an injective function from V(G) to {0, 1, 2, …, m} such that, if every edge uv is assigned the edge label |f(u) − f(v)|, then the resulting edge labels are distinct. A graph that admits a graceful labeling is called a graceful graph. The popular graceful tree conjecture states that every tree is graceful. The graceful tree conjecture has remained open for over four decades. In this paper, we introduce a new method of constructing graceful trees from a given arbitrary tree by designing an exclusive algorithm.

G. Sethuraman, P. Ragukumar

Characterization of Semi-open Subcomplexes in Abstract Cellular Complex

The concept of abstract cellular complexes was introduced by Kovalevsky (Computer Vision, Graphics and Image Processing, 46:141–161, 1989), who established that the topology of cellular complexes is the only possible topology of finite sets for describing the structure of images. Further, the topological notions of connectedness and continuity in abstract cellular complexes were introduced using the notions of an open subcomplex, a closed subcomplex, the boundary of a subcomplex, etc. In this paper, the notion of a semi-open subcomplex in an abstract cellular complex is introduced, and some of its basic properties are studied by defining the notions of semi-closure, semi-frontier, and semi-interior. Further, a homogeneously n-dimensional complex is characterized using the notion of semi-open subcomplexes. The concept of a quasi-solid subcomplex is also introduced. Finally, a new algorithm for tracing the semi-frontier of an image is presented.

N. Vijaya, G. Sai Sundara Krishnan

Fluid Queue Driven by an M/M/1 Queue Subject to Catastrophes

In this paper, we present the stationary analysis of a fluid queueing model modulated by an M/M/1 queue subject to catastrophes. The explicit expressions for the joint probability of the state of the system and the content of the buffer under steady state are obtained in terms of the modified Bessel function of the first kind using the continued fraction methodology.

K. V. Vijayashree, A. Anjuka

Fuzzy VEISV Epidemic Propagation Modeling for Network Worm Attack

An epidemic vulnerable-exposed-infectious-secured-vulnerable (VEISV) model for the fuzzy propagation of worms in a computer network is formulated. In this paper, the comparison between the classical basic reproduction number and the fuzzy basic reproduction number is analyzed. Epidemic control strategies for worms in the computer network at low, medium, and high levels are analyzed. A numerical illustration is provided to simulate and solve the set of equations.
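
A VEISV compartment model of the kind formulated above can be sketched as a set of flow equations integrated with Euler steps; the rates and the crisp (non-fuzzy) form below are illustrative assumptions, not the paper's parameters:

```python
def veisv_step(v, e, i, s, beta, alpha, delta, gamma, dt):
    """One Euler step of an illustrative VEISV compartment model:
    vulnerable -> exposed -> infectious -> secured -> vulnerable."""
    dv = -beta * v * i + gamma * s  # infection contact vs. patch decay
    de = beta * v * i - alpha * e   # exposed hosts become infectious
    di = alpha * e - delta * i      # infectious hosts get secured
    ds = delta * i - gamma * s      # secured hosts drift back to vulnerable
    return (v + dt * dv, e + dt * de, i + dt * di, s + dt * ds)

state = (0.99, 0.0, 0.01, 0.0)  # fractions of hosts in each class
for _ in range(1000):
    state = veisv_step(*state, beta=0.5, alpha=0.3, delta=0.2,
                       gamma=0.05, dt=0.1)
```

Because every outflow of one compartment is an inflow of another, the total host fraction is conserved, a basic sanity check for any such simulation.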

Muthukrishnan Senthil Kumar, C. Veeramani

Hexagonal Prusa Grammar Model for Context-Free Hexagonal Picture Languages

Prusa grammar is a recently introduced model for generating rectangular picture languages that exploits the parallel application of two-dimensional context-free rules. We introduce a hexagonal version of Prusa grammar and generate images with it. We compare this model with other hexagonal array generating devices to describe its generative power.

T. Kamaraj, D. G. Thomas

Iso-Triangular Pictures and Recognizability

Tiling systems generating two-dimensional languages have been studied. In this paper, variants of tiling systems that generate hexagonal pictures are extended to iso-triangular pictures drawn on a triangular grid. Local and recognizable iso-triangular picture languages are introduced, and an approach to recognizing them by an iso-triangular tiling system and automata is discussed.

V. Devi Rajaselvi, T. Kalyani, D. G. Thomas

Job Block Scheduling with Dual Criteria and Sequence-Dependent Setup Time Involving Transportation Times

A two-machine flowshop scheduling problem with sequence-dependent setup times (SDST), a job block, and transportation times is considered, with the objective of minimizing the makespan and the rental cost of machines taken on rent under a specified rental policy. The processing times of jobs on these machines are associated with probabilities. A heuristic algorithm is developed to find an optimal or near-optimal solution for these objectives, and its efficiency is tested through a numerical illustration.
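For the plain two-machine flowshop (without the setup-time, job-block, and rental extensions studied in the paper), the classical makespan-minimizing baseline is Johnson's rule, sketched below for orientation; this is not the paper's heuristic.

```python
def johnson_sequence(jobs):
    """Johnson's rule for the two-machine flowshop makespan problem.
    jobs: list of (name, time_on_M1, time_on_M2).
    Jobs with smaller M1 time go to the front, others to the back."""
    front, back = [], []
    for name, a, b in sorted(jobs, key=lambda j: min(j[1], j[2])):
        if a <= b:
            front.append(name)       # schedule as early as possible
        else:
            back.insert(0, name)     # schedule as late as possible
    return front + back
```

Sorting by the smaller of the two processing times and splitting on which machine attains it yields an optimal sequence for this restricted setting.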

Deepak Gupta, Kewal Krishan Nailwal, Sameer Sharma

Modular Chromatic Number of C_m □ C_n

A modular k-coloring, k ≥ 2, of a graph G is a coloring of the vertices of G with the elements in Z_k having the property that for every two adjacent vertices of G, the sums of the colors of their neighbors are different in Z_k. The minimum k for which G has a modular k-coloring is the modular chromatic number of G. In this paper, except for some special cases, the modular chromatic number of C_m □ C_n is determined.
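The defining condition can be checked mechanically. A minimal illustrative sketch (not from the paper) that verifies whether a given vertex coloring is a modular k-coloring of a graph supplied as an adjacency list:

```python
def is_modular_coloring(adj, color, k):
    """True iff for every edge (u, v) the neighbor-color sums of u and v
    differ modulo k. adj: dict vertex -> list of neighbors."""
    sigma = {v: sum(color[w] for w in adj[v]) % k for v in adj}
    return all(sigma[u] != sigma[v] for u in adj for v in adj[u])
```

For the 4-cycle C_4, coloring one vertex 1 and the rest 0 is a modular 2-coloring, while the all-zero coloring is not.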

N. Paramaguru, R. Sampathkumar

Network Analysis of Biserial Queues Linked with a Flowshop Scheduling System

This paper attempts to establish a linkage between a network of queues, consisting of two parallel biserial servers connected with a common server in series, and a multistage flowshop scheduling system having n machines. In the queue network, both the arrival and service patterns follow the Poisson law. The objective is to develop an algorithm minimizing the total elapsed time, with minimum completion time, average waiting time, and minimum queue length, for the proposed queuing–scheduling linkage model. The efficiency of the proposed algorithm is tested by a numerical illustration.

Seema Sharma, Deepak Gupta, Sameer Sharma

Solving Multi-Objective Linear Fractional Programming Problem - First-Order Taylor's Series Approximation Approach

In this paper, a method is proposed for solving the multi-objective linear fractional programming (MOLFP) problem. The MOLFP problem is first transformed into an equivalent multi-objective linear programming (MOLP) problem. Using a first-order Taylor's series approximation, the problem is then reduced to a single-objective linear programming (LP) problem, whose solution yields the solution of the original MOLFP problem. The proposed procedure is verified against existing methods through numerical examples.
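The key step is linearizing a fractional objective around an expansion point. A minimal sketch (illustrative; function and parameter names are not from the paper) for one objective f(x) = (c·x + alpha)/(d·x + beta):

```python
def linearize_fractional(c, alpha, d, beta, x0):
    """First-order Taylor approximation of f(x) = (c.x + alpha)/(d.x + beta)
    around x0. Returns (gradient, constant) of the linear surrogate
    g(x) = gradient.x + constant, with g(x0) = f(x0)."""
    num = sum(ci * xi for ci, xi in zip(c, x0)) + alpha
    den = sum(di * xi for di, xi in zip(d, x0)) + beta
    f0 = num / den
    # quotient rule: df/dx_i = (c_i*den - num*d_i) / den^2
    grad = [(ci * den - num * di) / den ** 2 for ci, di in zip(c, d)]
    const = f0 - sum(g * xi for g, xi in zip(grad, x0))
    return grad, const
```

The linear surrogate agrees with f at x0 and stays close nearby, which is what lets the MOLFP problem be replaced by an LP.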

C. Veeramani, M. Sumathi

Voronoi Diagram-Based Geometric Approach for Social Network Analysis

Social network analysis aims at analyzing relationships between social network users, in particular at community detection, that is, finding groups of closely connected people in a network. Usually, graph clustering techniques are used to identify such groups. Here, we propose a computational geometric approach to social network analysis: a Voronoi diagram-based clustering algorithm is applied to the dataset embedded in a Euclidean vector space to identify groups. A structure-preserving embedding technique is used to embed the social network dataset; it learns a low-rank kernel matrix by means of a semidefinite program with linear constraints that captures the connectivity structure of the input graph.
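Once the network is embedded in Euclidean space, Voronoi-based clustering amounts to assigning each embedded point to the cell of its nearest seed. A minimal sketch (illustrative; the paper's actual algorithm and seed-selection strategy are not specified here):

```python
def voronoi_assign(points, seeds):
    """Assign each embedded point to the Voronoi cell of its nearest seed
    (squared Euclidean distance); cell indices double as cluster labels."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return [min(range(len(seeds)), key=lambda i: d2(p, seeds[i]))
            for p in points]
```

Membership in a Voronoi cell is exactly nearest-seed membership, so no explicit cell boundaries need to be constructed for the assignment step.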

Subu Surendran, D. Chitraprasad, M. R. Kaimal

Short Papers


Comparative Analysis of Discretization Methods for Gene Selection of Breast Cancer Gene Expression Data

DNA microarrays provide an enormous amount of information about genetically conditioned susceptibility to diseases. However, their analysis is difficult because the number of genes is extremely large relative to the number of experiments, and not all genes are essential in gene expression data: some may be redundant, and others irrelevant and noisy. This paper studies gene expression data using rough set theory, an intelligent computing method. We studied and implemented the following discretization methods: rough discretization (RD), naïve Bayes, max–min, equal width intervals, k-means-based discretization, and entropy-based discretization (EBD) for gene selection using rough set quick reduct (QR) on breast cancer gene expression data. The performance of these algorithms has been evaluated using the classification tools available in the Weka software and a BPN classifier.
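The simplest of the compared methods, equal width intervals, splits the observed value range into k bins of identical width. A minimal illustrative sketch (not the paper's implementation):

```python
def equal_width_bins(values, k):
    """Equal-width interval discretization: map each continuous value to
    one of k bins of equal width spanning [min(values), max(values)]."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / k or 1.0          # avoid zero width on constant data
    return [min(int((v - lo) / width), k - 1) for v in values]
```

Each expression value is replaced by a bin index, turning continuous gene expression levels into the discrete attributes rough set reducts operate on.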

E. N. Sathishkumar, K. Thangavel, A. Nishama

Modified Soft Rough Set for Multiclass Classification

Rough set theory has been applied in several domains because of its ability to handle imperfect knowledge. Its most recent extension is the soft rough set, where parameterized subsets of a universal set are the basic building blocks for the lower and upper approximations of a subset. In this paper, a new model of soft rough sets, called the modified soft rough set (MSR), in which information granules are finer than in soft rough sets, is applied to the classification of medical data. A rough-set-based quick reduct approach is applied to select relevant features, MSR is applied to the multiclass classification problem, and the proposed work is compared with bijective soft set (BSS)-based classification, naïve Bayes, and decision table classifier algorithms on standard evaluation metrics.
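The lower and upper approximations at the heart of rough set theory are easy to state concretely. A minimal sketch for the classical case, where the granules are equivalence classes (the soft and MSR variants replace these granules with parameterized subsets):

```python
def approximations(classes, target):
    """Classical rough set approximations of `target` under a partition.
    classes: iterable of equivalence classes (granules).
    Lower approx: union of granules fully inside target.
    Upper approx: union of granules that intersect target."""
    target = set(target)
    lower, upper = set(), set()
    for c in classes:
        c = set(c)
        if c <= target:
            lower |= c
        if c & target:
            upper |= c
    return lower, upper
```

The gap between the two approximations (the boundary region) measures how imperfectly the granules describe the target concept.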

S. Senthilkumar, H. Hannah Inbarani, S. Udhayakumar

A Novel Genetic Algorithm for Effective Job Scheduling in Grid Environment

A grid is a set of resources such as CPU, memory, disk, applications, and databases distributed over wide area networks that supports large-scale distributed applications. Resources in a grid are geographically distributed and linked through the Internet to create a virtual supercomputer with vast computing capacity for solving complex problems. Scheduling, resource brokering, and load balancing are essential functionalities of a grid environment. Evolutionary algorithms (EA) operate on a population of potential solutions, applying the principle of survival of the fittest. Genetic algorithms belong to the larger class of EA and generate solutions to optimization problems using techniques inspired by natural evolution, such as inheritance, mutation, selection, and crossover. This paper proposes a genetic-algorithm-based scheduling technique to schedule jobs effectively in a grid. The proposed algorithm is tested with preemptive job requests of different sizes, and analysis of the results shows a significant improvement in scheduling performance.
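The GA ingredients named above (selection, crossover, mutation) can be sketched for a toy version of the problem: assigning jobs to identical machines to minimize makespan. This is an illustrative baseline, not the paper's algorithm, and all parameter values are arbitrary.

```python
import random

def makespan(assignment, jobs, machines):
    """Makespan when jobs[i] runs on machine assignment[i]."""
    load = [0.0] * machines
    for m, t in zip(assignment, jobs):
        load[m] += t
    return max(load)

def ga_schedule(jobs, machines, pop=30, gens=100, seed=0):
    """Minimal GA: tournament selection, one-point crossover,
    point mutation; minimizes makespan."""
    rng = random.Random(seed)
    n = len(jobs)
    popn = [[rng.randrange(machines) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        def pick():  # binary tournament: keep the fitter of two
            a, b = rng.sample(popn, 2)
            return a if makespan(a, jobs, machines) <= makespan(b, jobs, machines) else b
        nxt = []
        for _ in range(pop):
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n)
            child = p1[:cut] + p2[cut:]          # one-point crossover
            if rng.random() < 0.2:               # point mutation
                child[rng.randrange(n)] = rng.randrange(machines)
            nxt.append(child)
        popn = nxt
    return min(popn, key=lambda a: makespan(a, jobs, machines))
```

Real grid schedulers would replace the fitness function with richer objectives (queue lengths, data locality, rental cost), but the evolutionary loop keeps the same shape.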

P. Deepan Babu, T. Amudha

A Multifactor Biometric Authentication for the Cloud

The omnipresence of high-speed Internet has paved a promising future for cloud computing. Cloud computing technology provides software, platform, and infrastructure as a service to clients, which ultimately leads to the transmission of clients' confidential data in the cloud. This motivates a secure biometric authentication framework for the cloud. This paper proposes a multifactor biometric authentication system for the cloud computing environment. The biometric features adopted here are palm vein and fingerprint. The idea is to handle the biometric data securely by storing the palm vein data in multicomponent smart cards and the fingerprint data in the central database of the cloud security server. To enhance security, part of the biometric matching process is performed on the card with Match-on-Card technology, so that the data never leave the smart card.

Shabana Ziyad, A. Kannammal

Accelerating the Performance of DES on GPU and a Visualization Tool in Microsoft Excel Spreadsheet

Graphics processing units (GPU) have attained greater prominence owing to their computational efficiency and flexibility compared to classical CPU systems. By utilizing the parallel execution capability of a GPU, complex computations that burden traditional CPU systems can be handled effectively. In this work, we exploit the parallel structure of the GPU and provide an improved parallel implementation of the data encryption standard (DES), one of the well-known symmetric key cryptosystems. We also developed a visualization tool for DES in a Microsoft Excel spreadsheet, which helps students clearly understand the primitive operations that constitute the DES cryptosystem. The main objective of this work is to investigate the strength of the parallel implementation on the basis of execution time on GPU as well as on CPU systems.
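Among the primitive operations the visualization tool would illustrate, bit permutations are the most characteristic of DES. A minimal sketch of applying a DES-style permutation table; the 4-bit table below is a toy placeholder, not the real DES initial permutation:

```python
def permute_bits(block, table, width):
    """Apply a DES-style bit permutation to an integer block.
    Output bit i is input bit table[i], with positions 1-indexed from
    the most significant bit, as in the DES permutation tables."""
    out = 0
    for pos in table:
        out = (out << 1) | ((block >> (width - pos)) & 1)
    return out
```

In DES the same routine is applied with the standard 64-entry IP, PC-1, PC-2, and P tables; on a GPU, many blocks are permuted in parallel, one thread per block.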

Pinchu Prabha, O. K. Sikha, M. Suchithra, K. P. Soman

