2016 | Book

Proceedings of Fifth International Conference on Soft Computing for Problem Solving

SocProS 2015, Volume 1

Edited by: Millie Pant, Kusum Deep, Jagdish Chand Bansal, Atulya Nagar, Kedar Nath Das

Publisher: Springer Singapore

Book Series: Advances in Intelligent Systems and Computing


About this Book

The proceedings of SocProS 2015 serve as a valuable reference for scientists and researchers working in the field of Soft Computing. This book covers theoretical as well as practical aspects of fuzzy logic, neural networks, evolutionary algorithms, swarm intelligence algorithms, etc., with many applications under the umbrella of ‘Soft Computing’. The book will benefit young as well as experienced researchers working on complex, intricate real-world problems that are difficult to solve by traditional methods.

The application areas covered in the proceedings include: Image Processing, Cryptanalysis, Industrial Optimization, Supply Chain Management, Newly Proposed Nature-Inspired Algorithms, Signal Processing, Medical and Health Care Problems, and Networking Optimization Problems.

Table of Contents

Frontmatter
Optimization of Nonlocal Means Filtering Technique for Denoising Magnetic Resonance Images: A Review

Magnetic resonance images are affected by noise of various types, which hinders accurate diagnosis. Noise reduction therefore remains an important and difficult task in MRI. The objective of image denoising is to effectively reduce unwanted noise while retaining image features. Many techniques have been proposed for denoising MR images, each with its own advantages and drawbacks. Nonlocal means (NLM) is a popular denoising algorithm for MR images, but it cannot be applied in its original form to all applications. The goal of this paper is to present the various optimization techniques for the NLM filtering approach to reduce the noise present in MRIs. The original NLM filter, along with its various refinements and mathematical models, is included.
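The core NLM idea referred to above, replacing each pixel by a similarity-weighted average of pixels whose surrounding patches look alike, can be sketched as follows. The patch size, search window, and smoothing parameter h are illustrative choices, not the paper's settings.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=10.0):
    """Minimal nonlocal means: each pixel becomes a weighted average of
    pixels in a search window, weighted by patch similarity; h controls
    filter strength."""
    pad = search // 2          # half-width of the search window
    pr = patch // 2            # half-width of a comparison patch
    padded = np.pad(img.astype(float), pad + pr, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + pad + pr, j + pad + pr
            ref = padded[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            weights, values = [], []
            for di in range(-pad, pad + 1):
                for dj in range(-pad, pad + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                    d2 = np.mean((ref - cand) ** 2)   # patch distance
                    weights.append(np.exp(-d2 / (h * h)))
                    values.append(padded[ni, nj])
            out[i, j] = np.dot(weights, values) / np.sum(weights)
    return out
```

On real MRI data the Rician noise bias is usually corrected first, and optimized variants restrict or pre-select the search window, which is exactly the kind of refinement this paper surveys.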

Nikita Joshi, Sarika Jain
A Production Model with Stock-Dependent Demand, Partial Backlogging, Weibull Distribution Deterioration, and Customer Returns

In this paper, we derive an economic production model with two-parameter Weibull distribution deterioration. The model considers a demand rate that depends on price and stock, and indirectly on time. Shortages are allowed and partially backlogged. We assume that customer returns are a function of quantity sold, price, and inventory level. The time horizon is finite, and production also depends on demand. The goal of the model is to maximize the profit function. An illustrative example, sensitivity analysis, and a graphical representation demonstrate the usefulness of this model.

Chaman Singh, Kamna Sharma, S. R. Singh
Chemo-inspired Genetic Algorithm and Application to Model Order Reduction Problem

During the past three decades, evolutionary computing techniques have grown manifold in tackling all sorts of optimization problems. The genetic algorithm (GA) is one of the most popular EAs because it is easy to implement and performs well in noisy environments. Similarly, among swarm intelligence techniques, bacterial foraging optimization (BFO) is a recent, popular algorithm used in many practical applications. Depending on the complexity of the problem, there is a need for hybrid techniques that balance exploration and exploitation over the search space, and many such techniques have been developed recently. This paper proposes a hybridization of GA and BFO to solve a real-life unconstrained electrical engineering problem: the model order reduction (MOR) of a linear time-invariant continuous single-input single-output (SISO) system.

Rajashree Mishra, Kedar Nath Das
Grammatical Evolution Using Fireworks Algorithm

Grammatical Evolution automatically generates computer programs in any arbitrary language using the Backus-Naur Form of a context-free grammar. A variable-length genetic algorithm is used as the learning algorithm in Grammatical Evolution. The fireworks algorithm is a recently developed swarm intelligence algorithm for function optimization. This paper proposes the Grammatical Fireworks algorithm, which uses the fireworks algorithm in place of the variable-length genetic algorithm to evolve computer programs automatically. The Grammatical Fireworks algorithm is applied to three well-known benchmark problems: the Santa Fe ant trail, symbolic regression, and the 3-input multiplexer. A comparative study is made with Grammatical Evolution, Grammatical Swarm, Grammatical Artificial Bee Colony, and Grammatical Differential Evolution. The experimental results demonstrate that the proposed Grammatical Fireworks algorithm can be applied to automatic computer program generation.
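The genotype-phenotype mapping at the heart of Grammatical Evolution (which the proposed variant keeps, changing only how the codon strings are searched) selects each production by taking a codon modulo the number of available choices. A minimal sketch with a hypothetical toy grammar, not the paper's benchmark grammars:

```python
# Hypothetical toy grammar for illustration only:
#   <expr> ::= <expr> <op> <expr> | x | 1
#   <op>   ::= + | *
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["x"], ["1"]],
    "<op>":   [["+"], ["*"]],
}

def ge_map(codons, start="<expr>", max_wraps=2):
    """Standard GE mapping: expand the leftmost nonterminal, choosing the
    production indexed by codon % (number of choices); wrap around the
    codon string if it runs out, up to max_wraps times."""
    seq = [start]
    i, wraps = 0, 0
    while any(s in GRAMMAR for s in seq):
        if i >= len(codons):          # ran out of codons: wrap
            i, wraps = 0, wraps + 1
            if wraps > max_wraps:
                return None           # mapping failed (invalid individual)
        for k, sym in enumerate(seq):
            if sym in GRAMMAR:        # leftmost nonterminal
                choices = GRAMMAR[sym]
                seq[k:k + 1] = choices[codons[i] % len(choices)]
                i += 1
                break
    return " ".join(seq)
```

For example, `ge_map([0, 1, 0, 2])` derives the expression `x + 1` under this toy grammar.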

Tapas Si
Gaussian Function-Based Particle Swarm Optimization

This paper presents the Gaussian function-based particle swarm optimization (PSO) algorithm. In canonical PSO, potential solutions, called particles, are randomly initialized at the start. The proposed method instead initializes the particles with solutions from another evolutionary computation technique, the genetic algorithm (GA), to provide feasible starting solutions. The method also replaces the random component of the PSO velocity update equation with a Gaussian membership function. The Gaussian function-based PSO is applied to eight benchmark optimization functions, and the results show that the proposed method achieves the same quality of solution in significantly fewer fitness evaluations, making PSO a more efficient optimizer.
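A hedged sketch of the velocity-update idea: the usual uniform random coefficients are replaced by a Gaussian membership of the particle's coordinate relative to its attractors. The membership width and the plain random initialization (standing in for the paper's GA-seeded population) are assumptions for illustration.

```python
import math, random

def gaussian(x, mean, sigma):
    """Gaussian membership function, peaking at 1 when x == mean."""
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

def pso_gauss(f, dim, n=20, iters=200, w=0.7, c1=1.5, c2=1.5,
              lo=-5, hi=5, seed=1):
    """PSO where the random factors in the velocity update are replaced by
    Gaussian memberships of the distance to pbest/gbest (illustrative)."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                      # personal bests
    pf = [f(x) for x in X]
    g = P[min(range(n), key=lambda i: pf[i])][:]
    gf = min(pf)
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1 = gaussian(X[i][d], P[i][d], 1.0)  # replaces rand()
                r2 = gaussian(X[i][d], g[d], 1.0)     # replaces rand()
                V[i][d] = (w * V[i][d] + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (g[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pf[i]:
                pf[i], P[i] = fx, X[i][:]
                if fx < gf:
                    gf, g = fx, X[i][:]
    return g, gf
```

In the paper's scheme the initial `X` would come from a GA run rather than uniform sampling; that step is omitted here.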

Priyadarshini Rai, Madasu Hanmandlu
Reinforcing Particle System Effects Using Object-Oriented Approach and Real-Time Fluid Dynamics for Next-Generation 3D Gaming

The focus of this paper is the use of fluid dynamics and an object-oriented approach in particle subsystems for interactive real-time visualization in 3D gaming. By exploiting fluid dynamics in the particle subsystem of a gaming engine, we create an environment in which an artificial object can be mapped to a real-world entity closely enough to add realism to the virtual gaming environment. Particle integration and particle rendering play an important part here, and graphics accelerators and graphics processing units greatly speed up the calculation of particle coordinates and the subsequent rendering. Current graphics processors, though fast, fail to give the exact trajectory of particles when collision, sliding, outburst, explosion, or stabbing of one character by another occurs. They cannot trace the exact path followed by particles when forces are applied to a solid or liquid body; the emerging particles are random and bear no correlation to reality, as they follow arbitrary paths. All of this must be considered to enhance user interaction in 3D gaming and enrich the real-time virtual gaming experience. Many particle subsystem tools can create extraordinary particle effects in gaming, but they all lack the ability to give direction to the particles. In this paper, we therefore create a particle subsystem that is not only stable but also follows the laws of physics, so that each particle in the gaming environment can be advanced with random time steps. The principal motivation behind the paper is to examine the flow of particles when they are governed by the laws of fluid dynamics, and to calculate the rendering complexity, which makes large-scale implementation on a conventional graphics processing unit impossible without quantum technology.

Rajesh Prasad Singh, Rashmi Dubey, Sugandha Agarwal
Searchless Fractal Image Compression Using Fast DCT and Real DCT

The growing need for pictorial data in the information era makes image storage and transmission very expensive. Fast algorithms that compress visual information without degrading quality are of utmost importance. This paper proposes new methods to reduce the encoding time of searchless fractal image compression in the DCT domain by curtailing the computational complexity of the discrete cosine transform (DCT) equations. Fast DCT and real DCT are the techniques employed to increase the performance of searchless DCT-domain compression. The fast DCT (FDCT) uses the fast Fourier transform (FFT) to compute the DCT efficiently, while the real DCT performs only real calculations and omits the imaginary part of the DCT computation. The proposed methods compute the DCT faster while preserving image quality as far as possible. Experimental results demonstrate the effectiveness of the proposed methods on grayscale images.
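An FFT-based DCT of the kind described is usually realized with Makhoul's reordering trick: one same-length FFT plus a complex twiddle yields the DCT-II, instead of the O(N²) cosine sums. A sketch for 1-D signals (the paper's exact variant and normalization may differ):

```python
import numpy as np

def dct2_fft(x):
    """DCT-II of a 1-D signal via a single same-length FFT (Makhoul's
    reordering: even-index samples, then odd-index samples reversed)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    v = np.concatenate([x[0::2], x[1::2][::-1]])
    V = np.fft.fft(v)
    k = np.arange(N)
    # Twiddle factor turns the DFT of the reordered signal into the DCT.
    return 2.0 * np.real(np.exp(-1j * np.pi * k / (2 * N)) * V)

def dct2_direct(x):
    """Reference O(N^2) DCT-II (unnormalized) for checking the fast path."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    n = np.arange(N)
    return np.array([2.0 * np.sum(x * np.cos(np.pi * k * (2 * n + 1) / (2 * N)))
                     for k in range(N)])
```

A 2-D image DCT is then obtained by applying the 1-D transform along rows and columns.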

Preedhi Garg, Richa Gupta, Rajesh K. Tyagi
Study of Neighborhood Search-Based Fractal Image Encoding

Fractal image encoding is a well-known lossy encoding technique that achieves a high compression ratio, high peak signal-to-noise ratio (PSNR), and good quality of the encoded image. Fractal image compression exploits the self-similarity present in natural images through a similarity measure. Its main drawback is the significant time spent searching for an appropriate domain block for each range block of the image. Much research has been carried out to overcome this limitation of fractal encoding and to speed up the encoder. Initially, various classification and partitioning schemes were used to reduce the search space. A remarkable improvement came with neighborhood region search strategies, which classify the image blocks on the basis of feature vectors to restrict the region for the best matching domain-range pair, reducing the search complexity to logarithmic time. In this paper, three image block preprocessing approaches using the neighborhood search method are explained in different domains and compared on the basis of their simulation results.

Indu Aggarwal, Richa Gupta
An Optimized Color Image Watermarking Technique Using Differential Evolution and SVD–DWT Domain

We present a new image watermarking technique for color images that achieves a tradeoff between imperceptibility, robustness, and security. The technique is based on the discrete wavelet transform (DWT) and singular value decomposition (SVD), and the advantages of both are exploited to achieve greater robustness. Differential evolution (DE) is used to find optimal scaling factors that scale down the watermark and increase the visual quality and robustness of the watermarked image. The color cover image is split into three channels using the RGB color space. To improve security, the watermark is scrambled before embedding using random bit-plane selection and an XOR operation. Experimental results show that the watermarked image is imperceptible and resistant to various geometric and image-processing attacks.

Priyanka, Sushila Maheshkar
One Day Ahead Forecast of Pan Evaporation at Pali Using Genetic Programming

Forecasting pan evaporation is important for water resources management. The evaporation process is highly nonlinear and complex, so hydrologists seek alternatives to traditional deterministic and conceptual models that forecast pan evaporation with relative simplicity and accuracy. The present work uses genetic programming (GP) and model trees (MT) to forecast pan evaporation one day ahead at Pali in the Raigad district of Maharashtra, India. Daily minimum and maximum humidity, minimum and maximum temperature, wind speed, pan water temperature, and sunshine were the seven input parameters. Both models performed well on two years of data. The superiority of the GP model is demonstrated by the correlation coefficient between forecasted and observed pan evaporation (r = 0.97), low error (MSRE = 0.012 mm/day), and a high index of agreement (d = 0.98). These models can be useful for hydrologists and farm water managers.

Narhari Dattatraya Chaudhari, Neha Narhari Chaudhari
Optimized Scenario of Temperature Forecasting using SOA and Soft Computing Techniques

Weather forecasting at a given instant of time and location is a challenging activity, as weather data are continuous, highly intensive, multidimensional, and dynamic in nature. This paper presents an approach for forecasting maximum temperature over a given period of time using service-oriented architecture (SOA) and soft computing techniques. SOA is used for collecting data for a particular location following its principles of reusability, interoperability, and composability. The large number of attributes in the weather dataset gathered via SOA can be curtailed using a soft computing technique, rough set theory (RST), which finds the relevant attributes and eliminates the irrelevant ones. The remaining attributes are used to forecast temperature with an artificial neural network (ANN). RST improves the performance of the ANN both computationally and in accuracy.

Amar Nath, Rajdeep Niyogi, Santanu Kumar Rath
Software Reliability Prediction Using Machine Learning Techniques

Software reliability is an indispensable part of software quality, and the software industry faces various challenges in developing highly reliable software. The application of machine learning (ML) techniques to software reliability prediction has shown remarkable results. In this paper, we propose the use of ML techniques for software reliability prediction and evaluate them on selected performance criteria. We applied ML techniques including the adaptive neuro-fuzzy inference system (ANFIS), feed-forward backpropagation neural network (FFBPNN), general regression neural network (GRNN), support vector machines (SVM), multilayer perceptron (MLP), bagging, cascade-forward backpropagation neural network (CFBPNN), instance-based learning (IBk), linear regression (Lin Reg), M5P, reduced error pruning tree (REPTree), and M5Rules to predict software reliability on various datasets chosen from industrial software. The experiments show that ANFIS predicts reliability more accurately and precisely than all the other techniques. We also compared cumulative failure data with inter-failure time data and found that cumulative failure data give more promising results.

Arunima Jaiswal, Ruchika Malhotra
A Genetic Algorithm Based Scheduling Algorithm for Grid Computing Environments

A grid computing environment is a parallel and distributed environment in which various computing capabilities are brought together to solve large-scale computational problems. Task scheduling is a crucial issue in grid computing environments, so it needs to be addressed efficiently to minimize the overall execution time. Directed acyclic graphs (DAGs) can be used as task graphs to be scheduled on grid computing systems. This study presents a genetic algorithm for efficient scheduling of task graphs represented by DAGs on grid systems. The proposed algorithm is implemented and evaluated on five real datasets taken from the literature. The results show that the proposed algorithm outperforms other popular algorithms in a number of scenarios.

Poonam Panwar, Shivani Sachdeva, Satish Rana
Effect of Imperfect Debugging on Prediction of Remaining Faults in Software

Software reliability growth models (SRGMs) have been used in the literature to estimate the remaining faults in software based on failure data collected during its testing phase. Most SRGMs assume perfect debugging. In reality, this assumption may not be reasonable, because imperfect debugging can occur during software development, so it is interesting to study its effect on the prediction of remaining faults. In this paper, an approach to estimate remaining faults in software using perfect and imperfect SRGMs is proposed. The approach is applied to five distinct real datasets to demonstrate how well these SRGMs predict the expected total number of faults under perfect and imperfect debugging.
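For context, a classic perfect-debugging SRGM such as the Goel-Okumoto model, and a common imperfect-debugging extension with fault-introduction rate α, can be written as below. The abstract does not name the paper's specific models, so these are illustrative forms only.

```python
import math

def go_mean_value(t, a, b):
    """Goel-Okumoto SRGM (perfect debugging): expected faults detected by
    time t, with a the total fault content and b the detection rate."""
    return a * (1 - math.exp(-b * t))

def imperfect_mean_value(t, a, b, alpha):
    """A common imperfect-debugging extension in which new faults are
    introduced at rate alpha per detected fault (illustrative form)."""
    return (a / (1 - alpha)) * (1 - math.exp(-b * (1 - alpha) * t))

def remaining_faults(t, a, b, alpha=0.0):
    """Remaining faults = eventual fault content minus faults detected."""
    if alpha == 0.0:
        return a - go_mean_value(t, a, b)
    return a / (1 - alpha) - imperfect_mean_value(t, a, b, alpha)
```

Under imperfect debugging the eventual fault content a/(1 − α) exceeds a, which is why the two assumptions yield different remaining-fault predictions.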

Poonam Panwar, Ravneet Kaur
Multiple Document Summarization Using Text-Based Keyword Extraction

The main focus of this paper is a comparison between the proposed methodology, keyword-based text extraction that uses threading and synchronization to process multiple input files as a batch, and previously used technologies for text extraction from research papers. Keyword-based summarization selects important sentences from the actual text. Text summarization produces a condensed form of a document, whether a PDF, DOC, or TXT file, that preserves the complete information and meaningful text, for both single and multiple input files. It is not an easy task for a human being to maintain summaries of a large number of documents. Various text summarization and text extraction techniques are explained in this paper. Our proposed technique creates the summary by extracting sentences from the original document using font information and a keyword extractor.

Deepak Motwani, A. S. Saxena
Implementation of the Principle of Jamming for Hulk Gripper Remotely Controlled by Raspberry Pi

The Hulk Gripper is constructed by replacing the fingers of a robotic hand with a mass filled with granular material, e.g., ground coffee. This mass applies pressure on the article, so the gripper adapts to the surface and envelops it. Using a vacuum pump attached at the other end, air is pumped out, causing the granules to contract and harden. A volume change of approximately 0.5 has been found to be adequate to grab objects reliably and lift them with a very large force. The ability of the granules to jam against each other under vacuum and unjam when air is readmitted is the principle of operation. The grip is based on three mechanisms, friction, suction, and interlocking, which together contribute to the holding force; this makes it possible to lift heavy objects, opening new design prospects with quick gripping of complicated objects. Our gripper is controlled remotely: a mobile Android application sends commands via Wi-Fi or Bluetooth to the microcontroller, which controls the movement of the gripper and turns the vacuum pump on and off. Existing grippers require a number of small and large joints to be controlled individually in order to lift objects of different sizes, shapes, and delicacies, whereas our gripper uses a single point of contact to form the grip and do its task.

Seema Rawat, Praveen Kumar, Geetika Jain
Differential Evolution: An Overview

Differential evolution (DE) is one of the most influential optimization algorithms to date. DE works through computational steps analogous to those of a standard evolutionary algorithm. Unlike traditional evolutionary algorithms, however, the DE variants perturb the current-generation population members with the scaled differences of randomly selected, distinct population members, so no separate probability distribution has to be used for producing the offspring. Since its inception in 1995, DE has drawn the interest of numerous researchers around the globe, resulting in many variants of the basic algorithm with enhanced performance. This paper presents a comprehensive review of the basic concepts of DE and a survey of its key variants and the academic studies carried out on DE so far.
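The perturbation described, scaled differences of randomly chosen distinct members, is the classic DE/rand/1/bin scheme. A compact sketch follows; the control parameters F, CR, and the population size are typical textbook values, not prescriptions from this survey.

```python
import random

def de_optimize(f, bounds, np_=20, F=0.8, CR=0.9, gens=200, seed=3):
    """DE/rand/1/bin: each trial vector perturbs a random base vector with
    the scaled difference of two other distinct members, then binomial
    crossover mixes it with the target; greedy selection keeps the better."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            r1, r2, r3 = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(dim)      # guarantees one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(hi, max(lo, v)))  # clamp to bounds
            ft = f(trial)
            if ft <= fit[i]:                # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(np_), key=lambda i: fit[i])
    return pop[best], fit[best]
```

The many DE variants the review covers differ mainly in the mutation strategy (e.g., best/1, current-to-best/1) and in how F and CR are adapted.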

Amritpal Singh, Sushil Kumar
Multi-objective Parametric Query Optimization for Distributed Database Systems

Classical query optimization compares solutions on a single cost metric and cannot handle multiple costs. A multi-objective parametric query optimization (MPQO) approach can optimize over multiple cost metrics and query parameters. This paper demonstrates an MPQO approach for advanced database systems such as distributed database systems (DDBS). Query-equivalent plans are compared according to multiple cost metrics and query-related parameters (modeled by a function on the metrics); cost metrics and query parameters are semantically different and are computed at different stages of optimization. MPQO also generalizes parametric optimization by catering for multiple metrics in query optimization. The performance of MPQO variants based on nature-inspired optimization, the multi-objective genetic algorithm and the parameter-less teaching-learning-based optimization, is also analyzed. MPQO builds a parametric space of query plans and progressively explores the multi-objective space according to user tradeoffs on query metrics. In a heterogeneous and distributed database system, logically unified data is replicated and distributed across multiple sites to achieve a highly reliable and available data system, which imposes a challenge on the evaluation of the Pareto set. MPQO exhaustively determines the optimal query plans at each end of the parametric space.

Vikram Singh
Energy Efficient Routing Protocol for MANET Using Vague Set

In recent years, the use of mobile ad hoc networks (MANETs) has grown quickly. All the nodes of the network communicate directly with each other to share information within range. The network is dynamic and infrastructure-less, so its topology can change frequently. MANET nodes are powered by limited-capacity batteries, so nodes sometimes cannot successfully transmit data packets from the source node to the destination node. In this chapter, we propose a new energy efficient routing protocol for MANET using vague sets. The main aim of the proposed protocol is to choose an energy efficient route that reduces the energy expenditure of the MANET based on the vague set scheme. The scheme uses interval-based membership, where each parameter of energy efficient routing (i.e., energy and distance) is characterized by true and false membership functions; this helps determine the energy efficient route. The proposed protocol is simulated in NS2 and compared with the existing AODV protocol; it improves MANET performance in terms of throughput, average end-to-end delay, packet delivery ratio, and packet loss.
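A vague set characterizes each element by a true membership t and a false membership f with t + f ≤ 1, i.e., an interval [t, 1 − f]. How this paper combines the energy and distance intervals into a route preference is not given in the abstract; the midpoint-averaging below is a hypothetical illustration of the idea.

```python
def vague_value(true_m, false_m):
    """Vague-set element: the membership interval [t, 1 - f], t + f <= 1."""
    assert 0 <= true_m and 0 <= false_m and true_m + false_m <= 1
    return (true_m, 1 - false_m)

def route_score(energy_t, energy_f, dist_t, dist_f):
    """Hypothetical route score: midpoint of each vague interval, averaged
    over the two routing parameters (energy and distance). Higher is a
    more preferable route."""
    e_lo, e_hi = vague_value(energy_t, energy_f)
    d_lo, d_hi = vague_value(dist_t, dist_f)
    return ((e_lo + e_hi) / 2 + (d_lo + d_hi) / 2) / 2
```

For instance, a route with energy memberships (t = 0.6, f = 0.2) and distance memberships (t = 0.8, f = 0.1) scores 0.775 under this illustrative rule.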

Santosh Kumar Das, Sachin Tripathi
Data Storage Security in Cloud Paradigm

The advent of social networking has given rise to huge data processing in terms of image and video streams, which in turn has increased the use of cloud computing services. Secure data storage and access are the main challenges in the cloud scenario. This paper reports a novel method for multimedia data security in the cloud paradigm. The proposed method watermarks and compresses the data before its storage in the cloud. This approach not only safeguards the stored data but also reduces the storage requirement and allied monetary overheads. The simulation results show 7 % CPU and 36 % memory utilization, which rules out any additional hardware requirement for the proposed module in the cloud paradigm.

Prachi Deshpande, S. C. Sharma, Sateesh K. Peddoju
Axisymmetric Vibrations of Variable Thickness Functionally Graded Clamped Circular Plate

The axisymmetric vibrations of a functionally graded clamped circular plate have been analysed on the basis of classical plate theory. The material properties, i.e. Young’s modulus and density, vary continuously through the thickness of the plate and obey a power-law distribution of the volume fraction of the constituent materials. A semi-analytical technique, the differential transform method, has been employed to solve the governing equation of motion. The effects of the plate parameters, i.e. the volume fraction index g and taper parameter γ, have been studied for the first three modes of vibration. Three-dimensional mode shapes for the first three modes are presented, and a comparison with results available in the literature is given.

Neha Ahlawat, Roshan Lal
Performance Evaluation of Geometric-Based Hybrid Approach for Facial Feature Localization

Nowadays, facial recognition technology (FRT) is in focus because of its various applications in security and non-security domains. It provides a secure solution for identification and verification of a person’s identity. Accurate localization of facial features plays a significant role in many facial analysis applications, including biometrics and emotion recognition, yet several factors make it a challenging problem; facial expression is one of the most influential. This paper proposes a new geometric-based hybrid technique for automatic localization of facial features in frontal and near-frontal neutral and expressive face images. A graphical user interface (GUI) is designed that automatically localizes 16 landmark points around the eyes, nose, and mouth, the regions most affected by changes in the facial muscles. The proposed system has been tested on the widely used JAFFE and Bosphorus databases, as well as the DeitY-TU face database, and evaluated in terms of error measures and accuracy. The detection rate of the proposed method is 96.03 % on the JAFFE database, 94.06 % on the DeitY-TU database, and 94.21 % on the Bosphorus database.

Sourav Dey Roy, Priya Saha, Mrinal Kanti Bhowmik, Debanjana Debnath
Optimal Land Allocation in Agricultural Production Planning Using Fuzzy Goal Programming

Agricultural production depends on several imprecise factors, and therefore the parameters used to define fuzzy goals in an agricultural production system should be imprecise rather than crisp. Thus, for modeling such systems, we take the coefficients defining a fuzzy goal as fuzzy numbers rather than crisp ones. In this paper, we deal with the agricultural production planning problem undertaken by Ghosh et al. (Opsearch 30(1):15–34, 1993) in the more realistic case of fuzzy inequalities with fuzzy coefficients. We transform the problem into a fuzzy goal programming problem and use the triangular possibility distribution to obtain a solution. The results are compared with the existing ones to show the superiority of the approach.
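The triangular possibility distribution used for the fuzzy coefficients is the standard triangular fuzzy number. A minimal sketch of its membership function, where a, b, c are the left endpoint, peak, and right endpoint:

```python
def tri_membership(x, a, b, c):
    """Membership of x in a triangular fuzzy number (a, b, c):
    0 outside [a, c], rising linearly to 1 at the peak b, then falling."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)
```

In fuzzy goal programming, each imprecise coefficient is represented by such a triple, and the achievement of each goal is graded by this membership rather than by a crisp yes/no constraint.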

Babita Mishra, S. R. Singh
Quantitative Estimation for Impact of Genomic Features Responsible for 5′ and 3′ UTR Formation in Human Genome

UnTranslated Regions (UTRs) are the parts of messenger ribonucleic acid (mRNA) that do not undergo protein translation but play an important role in translation control. Various genomic and non-genomic features are responsible for controlling translation. We have attempted to find the genomic features, and their information content, that contribute to the length of UTRs. As the length of a UTR increases, the translation process becomes slower, resulting in less protein output. The results reveal that as the lengths of the 5′ UTR and 3′ UTR increase, the information content of the sequence also increases, but it stabilizes for longer UTRs; likewise, the entropy of the information increases with UTR length before becoming stable. Trimeric features carry more information content than dimeric features. As the 5′ UTR length increases, the GC content decreases while the AT content increases; the opposite holds in 3′ UTRs. Some genomic features like CG, TAA, CGT, CGC, CCG, CGG, and ACG have correlation <0.70, whereas features like CT, TC, AC, CA, GT, GA, ACT, CAT, CTT, TCA, and TGA have correlation >0.90.
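The information content of dimeric and trimeric features can be measured, for example, as the Shannon entropy of the k-mer distribution of a sequence. Whether the study uses exactly this formulation is an assumption; the sketch below illustrates the kind of computation involved, together with GC content.

```python
from math import log2
from collections import Counter

def kmer_entropy(seq, k=3):
    """Shannon entropy (bits) of the k-mer distribution of a sequence,
    a rough measure of its 'information content' (k=2 for dimers,
    k=3 for trimers)."""
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    total = len(kmers)
    counts = Counter(kmers)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def gc_content(seq):
    """Fraction of G and C bases in the sequence."""
    return (seq.count("G") + seq.count("C")) / len(seq)
```

A repetitive UTR like "AAAA…" has entropy 0, while a maximally mixed one approaches log2 of the number of distinct k-mers, matching the observation that entropy grows with length and then saturates.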

Shailesh Kumar, Sumita Kachhwaha, S. L. Kothari
Comparison of Multilayer Perceptron (MLP) and Support Vector Machine (SVM) in Predicting Green Pellet Characteristics of Manganese Concentrate

A huge portion of available minerals and materials is in the form of fine powder, which makes their management and utilization a tedious job. Pelletization, a size enlargement technique, is used to tackle the aforementioned problems and consists of two subprocesses: wet (green) pelletization and induration. Green pelletization is highly sensitive to the slightest variation in operating conditions; as a result, identifying the impact of varying parameters on the behaviour of the process is a challenging task. In this paper, we employ MLP and SVM, two soft computing methods, to demonstrate their applicability in predicting pellet characteristics. The scarcity of training data is addressed by employing a genetic algorithm. Results demonstrate the better accuracy of MLP over SVM in forecasting green pellet attributes.

Mohammad Nadeem, Haider Banka, R. Venugopal
Audio Pattern Recognition and Mood Detection System

Music has been an integral part of our society since time immemorial and is a subtle display of a person’s emotions. Although the way music is composed and heard has greatly evolved over the decades, what has remained constant is the entwined relationship it shares with mood: the kind of music one listens to is governed largely by one’s mood at that instant. This paper proposes an automated and efficient method of classifying music on the basis of the mood it depicts, by extracting suitable features that show significant variation across songs. A database of 300 popular Bollywood songs was considered, from which timbral and temporal features were extracted to classify songs into four moods: happy, sad, relaxed, and romantic. 200 songs were used to train the model using a multilayer perceptron with the backpropagation algorithm. The model exhibited an accuracy of 75 % when tested on a set of 100 songs.

Priyanka Tyagi, Abhishek Mehrotra, Shanu Sharma, Sushil Kumar
Sumdoc: A Unified Approach for Automatic Text Summarization

In this paper, we focus on the task of automatic text summarization. Much work has already been carried out in this field, though most of it concerns extractive summaries. We have developed a tool that summarizes a given text using several NLP features and machine learning techniques, and we show how WordNet can be used to obtain abstractive summarization. Our approach first extracts sentences from the given text using a ranking algorithm that scores each sentence on many features, comprising some classical features as well as some novel ones. Then, after extracting candidate sentences, we examine some of the words and phrases and transform them into simple substitutes, making the final summary the product of a hybrid summarization technique.

Mudasir Mohd, Muzaffar Bashir Shah, Shabir Ahmad Bhat, Ummer Bashir Kawa, Hilal Ahmad Khanday, Abid Hussain Wani, Mohsin Altaf Wani, Rana Hashmy
Improvising and Optimizing Resource Utilization in Big Data Processing

This paper aims to improve and optimize big data processing in cloud computing. A homogeneous cluster setup supports only static processing, which is a major obstacle to optimizing response time towards clients, since dynamic allocation of resources is not possible in such an environment. Big data processing is a frequent event on today’s Internet, and the proposed framework improves response time while ensuring that each user’s entire requirement is fulfilled in optimal time. To achieve the utmost client satisfaction, the host server should be upgraded with the latest technology, and the homogeneous cluster setup usually encountered in parallel data processing should be eliminated. This improves overall resource utilization and, consequently, reduces processing cost.

Praveen Kumar, Vijay Singh Rathore
Region-Based Prediction and Quality Measurements for Medical Image Compression

This paper presents a prediction-based compression algorithm for medical images containing a region of interest. Medical images on the whole consume a lot of memory, which makes them difficult to store and transmit. For a medical image in which only a particular part is needed for diagnosis, the important decision is whether to use block compression or region-based compression. Region-based compression plays a vital role here, since a particular region alone can be preserved while the other regions are compressed in a lossy way. Such methods are of great interest in tele-radiology applications with large storage requirements. The quality of compression can be measured by capturing the size of the selected regions, and a new method for calculating the total compression ratio and total bits per pixel is proposed for such selective image compression algorithms. Since the selected area of the medical image is compressed losslessly, the performance of the proposed system is compared with other lossless compression algorithms; the results show comparatively good performance.
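
For such selective schemes, combined figures of merit can be computed by pooling the ROI and background bitstreams. The sketch below is illustrative only, with a hypothetical image size and bit counts, not the paper's proposed formulas:

```python
def roi_compression_metrics(orig_bits, roi_bits, bg_bits, num_pixels):
    """Total compression ratio and total bits per pixel when the ROI is coded
    losslessly and the background lossily; inputs are compressed bitstream sizes."""
    total = roi_bits + bg_bits
    cr = orig_bits / total        # total compression ratio
    bpp = total / num_pixels      # total bits per pixel
    return cr, bpp

# Hypothetical 256 x 256, 8-bit scan: 40 kbit lossless ROI + 25 kbit lossy rest.
cr, bpp = roi_compression_metrics(256 * 256 * 8, 40_000, 25_000, 256 * 256)
```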

P. Eben Sophia, J. Anitha
Recent Advancements in Energy Efficient Routing in Wireless Sensor Networks: A Survey

Wireless sensor networks have opened a new realm in wireless transmission technology. Their applications have diversified over the years and now cover various sophisticated areas, including military applications, surveillance, agriculture, and monitoring and control. This paper covers various recent developments in energy-aware routing techniques that minimize energy consumption and extend the lifetime of wireless sensor networks.

Gaurav Kumar Pandey, Amritpal Singh
A Novel Approach for Market Prediction Using Differential Evolution and Genetic Algorithm

A novel approach is proposed for market analysis by optimizing customer reviews using the differential evolution algorithm. The approach is further compared with the genetic algorithm for improved analysis of the results. The customer reviews are analyzed in terms of their hidden sentiments, and these sentiments form the basis for recommending a product in comparison to other products’ reviews. Differential evolution and the genetic algorithm provide the advantage of optimized SentiWord analysis, enabling a more efficient product recommendation in terms of that product’s reviews.

Apoorva Gupta, Manoj Kumar, Sushil Kumar
A Novel Approach for Actuation of Robotic Arm Using EEG and Video Processing

In today’s fascinating world of technology, advances are being made in many fields of science, but the field showing the most rapid growth in modern times is the brain–computer interface (BCI). One extremely effective tool for this purpose is the Emotiv EPOC headset. The present research focuses on creating a novel BCI that uses the Emotiv EPOC system to measure EEG waves and consequently control a robot. Experiments were performed on 30 different subjects, and the obtained results were analyzed to confirm that the data can be used for the actuation and control of numerous actuators. The paper also presents how video processing can be used to control the robotic arm; the use of video processing gives a new dimension to the variety of applications. A further objective is to provide a low-cost brain-controlled robotic arm.

Saurin Sheth, Saurabh Saboo, Harsh Dhanesha, Love Rajai, Prakash Dholariya, Parth Jetani
A Survey: Artificial Neural Network for Character Recognition

Due to advancements in technology, many recognition tasks have been automated. Optical Character Recognition (OCR) aims to convert images of handwritten or printed text into a format that a machine can understand and process: a digital image containing machine-printed or handwritten text is fed into software and translated into a machine-readable digital format. For the recognition to be precise, various properties are calculated, on the basis of which characters are classified and recognized. Character recognition has been an attractive area for researchers using Artificial Intelligence; recognition is easy for humans, but what about machines? The open issue is to recognize documents in both printed and handwritten form, and character recognition is widely used for authentication of persons as well as documents. A neural network can be designed after the way in which the brain performs a particular task or function of interest. In this paper we present a survey of how effectively an Artificial Neural Network can be utilized for the character recognition process.

Mrudang D. Pandya, R. Patel Jay
Crosstalk Noise Voltage Analysis in Global Interconnects

The rapid growth of VLSI technology is largely due to the continuous reduction in device feature size. The work of an interconnect is to distribute data signals and to make power or ground available to and among the different circuit functions on a chip. With device scaling, challenges such as crosstalk, coupling, and reduced noise margins have emerged, and interconnect scaling has become one of the performance-limiting factors for new VLSI designs. As process technology scaling advances, the spacing between adjacent interconnect wires keeps shrinking, which increases the coupling capacitance between them. Hence, coupling noise has become an important effect that must be taken into account while performing timing verification for VLSI chips. The crosstalk generated by switching signals induces noise onto nearby lines, which can further deteriorate signal integrity and reduce noise margins. These aspects of crosstalk make system performance dependent on data patterns, switching rates, and line-to-line spacing.

Purushottam Kumawat, Gaurav Soni
A Comparative Analysis of Copper and Carbon Nanotubes-Based Global Interconnects in 32 nm Technology

With today’s rapid technological advancements and their ubiquitous use, speed and size have become important aspects of VLSI interconnect design. As technology shifts to the deep submicron level, device channel lengths decrease to tens of nanometers; hence, the die size and device density of circuits increase rapidly, creating the need for long interconnects in VLSI chips. Long interconnects increase the propagation delay of the signal, and in deep submicron VLSI technologies it has become increasingly difficult for conventional copper-based electrical interconnects to satisfy the design requirements of delay, power, and bandwidth. A promising candidate to solve this problem is the carbon nanotube (CNT). In this paper, the prospects of CNTs as global interconnects for future VLSI circuits are examined. Due to their high thermal conductivity and large current-carrying capacity, CNTs are favored over copper for future VLSI interconnects. The energy, power, propagation delay, and bandwidth of CNT bundle interconnects are examined and compared with those of Cu interconnects at the 32-nm technology node for two global interconnect lengths. The simulation was carried out using the HSPICE circuit simulator with a transmission line model at 200 and 1000 μm lengths. The results show that the power consumption and energy of CNT-based interconnects are reduced by 66.49 and 66.86 %, respectively, at 200 μm in comparison with Cu-based interconnects; at 1000 μm, reductions of 43.90 and 44.04 % are observed in power consumption and energy, respectively. Furthermore, the propagation delay is reduced by approximately 61.17 % for 200 μm and 69.13 % for 1000 μm, while the bandwidth increases by up to 90 %. This work suggests single-wall carbon nanotube (SWCNT) bundle interconnects for global interconnects in VLSI design, as they consume less energy and are faster than conventional copper wires.

Arti Joshi, Gaurav Soni
Comparative Analysis of Si-MOSFET and CNFET-Based 28T Full Adder

In this paper, a 28T CNFET-based full adder circuit is proposed. With the increase in the number of transistors and in speed per unit chip area, the power consumption of VLSI circuits has also increased; power has become an extremely important design constraint, along with area and speed, in modern VLSI design. Carbon nanotubes, with their superior properties, high current drivability, and high thermal conductivity, have therefore emerged as a potential alternative to CMOS technology. In this paper, the average power consumption, energy, and delay of Si-MOSFET and CNFET-based full adders are analyzed. The simulation was carried out using the HSPICE circuit simulator. The results show that the power consumption, energy, and PDP of the CNFET-based full adder are reduced by 56, 54.74, and 59 %, respectively, in comparison to the Si-MOSFET-based full adder. Moreover, the delay is reduced by approximately 8.69 % for the sum output and 8.63 % for the carry output.

Rishika Sethi, Gaurav Soni
Cuckoo Search-Based Scale Value Optimization for Enhancement in Retinal Image Registration

Retinal image registration plays a significant role in various medical applications such as diabetic retinopathy, glaucoma, and many other retinal diagnosis applications, and contrast enhancement plays a vital role in disease identification. In this paper, we propose an enhancement method for intensity-based retinal image registration. In our approach, the simulated images are blurred using a Gaussian filter, and the scale value for the transformation is optimized using the cuckoo search algorithm. The resulting enhanced images show better values of PSNR (peak signal-to-noise ratio) and RMSE (root mean square error), which ultimately yields higher-quality retinal image registration.
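
The two quality measures named above are standard and can be sketched directly; the helper names and the tiny four-pixel example below are illustrative, not the authors' code:

```python
import math

def rmse(img_a, img_b):
    """Root mean square error between two equally sized images (flat pixel lists)."""
    n = len(img_a)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / n)

def psnr(img_a, img_b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    e = rmse(img_a, img_b)
    return float("inf") if e == 0 else 20 * math.log10(peak / e)

ref = [100, 120, 130, 140]
enh = [102, 118, 131, 139]
```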

Ebenezer Daniel, J. Anitha
Optimal Demand-Side Bidding Using Evolutionary Algorithm in Deregulated Environment

This paper presents an efficient optimization technique for minimizing the fuel cost and losses of an electrical system in a completely deregulated power system. Both single-side and double-side bidding are considered, using sequential quadratic programming (SQP) and evolutionary algorithms, namely the firefly algorithm (FA) and the cuckoo search algorithm (CSA), to check the effectiveness of the presented approach. Modified IEEE 14-bus and modified IEEE 30-bus test systems are used for validating and analyzing the impact of the proposed approach.

Subhojit Dawn, Sadhan Gope, Prashant Kumar Tiwari, Arup Kumar Goswami
Optimized Point Robot Path Planning in Cluttered Environment Using GA

In this paper, optimized path planning for a mobile robot using a genetic algorithm is analyzed. A hybrid method based on visible midpoints and a genetic algorithm is implemented for finding the optimal shortest path for a mobile robot. The combination of the two algorithms provides a better solution in terms of the shortest and safest path: the visible-midpoint approach efficiently avoids local minima and generates paths that always lie on free trajectories, while the genetic algorithm optimizes the path and provides the shortest route from source to destination.

Motahar Reza, Saroj K. Satapathy, Subhashree Pattnaik, Deepak R. Panda
ITMS (Intelligent Traffic Management System)

In the present work, an ITMS (Intelligent Traffic Management System) is used for managing and controlling traffic lights based on photoelectric sensors placed on one side of the road, with suitable spacing between sensors selected by the traffic control authority. As a result, the traffic control authority can supervise vehicles running in a particular traffic direction and manage the transmission of information signals to microcontrollers fixed to the traffic control cabinet. An Arduino microcontroller manages the traffic signals using the information sent by the infrared sensors. In case of emergency, the system can pass ministerial vehicles, ambulances, and fire brigade vehicles that require urgent clearance from the traffic signal system, using RFID-based technology.

Rahul Kumar, Kunal Gupta
Intelligent Parking Management System Using RFID

In India, the parking management systems we have are manually controlled, which causes long queues for parking and road traffic from vehicles searching for a free parking slot. We therefore need a system that saves people’s time and reduces vehicle emissions. This can be achieved by using RFID technology in the parking management system: with RFID we can automate parking management and reduce the long queues of parking vehicles. In this paper we implement RFID technology in a parking management system using an Arduino for the connections and Visual Studio with SQL Server Management Studio for the database.

Priyanka Singh, Kunal Gupta
Performance Analysis of DE over K-Means Proposed Model of Soft Computing

Data in the real world grows continually, and such huge amounts of data are called big data, a well-known term used to describe the exponential growth of data in both structured and unstructured formats. Data analysis is a process of cleaning and transforming data, learning valuable statistics, making decisions, and advising on assumptions with the help of many algorithms and procedures, such as classification and clustering. In this paper we discuss big data analysis using soft computing techniques, propose how to pair two different approaches, an evolutionary algorithm and a machine learning approach, and try to find the better one.
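
The evolutionary half of the proposed pairing, differential evolution, follows the well-known DE/rand/1/bin scheme; a self-contained sketch (function names, the toy sphere objective, and parameter values are illustrative assumptions) is:

```python
import random

def de_step(pop, f, F=0.8, CR=0.9):
    """One DE/rand/1/bin generation: mutation, binomial crossover,
    and greedy selection (minimization)."""
    dim = len(pop[0])
    out = []
    for i, x in enumerate(pop):
        # Three distinct individuals other than x drive the mutation.
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        mutant = [a[k] + F * (b[k] - c[k]) for k in range(dim)]
        j_rand = random.randrange(dim)   # guarantee at least one mutant gene
        trial = [mutant[k] if (random.random() < CR or k == j_rand) else x[k]
                 for k in range(dim)]
        out.append(trial if f(trial) <= f(x) else x)   # greedy replacement
    return out

sphere = lambda v: sum(t * t for t in v)   # toy objective to minimize
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(20)]
init_best = min(sphere(p) for p in pop)
for _ in range(100):
    pop = de_step(pop, sphere)
best = min(pop, key=sphere)
```

Because selection is greedy per individual, the best fitness in the population can never get worse from one generation to the next.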

Kapil Patidar, Manoj Kumar, Sushil Kumar
Fuzzy Controller for Flying Capacitor Multicell Inverter

This article analyzes the flying capacitor multicell inverter at different levels and shows that the THD decreases as the number of levels increases. The flying capacitor multicell inverter has the property of natural balancing of the capacitors between each cell, and this natural balancing allows this type of converter to have a higher number of levels. In this article, five-, seven-, and nine-level flying capacitor multicell inverters are analyzed. The PD-PWM technique is used to control the RMS voltage at the output. To control the RMS output voltage of these converters, suitable fuzzy controllers are designed, and the controlled outputs are verified by keeping the reference voltage constant for some time and then varying it. The waveforms from this verification are provided for validation.

P. Ponnambalam, M. Praveen Kumar, V. Surendar, G. Gokulakrishnan
Influence of Double Dispersion on Non-Darcy Free Convective Magnetohydrodynamic Flow of Casson Fluid

A numerical study on the unsteady, MHD, chemically reacting, free convective, and non-Darcy flow of a Casson fluid over a vertically placed cone is presented. The flow regime is influenced by the double dispersion effect. The Crank–Nicolson technique is employed to solve the coupled nonlinear partial differential equations. Graphical results are obtained for the various controlling parameters present in the governing equations and are very useful for analyzing their influence on the Casson fluid flow. The average skin friction, heat transfer coefficient, and mass transfer coefficient for sundry parameters are presented in tables. Results indicate that enhancing the Casson fluid parameter tends to decelerate the fluid flow by increasing the plastic dynamic viscosity, whereas it enhances the shear stress in the flow regime. The double dispersion effects play a vital role in sensitively controlling energy consumption and species concentration in a small region near the cone and plate.

A. Jasmine Benazir, R. Sivaraj
JUPred_SVM: Prediction of Phosphorylation Sites Using a Consensus of SVM Classifiers

One of the most important types of posttranslational modification is phosphorylation, which helps regulate almost all activities of the cell. Phosphorylation is the addition of a phosphate group to a protein after translation. In this paper, we use evolutionary information extracted from position-specific scoring matrices (PSSM) as features for prediction, with a support vector machine (SVM) as the machine learning tool. The system was tested with an independent set of 141 proteins, on which it achieved the highest AUC score of 0.7327; additionally, it attained the best results for 34 proteins in terms of AUC.

Sagnik Banerjee, Debjyoti Ghosh, Subhadip Basu, Mita Nasipuri
Fuzzy Logic-Based Gait Phase Detection Using Passive Markers

With the advancement of technology, gait analysis plays a vital role in sports science, rehabilitation, geriatric care, and medical diagnostics, and identification of the accurate gait phase is of paramount importance. The objective of this paper is to put forward a novel passive marker-based optical approach that automatically recognizes gait subphases using fuzzy logic from hip and knee angle parameters extracted at the RAMAN lab at MNIT, Jaipur. In addition to the stance and swing phases, the approach detects all the subphases: initial swing, mid swing, terminal swing, loading response, mid stance, terminal stance, and preswing. The prototype provides effective and accurate gait phase detection that could be used for understanding patients’ gait pathology and in control strategies for active lower-extremity prosthetics and orthotics. It is an automated, easy-to-use, and very cost-efficient yet reliable model.

Chandra Prakash, Kanika Gupta, Rajesh Kumar, Namita Mittal
Extraction of Retinal Blood Vessels and Optic Disk for Eye Disease Classification

The retina is the only part of the human body from which blood vessel information can be clearly obtained. Information about the blood vessels in the retina plays an important role in the detection and efficient treatment of diseases such as glaucoma, macular degeneration, degenerative myopia, and diabetic retinopathy; the structure of the retinal vessels is a significant way to predict the presence of eye diseases such as hypertension, diabetic retinopathy, glaucoma, hemorrhages, retinal vein occlusion, and neovascularization. Ophthalmologists find manual marking difficult when the diameters and turns of the retinal blood vessels or the shape of the optic disk are complicated, or when a huge number of eye images must be marked by hand, all of which eventually leads to error. Therefore, an automated method for retinal blood vessel extraction and optic disk segmentation that preserves various vessel and optic disk characteristics is presented in this work; such methods are attractive in computer-based diagnosis. We implement a new, competent method for detecting diseases from retinal fundus images. The first step is the extraction of the retinal vessels by a graph-cut technique; the vessel information is then used to approximate the position of the optic disk. These results are given to an ANN classifier for the detection and classification of diseases. By automatically distinguishing diseased images from normal ones, the workload and its costs are reduced.

V. K. Jestin, Rahul R. Nair
Improved Local Search in Shuffled Frog Leaping Algorithm

The shuffled frog-leaping algorithm (SFLA) is a comparatively recent addition to the family of nontraditional population-based search methods that mimic the social and natural behavior of species (frogs). SFLA merges the advantages of particle swarm optimization (PSO) and the genetic algorithm (GA). Though SFLA has been successfully applied to many benchmark and real-world problems, its convergence speed is limited. To improve its performance, the frog with the best position in each memeplex is allowed to slightly modify its position using a random walk, which improves the local search around the best position. The proposal is named improved local search in SFLA (ILS-SFLA). For validation, three engineering optimization problems are taken from the literature, and the simulated results demonstrate the efficacy of the proposal when compared with state-of-the-art algorithms.
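
The random-walk refinement of each memeplex's best frog can be sketched as follows; the step size, number of trials, and bounds are illustrative assumptions, not values from the paper:

```python
import random

def refine_best(best, f, step=0.1, trials=10, bounds=(-5.0, 5.0)):
    """Random-walk local search around the memeplex-best frog (minimization):
    try small uniform perturbations and keep any that improves the fitness."""
    lo, hi = bounds
    pos, fit = list(best), f(best)
    for _ in range(trials):
        # Perturb each coordinate slightly, clipping to the search bounds.
        cand = [min(hi, max(lo, x + random.uniform(-step, step))) for x in pos]
        cfit = f(cand)
        if cfit < fit:          # accept only improvements
            pos, fit = cand, cfit
    return pos

sphere = lambda v: sum(t * t for t in v)   # toy objective
start = [0.4, -0.3]
better = refine_best(start, sphere)
```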

Tarun Kumar Sharma, Millie Pant
Shuffled Frog Leaping Algorithm with Adaptive Exploration

The shuffled frog leaping algorithm (SFLA) is a nature-inspired memetic stochastic search method that has been gaining the attention of researchers since it was introduced. SFLA has the limitations that its convergence speed decreases in the later stages of execution and that it tends to get stuck in local extrema. To overcome these limitations, this paper first proposes a variant in which a few new random frogs are generated and the worst-performing frogs in the population are replaced by them. Experimental results show that a high number of replaced frogs does not always provide better results: as the execution progresses, the optimal number of replaced frogs decreases. Based on these observations, the paper then proposes another variant in which the number of replaced frogs adapts to the stage of the execution and hence provides the best results regardless of the stage. Experiments are carried out on five benchmark test functions.
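
One plausible reading of the adaptive variant is a replacement count that shrinks as the run progresses; the linear schedule below is an illustrative assumption, not the authors' exact rule:

```python
def replaced_frogs(iteration, max_iter, n_max=10, n_min=1):
    """Number of worst frogs to replace with fresh random frogs, shrinking
    linearly from n_max early in the run to n_min near the end."""
    frac = iteration / max_iter
    return max(n_min, round(n_max - (n_max - n_min) * frac))

# Schedule sampled at five points of a 100-iteration run.
schedule = [replaced_frogs(t, 100) for t in (0, 25, 50, 75, 100)]
```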

Jitendra Rajpurohit, Tarun Kumar Sharma, Atulya K. Nagar
Intuitionistic Trapezoidal Fuzzy Prioritized Weighted Geometric Operator: An Algorithm for the Selection of Suitable Treatment for Lung Cancer

Lung cancer is considered the second most common cancer and a major cause of cancer deaths across the globe. Due to advancements in medical science, different types of treatments or therapies are available for the disease. Multiple attribute group decision making (MAGDM) with intuitionistic trapezoidal fuzzy (ITrF) information has wide applications in decision-making processes, especially in medical science. In this paper, we use MAGDM from a geometric point of view to select the most appropriate treatment for lung cancer from the available set of treatments, according to the attributes. Once the disease has been diagnosed, the algorithm of intuitionistic trapezoidal fuzzy prioritized weighted geometric (ITFPWG) operators can select the most suitable treatment. Finally, we demonstrate the method on a hypothetical case study.

Kumar Vijay, Arora Hari, Pal Kiran
Fuzzy Controller for Reversing Voltage Topology MLI

The reversing voltage multilevel inverter topology has recently emerged as a very important technology for medium-voltage, high-power energy control, owing to its lower EMI, the need for fewer semiconductor power devices with lower blocking voltage, a lower THD percentage in the output voltage, and less stress on insulation. This topology overcomes the disadvantages of a normal multilevel inverter, such as the increased number of components, the complex power bus structure in some topologies, and the voltage balancing problem at the neutral point. In this paper, the previously proposed reversing voltage multilevel inverter is implemented. The inverter is first simulated in MATLAB in open loop, then a PWM technique is introduced to control the output RMS voltage, and the THD of these topologies is analyzed. Closed-loop control is then implemented using fuzzy logic. The open-loop configuration of the circuit is realized in hardware and the results are analyzed.

P. Ponnambalam, B. Shyam Sekhar, M. Praveenkumar, V. Surendar, P. Ravi Teja
Image Quality Assessment-Based Approach to Estimate the Age of Pencil Sketch

In recent years, the increasing interest in the evaluation of biometric system security has led to the creation of numerous and very diverse initiatives focused on this major field of research. After a crime occurs, a skilled pencil sketch artist draws sketches based on the description of the eyewitness; the accuracy depends on the description given by the eyewitness and the skill of the artist. Once the sketch is drawn, finding its age is a challenging task. In this paper we apply image quality assessment (IQA) to find the age of a pencil sketch drawn by a skilled artist. The database considered is the FGNET pencil sketch database, which consists of 34 pencil sketches ranging from 6 to 61 years. The IQA parameters considered are peak signal-to-noise ratio (PSNR), signal-to-noise ratio (SNR), maximum difference (MD), average difference (AD), normalized absolute error (NAE), total edge difference (TED), structural similarity index (SSI), and mean square error (MSE). The significance of this analysis is that, given a pencil sketch, we can quickly and effectively estimate its age and hence help law enforcement agencies apprehend criminals within a very short time. A demo version of the code along with input pencil sketches and the output obtained can be downloaded from https://goo.gl/zYq3cI.
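
Several of the listed IQA parameters are simple pixelwise statistics; a minimal sketch (with a hypothetical four-pixel example, not sketches from the FGNET database) is:

```python
def iqa_metrics(ref, deg):
    """A few of the listed full-reference IQA measures on flat pixel lists:
    MSE, maximum difference (MD), AD (signed mean difference), and NAE."""
    n = len(ref)
    diffs = [r - d for r, d in zip(ref, deg)]
    mse = sum(e * e for e in diffs) / n                          # mean square error
    md = max(abs(e) for e in diffs)                              # maximum difference
    ad = sum(diffs) / n                                          # signed mean difference
    nae = sum(abs(e) for e in diffs) / sum(abs(r) for r in ref)  # normalized abs. error
    return {"MSE": mse, "MD": md, "AD": ad, "NAE": nae}

m = iqa_metrics([120, 130, 125, 140], [118, 133, 125, 138])
```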

Steven Lawrence Fernandes, G. Josemin Bala
Self-Similarity Descriptor and Local Descriptor-Based Composite Sketch Matching

Composite sketching belongs to forensic science, where sketches are drawn using freely available composite sketch generator tools. Compared to pencil sketches, composite sketches are more effective because they consume less time, can easily be adopted by people across different regions, and do not require a skilled artist to draw the suspects’ faces. The software tool used to generate the faces provides more features that the eyewitness can use to give a better description, which increases the clarity of the sketches; even the minute details of the eyewitness description can be captured with great accuracy, which is mostly impossible in pencil sketches. Once a composite sketch is provided, it has to be identified effectively. In this paper we analyze two state-of-the-art techniques for composite sketch image recognition: self-similarity descriptor (SSD)-based and local descriptor (LD)-based composite sketch recognition. SSD is mainly used for SSD dictionary-based feature extraction and a Gentle Boost KO classifier-based algorithm for matching composite sketches to digital face images; LD is mainly used for multiscale patch-based feature extraction and a boosting approach for matching composites with digital images. The two techniques are validated on the FACES and IdentiKit databases. From our analysis we found that the SSD descriptor works better than LD: using the SSD method we obtained a result of 51.9 for FACES (ca), greater than the 45.8 given by LD; similarly, using SSD, values of 42.6 and 45.3 were obtained for FACES (As) and IdentiKit (As), respectively, much better than the 20.2 and 33.7 obtained with the LD method.

Steven Lawrence Fernandes, G. Josemin Bala
Multi-objective Colliding Bodies Optimization

Kaveh and Mahdavi proposed a new metaheuristic method in 2014 known as colliding bodies optimization (CBO). The algorithm is based on the principle of collision between bodies, each having a specific mass and velocity; the collisions make the bodies move toward the optimum position in the search space. This paper deals with a multi-objective formulation of CBO, termed MOCBO. Simulation studies on the benchmark functions Schaffer N1, Schaffer N2, and Kursawe demonstrate the superior performance of MOCBO over multi-objective particle swarm optimization (MOPSO) and the non-dominated sorting genetic algorithm II (NSGA-II). The performance analyses are carried out for the proposed and benchmark algorithms on identical platforms using the match between the obtained and true Pareto fronts, the convergence metric, the diversity metric, and the computational efficiency achieved over fifty independent runs.

Arnapurna Panda, Sabyasachi Pani
Printed Hindi Characters Recognition Using Neural Network

An algorithm to recognize Hindi characters using the perceptron learning rule is modeled and simulated in this paper. The model maps a matrix of pixels from scanned images into characters; the perceptron learning rule is modeled on the mapping between input and output pixel matrices. The perceptron learning rule uses an iterative weight adjustment that is more powerful than other learning rules; the perceptron uses a threshold output function and the McCulloch–Pitts model of a neuron. Its iterative learning converges to the correct weight vector, i.e., the weight vector that produces the exact output value for each training input pattern. For modeling and simulation, those Hindi characters are used that resemble some numeric digits, represented as 5 × 3 matrices of pixels.
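
The perceptron learning rule described above (threshold output, weight update only on misclassification) can be sketched on 5 × 3 binary pixel patterns; the two patterns below are hypothetical stand-ins for scanned characters, not the paper's data:

```python
def train_perceptron(samples, epochs=20, lr=1.0):
    """Perceptron learning rule with a threshold output: weights are adjusted
    iteratively whenever a 15-pixel (5 x 3) pattern is misclassified."""
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:              # target is +1 or -1
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if out != target:                  # update only on mistakes
                w = [wi + lr * target * xi for wi, xi in zip(w, x)]
                b += lr * target
    return w, b

# Two hypothetical 5 x 3 binary pixel patterns (row-major, 15 pixels each).
pat_a = [1,1,1, 1,0,1, 1,1,1, 1,0,1, 1,1,1]
pat_b = [1,0,0, 1,0,0, 1,0,0, 1,0,0, 1,1,1]
w, b = train_perceptron([(pat_a, 1), (pat_b, -1)])
```

Since the two patterns are linearly separable, the rule converges to a weight vector that classifies both correctly.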

Vaibhav Gupta, Sunita
Rendering Rigid Shadows Using Hybrid Algorithm

We present a precise and efficient hybrid algorithm for rendering rigid shadows. Our algorithm combines the two major shadow rendering algorithms, shadow mapping and shadow volumes. In our approach, the renowned shadow map algorithm is applied first, generating shadows with aliased edges; its result is then used to identify the shadow pixels. The shadow volume algorithm is then applied, only at the shadow pixels, to generate a crisp-edged shadow of the object while minimizing the time spent rendering shadows. Identifying the shadow pixels depends on hardware functionality for which a graphics processor is required. The algorithm implementing the hybrid approach is presented in the paper along with results.

Nitin Kumar, Sugandha Agarwal, Rashmi Dubey
Analysis of Role-Based Access Control in Software-Defined Networking

The lack of interoperability of the traditional networking architecture reduces the network’s speed, reliability, and security. Software-defined networking decouples the control plane and the data plane in order to reconfigure the existing architecture. The OpenFlow protocol used for communication in software-defined networks is discussed in this paper [1, 2]. The main aim of the paper is to implement a role-based access control model in the software-defined networking environment in order to provide more security [3, 4]. The paper also outlines the reduction in packet loss and latency achieved by using it in a software-defined network environment [5, 6].

Priyanka Kamboj, Gaurav Raj
Analysis and Comparison of Regularization Techniques for Image Deblurring

Image deblurring or deconvolution problems are referred to as inverse problems, which are usually ill-posed and quite difficult to solve. These problems can be optimized by the use of advanced statistical methods, i.e., regularizers. There is, however, a lack of comparisons between the advanced techniques developed so far. This paper focuses on the comparison of two algorithms: the augmented Lagrangian method for total variation regularization (ALTV) and the primal-dual projected gradient (PDPG) algorithm for Beltrami regularization. It is shown that the primal-dual projected gradient Beltrami regularization technique produces superior image quality, though it requires relatively longer execution time.

Deepa Saini, Manoj Purohit, Manvendra Singh, Sudhir Khare, Brajesh Kumar Kaushik
An Approach to Solve Multi-objective Linear Fractional Programming Problem

In this paper, a hybrid approach is presented to derive Pareto optimal solutions of a multi-objective linear fractional programming problem (MOLFPP). Taylor series approximation, along with a hybrid technique comprising both the weighting and ε-constraint methods, is applied to solve the MOLFPP. It maintains both the priority and the achievement of possible aspired values of the objectives by the decision maker (DM) while producing Pareto optimal solutions. An illustrative numerical example is discussed to demonstrate the proposed method and, to justify its effectiveness, the results obtained are compared with the existing fuzzy max–min operator method.

Suvasis Nayak, A. K. Ojha
A Fuzzy AHP Approach for Calculating the Weights of Disassembly Line Balancing Criteria

Disassembly of outdated and previously used products takes place in the fields of remanufacturing, recycling, reuse, and disposal. Disassembly lines have become the first choice for disassembling products that have already been consumed. A disassembly line should be designed and balanced properly so that it works as efficiently as possible. There are many different criteria in disassembly lines for selecting the parts that are to be removed, and the disassembly line balancing problem is based on these criteria. In this paper, the weights of these criteria have been evaluated. A fuzzy analytical hierarchy process (fuzzy AHP)-based approach has been applied to calculate the weight of each criterion. With the help of these weights, the tasks can be assigned to workstations under different precedence constraints and cycle time limits.
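The fuzzy AHP weight calculation described above can be sketched in Python. This sketch assumes Buckley's fuzzy geometric mean method with triangular fuzzy numbers and centroid defuzzification; the 3-criteria comparison matrix is a hypothetical example, not the paper's data:

```python
import math

def fuzzy_ahp_weights(matrix):
    """Crisp criteria weights from a triangular-fuzzy pairwise comparison matrix.

    matrix[i][j] is a triangular fuzzy number (l, m, u).  Buckley's method:
    fuzzy geometric mean per row, centroid defuzzification, normalization.
    """
    n = len(matrix)
    geo = []
    for row in matrix:
        # fuzzy geometric mean of the row, computed component-wise
        l = math.prod(t[0] for t in row) ** (1 / n)
        m = math.prod(t[1] for t in row) ** (1 / n)
        u = math.prod(t[2] for t in row) ** (1 / n)
        geo.append((l, m, u))
    # centroid defuzzification of each fuzzy weight
    crisp = [(l + m + u) / 3 for l, m, u in geo]
    total = sum(crisp)
    return [c / total for c in crisp]

# Hypothetical 3-criteria comparison (l, m, u); the diagonal is exactly (1, 1, 1)
one = (1, 1, 1)
M = [
    [one,             (2, 3, 4),       (4, 5, 6)],
    [(1/4, 1/3, 1/2), one,             (1, 2, 3)],
    [(1/6, 1/5, 1/4), (1/3, 1/2, 1),   one],
]
weights = fuzzy_ahp_weights(M)
```

The normalized weights can then drive the task-to-workstation assignment under the precedence and cycle-time constraints the abstract mentions.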

Shwetank Avikal, Sanjay Sharma, J. S. Kalra, Deepak Varma, Rohit Pandey
An Efficient Compression of Encrypted Images Using WDR Coding

This paper presents a novel scheme for the compression of encrypted images through which we can efficiently compress encrypted images without compromising either the compression efficiency or the security of the encrypted images. In the encryption phase, the content owner encrypts the original image using pseudorandom numbers derived from a secret key. Then, the channel provider, without knowledge of the secret key, can compress the encrypted image. For compression, the encrypted image is decomposed into subimages and each of these subimages is compressed independently using quantization and the wavelet difference reduction coding technique. The compressed data obtained from all the subimages is regarded as the compressed bit stream. At the receiver side, a reliable decompression and decryption technique is used to reconstruct the image from the compressed bit stream. To evaluate the performance, the proposed technique has been passed through a number of tests, such as compression ratio (CR) and peak signal-to-noise ratio (PSNR). All the analysis and experimental results clearly show that the proposed encryption-then-compression technique is secure and shows good compression performance. To show the efficiency of the proposed work, it is compared with a well-known scheme for compression of encrypted images, and experimental results show better compression performance with improved image quality.

Manoj Kumar, Ankita Vaish
SVD-Based Fragile Reversible Data Hiding Using DWT

In today's growing world of digital technology, access to multimedia content is very easy, and for sensitive applications such as medical imaging, military systems, and legal matters, it is essential not only to reinstate the original media without any loss of information but also to increase the content's security. Reversible data hiding is an approach to extract both the covertly embedded information and the host image. In this paper, we propose a novel hybrid reversible watermarking scheme based on DWT and SVD, providing a double layer of security by utilizing the multiresolution property of wavelets and the strong features of SVD. In the proposed scheme, the watermark is embedded into the singular values of all high-frequency subbands obtained by wavelet decomposition of the original image; at extraction time, the watermark bits are used along with the singular vectors to obtain the original image. Our scheme provides high security: even after extraction of the watermark, the original image cannot be fully recovered without knowledge of the extraction algorithm. The proposed scheme is tested on various test images, and the results obtained under different performance metrics, such as PSNR and UIQI, show its effectiveness.

Manoj Kumar, Smita Agrawal, Triloki Pant
Static Economic Dispatch Incorporating UPFC Using Artificial Bee Colony Algorithm

Static economic dispatch is a real-time problem in power system networks. Here, the real power output of each generating unit is calculated with respect to the forecasted load demand over a time horizon while satisfying the system constraints. This paper explains the impact of the unified power flow controller (UPFC) on static economic dispatch (SED) using the artificial bee colony (ABC) algorithm. The UPFC is a converter (shunt and series)-based FACTS device, which can control all the parameters in a transmission line individually or simultaneously. The ABC algorithm, which imitates the foraging behavior of honey bees, is used as the optimization tool. The impact of the UPFC in reducing generation cost and losses and in improving the voltage profile and power flow is demonstrated. The studies are carried out on an IEEE 118 bus test system and a practical South Indian 86 bus utility.

S. Sreejith, Velamuri Suresh, P. Ponnambalam
Edge Preservation Based CT Image Denoising Using Wavelet and Curvelet Transforms

Computed tomography (CT) is a well-known medical radiological tool for diagnosing the human body. Radiation dose is one of the major factors affecting the quality of CT images. A high radiation dose may improve image quality by reducing noise, but it may be harmful to patients. Due to low radiation dose, reconstructed CT images are noisy. To improve the quality of noisy CT images, a postprocessing method is proposed. The goal of the proposed scheme is to reduce the noise as much as possible while preserving the edges. The scheme is divided into two phases. In the first phase, wavelet-transform-based denoising is performed using bilateral filtering and thresholding. In the second phase, method noise thresholding based on the curvelet transform is performed on the outcome of the first phase. The proposed scheme is compared with existing methods. From experimental evaluation, it is observed that the performance of the proposed scheme is superior to existing methods in terms of visual quality, PSNR, and image quality index (IQI).

Manoj Kumar, Manoj Diwakar
Critical Analysis of Clustering Algorithms for Wireless Sensor Networks

The scientific and industrial community has paid increasing attention to wireless sensor networks (WSNs) during the past few years. WSNs are used in various critical applications such as disaster relief management, combat field reconnaissance, border protection, and security observation. In such applications a huge number of sensors are remotely deployed and work cooperatively in unattended environments. These sensor nodes form disjoint groups, and such nonoverlapping groups are known as clusters. Clustering schemes have proven effective in supporting scalability. In this paper, the authors report a detailed analysis of clustering algorithms and outline the clustering schemes in WSNs. We also make a comparative analysis of clustering algorithms on the basis of different parameters, such as cluster stability, cluster overlapping, convergence time, failure recovery, and support for node mobility. Moreover, we highlight the various issues in clustering of WSNs.

Santar Pal Singh, Kartik Bhanot, Sugam Sharma
Nelder-Mead and Non-uniform Based Self-organizing Migrating Algorithm

The self-organizing migrating algorithm (SOMA) is a novel approach capable of solving almost all types of functions. SOMA is a highly effective evolutionary optimization technique and has proved its efficiency in solving many real-life applications. This paper presents a new optimization technique, M-NM-SOMA, to solve global optimization problems. In the proposed algorithm, SOMA is hybridized with the Nelder-Mead method as a crossover operator and a non-uniform mutation operator in order to avoid premature convergence and maintain the diversity of the population. The main feature of this algorithm is that it works with a very low population size. To validate the efficiency of the proposed algorithm, it is tested on 17 benchmark test problems taken from the literature, and the obtained results are compared with those of other existing algorithms. Numerical and graphical results show that M-NM-SOMA has better global search ability and is very efficient, reliable, and accurate in comparison with other algorithms.
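The base SOMA migration that the hybrid above builds on can be sketched as follows. This is a standard AllToOne migration loop under the usual parameter choices (path length 3, step 0.11, PRT 0.1), not the paper's M-NM-SOMA; the sphere function and the small population are illustrative:

```python
import random

def soma_all_to_one(pop, f, path_length=3.0, step=0.11, prt=0.1):
    """One SOMA migration loop (AllToOne): every individual travels toward
    the current leader along a perturbed path and keeps its best position."""
    leader = min(pop, key=f)
    new_pop = []
    for x in pop:
        if x is leader:
            new_pop.append(x)
            continue
        best, best_f = x, f(x)
        t = step
        while t <= path_length:
            # PRT vector masks which dimensions actually move on this jump
            prt_vec = [1 if random.random() < prt else 0 for _ in x]
            cand = [xi + (li - xi) * t * p for xi, li, p in zip(x, leader, prt_vec)]
            fc = f(cand)
            if fc < best_f:
                best, best_f = cand, fc
            t += step
        new_pop.append(best)
    return new_pop

# Minimal usage on the sphere function, with a low population size
random.seed(1)
sphere = lambda v: sum(vi * vi for vi in v)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(6)]
for _ in range(50):
    pop = soma_all_to_one(pop, sphere)
best = min(pop, key=sphere)
```

In M-NM-SOMA a Nelder-Mead step would act as crossover and a non-uniform mutation would perturb the migrated individuals; those parts are omitted here.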

Dipti Singh, Seema Agrawal
Surrogate-Assisted Differential Evolution with an Adaptive Evolution Control Based on Feasibility to Solve Constrained Optimization Problems

This paper presents an adaptive evolution control based on the feasibility of solutions, used with the nearest-neighbor regression surrogate model, to approximate the objective function value and the sum of constraint violations when solving constrained numerical optimization problems. The search algorithm used is "differential evolution with combined variants" (DECV), and the constraint-handling technique adopted is the set of feasibility rules. The approach is compared against a state-of-the-art algorithm that also employs the same surrogate model with adaptive evolution control. Twenty-four well-known test problems are solved in the experiments. From the obtained results, it is found that the evolution control based on the feasibility of solutions reduces the number of evaluations of the expensive model, particularly in problems with inequality constraints.
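The feasibility rules mentioned above (Deb's rules) reduce to a simple pairwise comparison. The encoding of a solution as a dict with objective `f` and total violation `viol` is an illustrative assumption, not the paper's implementation:

```python
def feasibility_compare(a, b):
    """Deb's feasibility rules: a feasible solution beats an infeasible one;
    two feasible solutions compare by objective value; two infeasible ones
    compare by total constraint violation.  Each solution is a dict with
    keys 'f' (objective) and 'viol' (sum of violations, 0 when feasible).
    Returns the preferred solution."""
    fa, fb = a["viol"] == 0, b["viol"] == 0
    if fa and fb:
        return a if a["f"] <= b["f"] else b   # both feasible: better objective
    if fa != fb:
        return a if fa else b                 # feasible beats infeasible
    return a if a["viol"] <= b["viol"] else b # both infeasible: less violation

s1 = {"f": 3.0, "viol": 0.0}   # feasible, worse objective
s2 = {"f": 1.0, "viol": 2.5}   # infeasible, better objective
s3 = {"f": 1.5, "viol": 0.0}   # feasible, best objective
winner = feasibility_compare(feasibility_compare(s1, s2), s3)
```

In a surrogate-assisted setting, `f` and `viol` would come from the nearest-neighbor model except for solutions the evolution control routes to the expensive evaluator.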

Mariana-Edith Miranda-Varela, Efrén Mezura-Montes
PSO-TVAC-Based Economic Load Dispatch with Valve-Point Loading

In this paper, an effective and reliable variant of particle swarm optimization with time-varying acceleration coefficients (PSO-TVAC) is proposed for the economic load dispatch problem considering the valve-point loading effect. The main objective of the economic load dispatch (ELD) problem is to minimize the fuel cost by allocating the generation of the committed units subject to equality and inequality constraints. The equation of the economic dispatch objective function is modified by the addition of a new term representing the effect of valve-point loading. In exploring the use of PSO and its other variants for the economic dispatch problem, a number of research works have not considered transmission losses properly. This paper demonstrates the usefulness of the proposed PSO-TVAC algorithm in significantly reducing the fuel cost while taking into account the effect of transmission losses along with the non-convex characteristic due to valve-point loading. Results are demonstrated for three-generator and ten-generator test systems.
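A minimal sketch of PSO-TVAC: the cognitive coefficient c1 decays and the social coefficient c2 grows linearly with iterations, shifting the swarm from exploration toward exploitation. The quadratic objective stands in for a dispatch cost surface, and all parameter values are illustrative assumptions, not the paper's:

```python
import random

def tvac_coefficients(it, max_it, c1i=2.5, c1f=0.5, c2i=0.5, c2f=2.5):
    """Time-varying acceleration coefficients: c1 decays, c2 grows linearly."""
    frac = it / max_it
    return c1i + (c1f - c1i) * frac, c2i + (c2f - c2i) * frac

def pso_tvac(f, dim, n=20, max_it=200, lo=-10.0, hi=10.0, w=0.7):
    random.seed(0)
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [xi[:] for xi in x]
    pf = [f(xi) for xi in x]
    g = pbest[pf.index(min(pf))][:]
    for it in range(max_it):
        c1, c2 = tvac_coefficients(it, max_it)
        for i in range(n):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * random.random() * (pbest[i][d] - x[i][d])
                           + c2 * random.random() * (g[d] - x[i][d]))
                x[i][d] += v[i][d]
            fx = f(x[i])
            if fx < pf[i]:
                pf[i], pbest[i] = fx, x[i][:]
                if fx < f(g):
                    g = x[i][:]
    return g

# Hypothetical stand-in for a dispatch cost surface: a simple quadratic
best = pso_tvac(lambda p: sum((pi - 3) ** 2 for pi in p), dim=2)
```

A real ELD run would replace the quadratic with the valve-point fuel cost and repair particles to satisfy the power-balance and generator-limit constraints.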

Parmvir Singh Bhullar, Jaspreet Kaur Dhami
A Heuristic Based on AHP and TOPSIS for Disassembly Line Balancing

Disassembly lines have become one of the most suitable ways to disassemble large products, or small products in large quantities; for the efficient working of a disassembly line, its design and balancing are prudent. In disassembly lines, assigning tasks in an appropriate schedule is necessary for designing and balancing the line. In this paper, a heuristic based on a multi-criteria decision-making (MCDM) technique has been proposed for assigning tasks to the disassembly workstations. In the proposed heuristic, the Analytical Hierarchy Process (AHP) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) are used to prioritize tasks for assignment to workstations. The proposed heuristic has been compared to another heuristic, and it has been found to perform well and give considerably better results.

Shwetank Avikal
Use of Intuitionistic Fuzzy Time Series in Forecasting Enrollments to an Academic Institution

Fuzzy time series (FTS) forecasting models are widely applicable when the information is imprecise and vague. The concept of the fuzzy set (FS) has been generalized to the intuitionistic fuzzy set (IFS), which has proved to be a more suitable and powerful tool for dealing with real-life problems under uncertainty than FS theory. In this study, we first extend the definitions of FTS to IFSs and propose the notion of the intuitionistic FTS. The presented concept of the intuitionistic FTS is then applied to develop a forecasting model under uncertainty. It is applied to the benchmark problem of the historical enrollment data of the University of Alabama, and the obtained results are compared with those of existing methods to show its effectiveness compared to FTS.

Bhagawati Prasad Joshi, Mukesh Pandey, Sanjay Kumar
An Improved Privacy-Preserving Public Auditing for Secure Cloud Storage

Cloud computing is an Internet-based, emerging, and rapidly developing model in which clients can store their data remotely and enjoy on-demand high-quality applications and services from a shared pool of configurable computing resources, without the burden of local storage and maintenance. Hence, the correctness and security of data are a prime concern. Clients have limited physical possession of the outsourced data, so ensuring its integrity is a difficult undertaking, especially for clients with constrained computing resources. Moreover, clients should be able to use cloud storage as if it were local storage, without worrying about the need to verify its integrity. Thus, enabling public auditability for cloud storage is of critical importance, so that cloud tenants or cloud users can employ a third party auditor (TPA) to check the integrity of the stored data. To securely introduce an effective third party auditor, the auditing process should bring in no new vulnerabilities toward user data privacy and introduce no additional online burden to the user.

Mukund N. Kulkarni, Bharat A. Tidke, Rajeev Arya
A Simulation Study with Mobility Models Based on Routing Protocol

Mobility is an inherent characteristic of wireless ad hoc networks. These networks are characterized by node mobility and require no fixed infrastructure. In the past decade, a substantial amount of study has been dedicated to developing mobility models appropriate for evaluating the performance of wireless ad hoc networks. Simulation is an important mechanism for validating new concepts in wireless ad hoc networking, since a protocol must be evaluated before it can be deployed in a real-world situation. In such pre-deployment evaluation, the protocol must be tested under realistic conditions, including a representative broadcast range, limited buffer space for message storage, a suitable traffic model, and realistic node movement. Existing mobility models are distinguished from synthetic ones in terms of their experimental and statistical characteristics. The main objective of this paper is to describe different mobility models so as to devise a more suitable choice for performance evaluation of routing protocols. A comparative analysis of existing mobility models is presented over a variety of simulation parameters, such as packet delivery ratio (PDR), throughput, and average end-to-end delay.

Arvind Kumar Shukla, C. K. Jha, Rajeev Arya
Genetic-Based Weighted Aggregation Model for Optimization of Student’s Performance in Higher Education

Most real-life problems are optimization problems, where the aim is to develop a model that optimizes certain output criteria. The education domain, though a nonprofit sector, intends to optimize its functioning by adopting procedures that foster knowledge building. Increasing students' performance has always been an area of interest among education organizations. The paper exemplifies the use of a binary-encoded genetic algorithm to model students' performance in a higher education course. It assigns significance to the variables identified as responsible for affecting the performance of the students in the course under study. Adopting such knowledge-based activities may help organizations eventually establish themselves as knowledge-centric higher education organizations.

Preeti Gupta, Deepti Mehrotra, Tarun Kumar Sharma
Improved Convergence Behavior by Using Best Solutions to Enhance Harmony Search Algorithm

Harmony search is an emerging meta-heuristic optimization algorithm inspired by the music improvisation process and able to solve different optimization problems. In previous studies, harmony search was improved using information from the best solution. This increases the speed of convergence, but also increases the chance of premature convergence to a local optimum. Thus, this study uses information from the p best solutions to accelerate convergence to the optimal solution while avoiding premature convergence. Simulation results show that the proposed approach, applied to different numerical optimization problems, performs better than previous approaches.
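A sketch of the idea above: in the memory-consideration step, new harmonies draw component values only from the p best harmonies instead of the whole memory or the single best. The parameter values and the sphere objective are illustrative assumptions:

```python
import random

def harmony_search_pbest(f, dim, lo, hi, hms=20, hmcr=0.9, par=0.3,
                         bw=0.05, p=5, iters=2000, seed=0):
    """Harmony search whose memory consideration draws from the p best
    harmonies, balancing convergence speed against premature convergence."""
    rng = random.Random(seed)
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    fit = [f(h) for h in hm]
    for _ in range(iters):
        # indices of the p best harmonies currently in memory
        best_idx = sorted(range(hms), key=lambda i: fit[i])[:p]
        new = []
        for d in range(dim):
            if rng.random() < hmcr:
                xi = hm[rng.choice(best_idx)][d]       # pick from p best only
                if rng.random() < par:                  # pitch adjustment
                    xi += rng.uniform(-bw, bw) * (hi - lo)
            else:
                xi = rng.uniform(lo, hi)                # random consideration
            new.append(min(hi, max(lo, xi)))
        fn = f(new)
        worst = max(range(hms), key=lambda i: fit[i])
        if fn < fit[worst]:                             # replace worst harmony
            hm[worst], fit[worst] = new, fn
    return hm[min(range(hms), key=lambda i: fit[i])]

best = harmony_search_pbest(lambda v: sum(x * x for x in v), dim=3, lo=-5, hi=5)
```

Setting p = 1 recovers the best-only variant the abstract contrasts against; p = hms recovers classic harmony search.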

Ali Maroosi, Ravie Chandren Muniyandi
MVM: MySQL Versus MongoDB

The literature survey exhibits a lack of quality research in the field of databases when it comes to comparing real-world database systems. In this paper, we compare and contrast two open-source database management systems: MySQL, a relational database management system (RDBMS), and MongoDB, a document-oriented NoSQL database. The comparison was done on the basis of database operations such as insertion, deletion, selection, and projection. The choice of database for an application depends largely on its database operations, and we observed that for some operations and applications MySQL performed better than MongoDB, whereas in others MongoDB resulted in better performance. For the evaluation and analysis, we obtained real-time traces of a diabetic dataset comprising 100,000 records with 51 columns, tested both systems for efficiency and performance, and recorded and analyzed the execution time of each database operation.

Purva Grover, Rahul Johari
A Novel Single Band Microstrip Antenna with Hexagonal Fractal for Surveillance Radar Application

This paper develops a microstrip patch antenna with a hexagonal fractal pattern for a ground-based surveillance radar. The antenna operates in the X-band between 8 and 9.5 GHz, applicable for short-range search. The ground plane has been varied in the design to observe its effect on the gain and VSWR parameters of the antenna. The final antenna design works at 9.2 GHz (X-band range) with a return loss of −28.63 dB after various stages of slotting in the ground plane, depicting the effect of modifying ground plane parameters in the design.

Shailendra Kumar Dhakad, Neeraj Kumar, Ashwani Kr. Yadav, Shashank Verma, Karthik Ramakrishnan, Jyotbir Singh
Optimization of Hyperspectral Images and Performance Evaluation Using Effective Loss Algorithm

An effective lossy algorithm for compressing hyperspectral images using singular value decomposition (SVD) and the discrete cosine transform (DCT) is proposed. A hyperspectral image consists of a number of bands, each containing some specific information. This paper suggests a compression algorithm that processes the hyperspectral image data band by band, compressing each band using SVD and DCT. The compression performance on the resulting images is evaluated using various objective image quality metrics.

Srinivas Vadali, G. V S. R. Deekshitulu, J. V. R. Murthy
Comparative Study of Bakhshālī Square Root Method with Newton’s Iterative Method

This study compares the convergence of Newton's iterative method and the Bakhshālī square root (BSR) method. It is shown that the BSR procedure naturally leads to a superfast computation of the square root problems under study. It is concluded that, of the two methods considered, the BSR method is the more effective, as demonstrated by comparing Newton's iterative method with the BSR method on a suitable example.
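The comparison can be sketched directly. One Bakhshālī step is algebraically equivalent to two Newton steps (quartic versus quadratic convergence), so a single BSR iteration matches two Newton iterations; the example value √41 with initial guess 6 is an illustrative choice:

```python
import math

def newton_sqrt(s, x, iters):
    """Newton's iteration for sqrt(s): quadratic convergence."""
    for _ in range(iters):
        x = 0.5 * (x + s / x)
    return x

def bakhshali_sqrt(s, x, iters):
    """Bakhshali iteration for sqrt(s): quartic convergence
    (each step is equivalent to two Newton steps)."""
    for _ in range(iters):
        a = (s - x * x) / (2 * x)
        b = x + a
        x = b - a * a / (2 * b)
    return x

s, x0 = 41, 6.0   # worked example: sqrt(41) starting from 6
err_newton = abs(newton_sqrt(s, x0, 2) - math.sqrt(s))
err_bakhshali = abs(bakhshali_sqrt(s, x0, 1) - math.sqrt(s))
```

The equivalence follows because, with a = (s − x²)/(2x) and b = x + a, one has b² − s = a², so Newton's step from b gives b − a²/(2b), which is exactly the Bakhshālī update.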

Nidhi Handa, Tarun Kumar Gupta, S. L. Singh
Dynamic Resource Allocation for Multi-tier Applications in Cloud

The increasing demand for computing resources and widespread adoption of service-oriented architecture have made the cloud a new IT delivery mechanism. A number of cloud providers offer computing resources in the form of virtual machines to cloud customers based on business requirements. The load experienced by present business applications hosted in the cloud is dynamic in nature. This creates the need for a mechanism that allocates resources to applications dynamically in order to minimize performance degradation. This paper presents a mechanism that dynamically allocates resources based on the application's load, using vertical and horizontal scaling. The cloud environment is set up using the Xen cloud platform, and a multi-tier web application is deployed on virtual machines. Experimental studies conducted for various loads show that the proposed mechanism keeps the response time within the acceptable range.

Raghavendra Achar, P. Santhi Thilagam, Meghana, B. Niha Fathima Haris, Harshita Bhat, K. Ekta
Comparison of Image Restoration and Segmentation of the Image Using Neural Network

At present, almost all image restoration methods suffer from weak convergence properties, and some methods make restrictive assumptions about the point spread function (PSF). Some algorithms' dependence on the original image restricts their portability to many applications. Currently, images are restored using deblurring filters without information about the blur and its extent. In this paper, an artificial intelligence method is implemented for the restoration problem, in which images are degraded by a blur function and corrupted by random noise. The methodology uses a three-layer backpropagation network with the gradient descent rule and highly nonlinear backpropagation neurons for image restoration, achieving a high quality of restored image, fast neural computation, low complexity due to the small number of neurons used, and quick convergence without a lengthy training algorithm. The basic performance of the neural-network-based restoration, along with segmentation of the image, is evaluated.

B. Sadhana, Ramesh Sunder Nayak, B. Shilpa
Performance Evaluation of PCA and ICA Algorithm for Facial Expression Recognition Application

In everyday interaction, the face is the basic and primary focus of attention. Among human psycho-signatures, the face provides a unique identification of a person by virtue of its size, shape, and different expressions such as happiness, sadness, disgust, surprise, fear, anger, and neutrality. In human-computer interaction, facial expression recognition is an interesting and highly challenging research area. In the proposed work, principal component analysis (PCA) and independent component analysis (ICA) are used for facial expression recognition. A Euclidean distance classifier and a cosine similarity measure are used as the cost functions for testing and verification of the images. The Japanese Female Facial Expression (JAFFE) database and our own customized database are used for the analysis. The experimental results show that ICA provides improved facial expression recognition in comparison with PCA: PCA and ICA provide detection accuracies of 81.42 % and 94.28 %, respectively.
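The classification stage described above, once faces are projected onto PCA or ICA components, amounts to a nearest-neighbour decision under either measure. The 3-D feature vectors and expression labels below are hypothetical placeholders, not JAFFE data:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def euclidean_distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(test_vec, train_vecs, labels, measure="cosine"):
    """Nearest-neighbour expression classifier over projected feature
    vectors: highest cosine similarity, or lowest Euclidean distance."""
    if measure == "cosine":
        scores = [cosine_similarity(test_vec, t) for t in train_vecs]
        return labels[scores.index(max(scores))]
    scores = [euclidean_distance(test_vec, t) for t in train_vecs]
    return labels[scores.index(min(scores))]

# Hypothetical 3-D feature vectors (e.g. projections onto 3 components)
train = [[0.9, 0.1, 0.0], [0.1, 0.8, 0.2], [0.0, 0.2, 0.9]]
labels = ["happy", "sad", "surprise"]
pred = classify([0.85, 0.15, 0.05], train, labels)
```

In practice the training vectors would be the PCA or ICA projections of labelled expression images, with one template or several per expression class.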

Manasi N. Patil, Brijesh Iyer, Rajeev Arya
Modified PSO Algorithm for Controller Optimization of a PFC Ćuk Converter-Fed PMBLDCM Drive

This paper presents a modified particle swarm optimization (PSO) algorithm for the selection of controller parameters for a Ćuk converter-fed PMBLDCM drive. The main objective of the proposed algorithm for controller optimization is to achieve power factor correction (PFC) at the AC mains of the PMBLDCM drive. The PSO is modified to achieve the above objectives for a PMBLDC motor rated at 1.01 kW, 3000 rpm, 310 V, and 3.2 Nm. The complete drive is designed and modeled, and its performance is simulated in MATLAB-Simulink. The simulated results of the drive are presented to demonstrate the desired power quality at the AC mains along with the desired speed and torque for the searched values of the controller parameters.

Rinku K. Chandolia, Sanjeev Singh
Speed Controller Optimization for PMSM Drive Using PSO Algorithm

This paper presents the use of a particle swarm optimization (PSO) algorithm modified for the search of optimized gain values of the speed controller for a permanent magnet synchronous motor (PMSM) drive. The PSO is modified to generate particles across the complete search space and to handle a multiobjective problem involving both speed and torque errors as independent variables of the fitness function, so as to minimize these errors. The proposed algorithm is modeled and simulated in the MATLAB/Simulink environment. The obtained results are presented to demonstrate the effectiveness of the modified PSO algorithm for the desired speed control of the PMSM drive.

Paramjeet Singh Jamwal, Sanjeev Singh
Parallelization of Simulated Annealing Algorithm for FPGA Placement and Routing

This paper aims to parallelize the simulated annealing algorithm used for the placement of circuit elements in the logic blocks of an FPGA. It introduces the simulated annealing algorithm and the placement problem, analyzes the complexities involved, and justifies the choice of simulated annealing for placement over other algorithms. It explains the behavior of the simulated annealing algorithm using a simple example, explores parallelization techniques currently in use, such as parallel moves, area-based partitioning, and Markov chains, and suggests possible improvements using a combination of the above, using GPGPUs, and further investigation of the effects of move biasing. The VPR (versatile placement and routing) CAD tool is also introduced, and key functions related to placement are explained [1]. The use of GPGPUs to achieve the required parallelism and speedup is discussed, along with the difficulties involved in implementing it.
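The serial baseline being parallelized can be sketched as follows: swap two cell locations, evaluate the half-perimeter wirelength, accept worse placements with Boltzmann probability, and cool geometrically. The tiny netlist, grid size, and schedule parameters are illustrative assumptions, not VPR's defaults:

```python
import math, random

def sa_place(nets, n_cells, grid, iters=20000, t0=5.0, alpha=0.995, seed=0):
    """Simulated-annealing placement sketch: swap two cell locations, accept
    worse placements with probability exp(-delta/T), cool geometrically."""
    rng = random.Random(seed)
    slots = [(x, y) for x in range(grid) for y in range(grid)]
    rng.shuffle(slots)
    pos = {c: slots[c] for c in range(n_cells)}

    def wirelength():
        # half-perimeter wirelength summed over all nets
        total = 0
        for net in nets:
            xs = [pos[c][0] for c in net]
            ys = [pos[c][1] for c in net]
            total += (max(xs) - min(xs)) + (max(ys) - min(ys))
        return total

    cost, t = wirelength(), t0
    for _ in range(iters):
        a, b = rng.sample(range(n_cells), 2)
        pos[a], pos[b] = pos[b], pos[a]
        new = wirelength()
        if new <= cost or rng.random() < math.exp((cost - new) / t):
            cost = new
        else:
            pos[a], pos[b] = pos[b], pos[a]   # reject: undo the swap
        t *= alpha
    return pos, cost

# Hypothetical netlist: 8 cells, four 2-pin nets, on a 3x3 grid
nets = [(0, 1), (2, 3), (4, 5), (6, 7)]
pos, cost = sa_place(nets, n_cells=8, grid=3)
```

The parallel variants the paper surveys differ in how the inner loop is distributed: parallel moves evaluate several swaps at once, while area-based partitioning anneals disjoint regions concurrently.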

Rajesh Eswarawaka, Pavan Kumar Pagadala, B. Eswara Reddy, Tarun Rao
Review of Image Acquisition and Classification Methods on Early Detection of Skin Cancer

The word cancer is enough to send many people into a spin. However, most types of skin cancer have a very favorable prognosis; they are common and very treatable. Melanoma is the skin cancer of most concern. Minor skin cancers often appear as a spot or sore that will not heal. Melanomas may arise in a preexisting skin mole that has become darker or changed in appearance; more often they appear as a new mole or an unusual freckle. Nearly all skin cancers are related to excessive UV radiation, and the depletion of the earth's ozone layer also appears to be increasing the risk of developing skin cancer. With melanoma, family history also seems to be a factor. Detection at the melanoma in situ stage provides the highest cure rate for melanoma. The aim of this paper is to provide a summary of all the available methods and stages of melanoma identification.

M. Reshma, B. Priestly Shan
Enhancement of Mobile Ad Hoc Network Security Using Improved RSA Algorithm

The RSA algorithm is used in different communication networks to ensure data confidentiality. This paper proposes an improvement to the RSA algorithm for increased security in mobile ad hoc networks. The proposed method is best suited for small messages and is also applicable for much-increased data security in different types of networks, including next-generation networks. For large volumes of data, we have combined DES with the proposed RSA, obtaining better and more secure transmission. We have used key lengths up to 2048 bits, considering security, computing speed, and processor capability; the key length can be increased depending on the conditions.
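The RSA core being improved upon can be sketched as textbook key generation, encryption, and decryption. The tiny primes below are illustrative only; as the abstract notes, real keys run to 2048 bits, deployments add padding, and the paper's DES hybrid for bulk data is not shown here:

```python
def rsa_keygen(p, q, e=65537):
    """Textbook RSA key generation from two primes (tiny demo values only;
    real deployments use 2048-bit keys and padding such as OAEP)."""
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)          # modular inverse of e mod phi(n)
    return (e, n), (d, n)

def rsa_encrypt(m, pub):
    e, n = pub
    return pow(m, e, n)          # c = m^e mod n

def rsa_decrypt(c, priv):
    d, n = priv
    return pow(c, d, n)          # m = c^d mod n

# Hypothetical small primes for illustration only
pub, priv = rsa_keygen(61, 53, e=17)
c = rsa_encrypt(42, pub)
m = rsa_decrypt(c, priv)
```

In the RSA+DES hybrid pattern the abstract describes, RSA would encrypt only a short symmetric session key, with DES handling the bulk payload under that key.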

S. C. Dutta, Sudha Singh, D. K. Singh
Empirical Study of Grey Wolf Optimizer

In this paper, the authors empirically investigate the performance of the grey wolf optimizer (GWO). A test suite of six non-linear benchmark functions, well studied in the swarm and evolutionary optimization literature, is selected to highlight the findings. The test suite contains three unimodal and three multimodal functions. The experimental results demonstrate the advantages and weaknesses of the GWO. In the case of unimodal problems, it initially hastens towards the optimal solution but soon slows down because of a diversity problem. Similar behaviour is seen for multimodal problems, with the difference that the algorithm easily becomes stuck in local optima, loses its diversity, and makes no further progress. The reason is that it lacks information sharing in the pack. This insight led the authors to propose a modified grey wolf optimizer (MGWO). An empirical study of the proposed MGWO shows promising performance, as the obtained results are superior to the GWO for all the test functions.
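The standard GWO position update investigated above can be sketched as follows: each wolf moves to the average of three candidate points derived from the alpha, beta, and delta wolves, with the control parameter a decreasing linearly from 2 to 0. This is plain GWO, not the paper's MGWO; the sphere function and run settings are illustrative:

```python
import random

def gwo_step(pop, f, a, rng):
    """One GWO iteration: positions move toward the average of three
    candidate points derived from the alpha, beta and delta wolves."""
    ranked = sorted(pop, key=f)
    alpha, beta, delta = ranked[0], ranked[1], ranked[2]
    new_pop = []
    for x in pop:
        new = []
        for d in range(len(x)):
            cand = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(), rng.random()
                A = 2 * a * r1 - a          # |A| < 1 -> attack, |A| > 1 -> explore
                C = 2 * r2
                D = abs(C * leader[d] - x[d])
                cand.append(leader[d] - A * D)
            new.append(sum(cand) / 3)       # X1, X2, X3 averaged
        new_pop.append(new)
    return new_pop

rng = random.Random(0)
sphere = lambda v: sum(x * x for x in v)
pop = [[rng.uniform(-10, 10) for _ in range(2)] for _ in range(12)]
max_it = 100
for it in range(max_it):
    a = 2 - 2 * it / max_it                 # a decreases linearly 2 -> 0
    pop = gwo_step(pop, sphere, a, rng)
best = min(pop, key=sphere)
```

The diversity issue the paper identifies is visible in this form: every wolf is pulled toward the same three leaders, so once they cluster there is no mechanism to reintroduce variety.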

Avadh Kishor, Pramod Kumar Singh
Evaluation of Huffman-Code and B-Code Algorithms for Image Compression Standards

Reducing the quantity of data without excessively reducing the quality of the multimedia content is called compression. Compressed multimedia data are faster to transmit and store than the original uncompressed data. For JPEG and JPEG 2000 images there are various techniques and standards for data compression. These standards consist of different functions, such as color space conversion and entropy coding. Huffman codes and B-codes are normally used in the entropy coding phase.
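The Huffman coding used in the entropy coding phase can be sketched with a binary heap: repeatedly merge the two lowest-frequency subtrees, then read codes off the tree. The sample string is an illustrative input, not image data:

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table for the symbols in `data`."""
    freq = Counter(data)
    if len(freq) == 1:                        # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # heap of (frequency, tiebreak, tree); a tree is a symbol or (left, right)
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))
        count += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):           # internal node: recurse
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                                 # leaf: record the symbol's code
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

codes = huffman_codes("abracadabra")
encoded = "".join(codes[ch] for ch in "abracadabra")
```

In a JPEG pipeline the symbols would be quantized DCT coefficient categories rather than characters, but the tree construction is the same.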

Chanda, Sunita Singh, U. S. Rana
Backmatter
Metadaten
Titel
Proceedings of Fifth International Conference on Soft Computing for Problem Solving
herausgegeben von
Millie Pant
Kusum Deep
Jagdish Chand Bansal
Atulya Nagar
Kedar Nath Das
Copyright-Jahr
2016
Verlag
Springer Singapore
Electronic ISBN
978-981-10-0448-3
Print ISBN
978-981-10-0447-6
DOI
https://doi.org/10.1007/978-981-10-0448-3