
About This Book

This volume contains 69 papers presented at ICICT 2015: International Congress on Information and Communication Technology. The conference was held on 9 and 10 October 2015 in Udaipur, India, and was organized by CSI Udaipur Chapter, Division IV, SIG-WNS, and SIG-e-Agriculture in association with ACM Udaipur Professional Chapter, The Institution of Engineers (India), Udaipur Local Centre, and Mining Engineers Association of India, Rajasthan Udaipur Chapter. The papers mainly focus on ICT for managerial applications, e-governance, IoT, and e-mining.

Table of Contents

Frontmatter

A Novel Methodology to Detect Bone Cancer Stage Using Mean Intensity of MRI Imagery and Region Growing Algorithm

Cancer has been a plague on society since the dawn of recorded history. Radical surgical resection represents the only chance for a cure, but unfortunately it is possible in only 15 % of patients. Even at experienced centers, the 5-year survival rates for the most favorable patients who undergo resection and adjuvant therapy are less than 20 %. In this paper, a methodology is proposed for identifying the part affected by bone cancer. The methodology uses scanned images captured at various locations of the human body, collected from different diagnostic labs. Segmentation based on a region growing algorithm is carried out to analyze the tumor region, from which the intensity of the cancer and the stage of the tumor are empirically calculated.
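As a rough illustration of the segmentation step, the Python sketch below grows a region from a seed pixel, accepting 4-connected neighbours whose intensity stays within a tolerance of the running region mean. The seed, tolerance, and synthetic image are illustrative placeholders, not values or data from the paper.

    import numpy as np
    from collections import deque

    def region_grow(img, seed, tol=10.0):
        """Grow a region from `seed`, accepting 4-connected neighbours whose
        intensity is within `tol` of the current region mean."""
        h, w = img.shape
        mask = np.zeros((h, w), dtype=bool)
        mask[seed] = True
        region_sum, region_count = float(img[seed]), 1
        queue = deque([seed])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                    if abs(float(img[nr, nc]) - region_sum / region_count) <= tol:
                        mask[nr, nc] = True
                        region_sum += float(img[nr, nc])
                        region_count += 1
                        queue.append((nr, nc))
        return mask

    # Toy usage: a synthetic bright "tumour" patch on a darker background.
    img = np.full((64, 64), 40.0)
    img[20:30, 20:30] = 200.0
    tumour_mask = region_grow(img, seed=(25, 25), tol=15.0)
    print("segmented pixels:", tumour_mask.sum(), "mean intensity:", img[tumour_mask].mean())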

K. E. Balachandrudu, C. Kishor Kumar Reddy, G. V. S. Raju, P. R. Anisha

BCI for Comparing Eyes Activities Measured from Temporal and Occipital Lobes

A brain–computer interface (BCI) is a system that communicates between a user and a machine. It provides a communication channel without using muscular activity. A BCI uses brain rhythms, recorded by invasive or non-invasive means, as its input. The brain generates rhythms continuously, whether we are thinking, sleeping, in deep sleep, working, or at rest. In this paper, data recorded from electrodes placed at the occipital and temporal lobes are analyzed, and the two lobes are compared using open- and closed-eye data.

Sachin Kumar Agrawal, Annushree Bablani, Prakriti Trivedi

An Adaptive Edge-Preserving Image Denoising Using Block-Based Singular Value Decomposition in Wavelet Domain

Image denoising is a very active research area in image processing. The essential requirement for a good denoising method is to preserve significant image structures (e.g., edges) after denoising. Wavelet transforms and singular value decomposition (SVD) have been independently used to achieve edge-preserving denoising results for natural images, and numerous denoising algorithms have utilized these two techniques separately. In this paper, a novel technique for edge-preserving image denoising, which combines wavelet transforms and SVD, is proposed. It is adaptive to the inhomogeneous nature of natural images. A multiresolution representation of the corrupted image is obtained by applying a discrete wavelet transform, and a block-SVD based, edge-adaptive thresholding scheme that relies on an estimate of the noise level is employed to reduce the noise content while preserving significant details of the original image. Comparison of the experimental results with other state-of-the-art methods reveals that the proposed approach achieves a very impressive gain in denoising performance.
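The paper's block-wise, edge-adaptive thresholding is not reproduced here; the simplified sketch below (assuming the PyWavelets package) only shows the general pipeline: decompose with a 2-D DWT, shrink small singular values of each detail sub-band, and reconstruct. The wavelet, decomposition level, noise level, and shrinkage rule are illustrative.

    import numpy as np
    import pywt

    def svd_shrink(band, noise_sigma):
        """Suppress small singular values of a detail sub-band (a hedged stand-in
        for the paper's block-wise, edge-adaptive thresholding)."""
        u, s, vt = np.linalg.svd(band, full_matrices=False)
        s = np.where(s > noise_sigma * np.sqrt(max(band.shape)), s, 0.0)
        return u @ np.diag(s) @ vt

    def denoise(img, wavelet="db4", level=2, noise_sigma=20.0):
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        # Keep the approximation, shrink every detail sub-band at every scale.
        new_coeffs = [coeffs[0]] + [
            tuple(svd_shrink(band, noise_sigma) for band in detail)
            for detail in coeffs[1:]
        ]
        return pywt.waverec2(new_coeffs, wavelet)

    noisy = np.random.default_rng(0).normal(128, 20, size=(128, 128))
    clean_estimate = denoise(noisy)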

Paras Jain, Vipin Tyagi

Performance Analysis of Voltage Controlled Ring Oscillators

The voltage controlled ring oscillator (VCO) is a critical and necessary component in data communication systems and clock recovery circuits. It is basically an oscillator whose output frequency is controlled by the input control voltage. This paper comprehensively analyzes different ring VCOs and summarizes their performance in terms of frequency range, power consumption, and bandwidth, enabling designers to select the most appropriate ring VCO for specific applications. It is concluded that the selected VCOs have widely varying characteristics.

Shruti Suman, K. G. Sharma, P. K. Ghosh

Modified Design of Integrated Ultra Low Power 8-Bit SAR ADC Architecture Proposed for Biomedical Engineering (Pacemaker)

An energy-efficient modified architecture of an 8-bit 100 kS/s SAR ADC for the biomedical implant pacemaker is presented in this paper. Given the stringent need to prolong the battery life of portable, battery-operated biomedical implants such as pacemakers, an improved SAR ADC architecture is proposed which ensures better performance than other reported SAR ADC architectures. The ADC employed in a pacemaker drains a large amount of power from the battery during analog-to-digital conversion. This work presents an ADC design that ensures microwatt operation, which in turn allows the pacemaker to run on a small battery. The ADC is realized in 180 nm CMOS technology operated at 1.8 V. The power consumption and energy efficiency reported in simulation are 2.5 µW and 0.77 pJ/state, with a precision of 6.68 bits.

Jubin Jain, Vijendra K. Maurya, Rabul Hussain Laskar, Rajeev Mathur

A Novel Symmetric Key Cryptography Using Dynamic Matrix Approach

One of the most challenging aspects of today’s information and communication technology is data security. Encryption is one of the processes used to secure data before it is communicated. It is a process of altering an intelligible form of data into an unintelligible form using an encryption algorithm and a key. To get the original data back, a decryption process is used. Encryption algorithms are of two types: symmetric key algorithms, which use the same key, and asymmetric key algorithms, which use different keys for encrypting and decrypting the data. In this paper, an algorithm is proposed based on several steps: creating a dynamic square matrix whose size depends on the length of the information, performing ASCII conversion, applying an XNOR operation, and then transposing the matrix to enhance the security and efficiency of the data. The performance of the proposed algorithm is compared with an existing algorithm and found to be superior on various parameters.
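A minimal Python sketch of the described pipeline, with padding and key handling invented purely for illustration: convert the message to ASCII codes, arrange them in a dynamically sized square matrix, apply a byte-wise XNOR with the key, and transpose the matrix.

    import math

    def encrypt(message, key):
        """Hedged sketch: ASCII conversion, a dynamic square matrix sized from the
        message length, byte-wise XNOR with the key, then a matrix transpose."""
        codes = [ord(ch) for ch in message]
        n = math.ceil(math.sqrt(len(codes)))          # dynamic matrix order
        codes += [32] * (n * n - len(codes))          # pad with spaces (illustrative)
        xnored = [~(c ^ key[i % len(key)]) & 0xFF for i, c in enumerate(codes)]
        matrix = [xnored[i * n:(i + 1) * n] for i in range(n)]
        transposed = [[matrix[r][c] for r in range(n)] for c in range(n)]
        return [b for row in transposed for b in row]

    cipher = encrypt("ATTACK AT DAWN", key=[0x3C, 0xA5, 0x0F])
    print(cipher)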

Neetu Yadav, R. K. Kapoor, M. A. Rizvi

Experimenting Large Prime Numbers Generation in MPI Cluster

Generating large prime numbers is a time-consuming problem. It is useful for key generation in network security. The problem of generating prime numbers is easy to parallelize, and high performance computing (HPC) facilities can be applied to obtain faster results. The embarrassingly parallel pattern applies to categories of algorithms where concurrency is explicit. We have conducted experiments on sequential and parallel approaches to prime number generation. In this paper, our objective is to analyze the time taken to generate large prime numbers using a multicore cluster. The results can be reused to estimate the time required for key generation in cryptography.
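A minimal sketch (assuming the mpi4py package) of the embarrassingly parallel split used in such experiments: each rank tests an interleaved slice of the search window by trial division and the results are gathered at rank 0. The search window and launch command are illustrative, not the paper's settings.

    # run with e.g.: mpiexec -n 4 python primes.py
    from mpi4py import MPI

    def is_prime(n):
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    LOW, HIGH = 10_000_000, 10_001_000          # illustrative search window
    # Embarrassingly parallel split: each rank tests an interleaved slice.
    local_primes = [n for n in range(LOW + rank, HIGH, size) if is_prime(n)]
    all_primes = comm.gather(local_primes, root=0)

    if rank == 0:
        primes = sorted(p for chunk in all_primes for p in chunk)
        print(len(primes), "primes found in", (LOW, HIGH))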

Nilesh Maltare, Chetan Chudasama

Design and Analysis of High Performance CMOS Temperature Sensor Using VCO

This paper presents a CMOS temperature sensor designed using a self-biased differential voltage controlled ring oscillator (VCRO) in 180 nm TSMC CMOS technology to achieve low power. The paper focuses on the design, simulation, and performance analysis of the temperature sensor and its various components. The VCRO used here has full-range voltage controllability along with a wide tuning range from 185 to 810 MHz, with a free-running frequency of 93 MHz. The power dissipation of the voltage controlled ring oscillator at a 1.8 V power supply is 438.91 µW. Parameters such as delay and power dissipation of individual blocks, including the CMOS temperature sensor components, voltage level shifter, counter, and edge-triggered D flip-flop, are also calculated for different power supply and threshold voltages. The power dissipation and delay of the VCRO-based temperature sensor at a 5 V power supply are 80.88 mW and 7.656 ns, respectively, and the temperature range is from −175 to +165.

Kumkum Verma, Sanjay Kumar Jaiswal, K. K. Verma, Ronak Shirmal

High Performance Add Drop Filter Based on PCRR for ITU-T G.694.2 CWDM System

In this work, we demonstrate an add-drop filter based on a photonic circular ring resonator. The proposed design uses a square-lattice structure of silicon rods surrounded by air. The dropping efficiency and coupling efficiency are explored using the 2D finite-difference time-domain (FDTD) method. The designed filter gives nearly 100 % dropping efficiency at 1500 nm and coupling efficiency at 1471 nm with a refractive index of 3.47. The photonic band gap of the proposed design has been assessed through the plane wave expansion (PWE) method.

Ekta Kumari, Pawan Kumar Inaniya

Efficient Data Dissemination in Wireless Sensor Network Using Adaptive and Dynamic Mobile Sink Based on Particle Swarm Optimization

Wireless sensor networks are an emerging technology for monitoring the physical world. Because of their energy constraints, saving energy and prolonging the network lifetime are the most important goals of many routing protocols. Dividing the network into segments is a strategic technique used to prolong the lifetime of a sensor network by reducing energy consumption. Since sensor nodes adjacent to the sink have to carry more traffic load when forwarding data, they deplete their energy quickly, which leads to network partitioning and the energy-hole problem. Introducing mobility into sensor networks brings new opportunities to improve network performance in terms of energy consumption, network lifetime, latency, throughput, etc. In this paper, a protocol is proposed for efficient data dissemination using mobile sinks in wireless sensor networks. We use multiple mobile sinks to gather the sensed data along predetermined paths; each sink moves forward along its path and returns to its starting position. The predetermined paths are calculated intelligently through particle swarm optimization (PSO), using the heterogeneity of energy (current resources with respect to average resources) and network density as objective functions. To gather data from a reference area, mobile sinks stay temporarily at some fixed points. The effectiveness of the proposed algorithm is evaluated in terms of network lifetime, number of dead sensors, energy consumption, end-to-end delay, network throughput, etc.
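For reference, a plain global-best PSO loop of the kind such protocols build on is sketched below; the paper's actual objective combining energy heterogeneity and network density is replaced by a labelled placeholder, and all parameters are illustrative.

    import numpy as np

    def pso(objective, dim, n_particles=30, iters=100, bounds=(0.0, 1.0), seed=0):
        """Plain global-best PSO minimizing `objective` over a box."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        x = rng.uniform(lo, hi, (n_particles, dim))       # positions
        v = np.zeros((n_particles, dim))                  # velocities
        pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
        gbest = pbest[pbest_val.argmin()].copy()
        w, c1, c2 = 0.7, 1.5, 1.5                          # inertia, cognitive, social weights
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            val = np.apply_along_axis(objective, 1, x)
            improved = val < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], val[improved]
            gbest = pbest[pbest_val.argmin()].copy()
        return gbest, pbest_val.min()

    # Placeholder objective standing in for the paper's energy/density criterion.
    best, best_cost = pso(lambda p: np.sum((p - 0.5) ** 2), dim=4)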

Nivedita Kumari, Neetu Sharma

A New Approach to Intuitionistic Fuzzy Soft Sets and Its Application in Decision-Making

Soft set theory (Comput Math Appl 44:1007–1083, 2002) was introduced recently as a model to handle uncertainty. More recently, characteristic functions for soft sets, and hence operations on them using this approach, were introduced in (Comput Math Appl 45:555–562, 2003). Following this approach, in this paper we redefine intuitionistic fuzzy soft sets (IFSS) and define operations on them. We also present an application of IFSS in decision-making which substantially improves on, and is more realistic than, the algorithms proposed earlier by several authors.

B. K. Tripathy, R. K. Mohanty, T. R. Sooraj, K. R. Arun

Categorical Data Clustering Based on Cluster Ensemble Process

Although efforts have been made to address the problem of clustering categorical data via cluster ensembles, with results that are competitive with conventional algorithms, it is observed that these techniques unfortunately produce a final data partition based on incomplete information. The underlying ensemble-information matrix presents only cluster-data point relations, with many entries left unknown. Categorical data clustering and the cluster ensemble approach have been related and compared in application areas with respect to relevant data. The main aim of this paper is to examine and share information between these two viewpoints and to use this shared information to build novel clustering algorithms for categorical data based on the cross-fertilization between the two resulting item sets, supported by experimental analysis. More precisely, we formally characterize the categorical data clustering (CDC) problem as an optimization problem from the perspective of cluster ensembles (CE), and apply a CE approach to clustering categorical data.

D. Veeraiah, D. Vasumathi

Region-Based Clustering Approach for Energy Efficient Wireless Sensor Networks

Nowadays there is a huge increase in the use of sensors in applications such as remote monitoring of the environment, automobiles, disaster-prone zones, home control, and military applications. The capabilities of a Wireless Sensor Network (WSN) can be extended using self-organization to change its behavior dynamically and achieve network-wide characteristics. Clustering techniques exhibit the self-organized behavior of WSNs. Sensor nodes are grouped into disjoint, nonoverlapping subsets called clusters. Cluster Heads (CHs) collect data from the sensor nodes present in the cluster and forward it to neighboring nodes using shortest path distance calculation and finally to the Base Station (BS). In the proposed clustering technique, the network area is divided into regions. The Cluster Head (CH) is elected using the highest residual energy and the node degree. Cluster communication is at most two-hop, which results in fewer messages from member nodes to the BS. Within a cluster, data is sent to the CH by member nodes. When a CH’s energy level falls below a threshold value, a new CH is elected and clusters are re-formed within each region. The proposed clustering technique can be used in military applications to detect and gain information about enemy movements. Results are obtained by varying the number of nodes at different transmission ranges. The simulation results show that the proposed clustering algorithm reduces energy consumption, prolongs network lifetime, and achieves scalability.

Kalyani Wankhede, Sumedha Sirsikar

Analyzing Complexity Using a Proposed Approximation Algorithm MDA in Permutation Flow Shop Scheduling Environment

Sequencing and scheduling play a very important role in service industries, planning, and manufacturing systems. A better sequencing and scheduling system has a significant impact on the marketplace, customer satisfaction, cost reduction, and productivity. Proficient sequencing therefore leads to improved utilization effectiveness and brings down the time required to complete the jobs. In this paper, we attempt to reduce the time complexity in an m-machine flow shop environment by reducing the number of sequences and to find an optimal or near-optimal makespan. For effective analysis, a well-known heuristic algorithm, CDS, is considered.
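For context, the makespan that such heuristics try to minimize can be computed with the standard permutation flow shop recurrence sketched below; exhaustive search over all sequences is feasible only for tiny instances, which is precisely why heuristics such as CDS are needed. The processing-time data are illustrative, not from the paper.

    from itertools import permutations

    def makespan(sequence, proc):
        """Completion time of the last job on the last machine for a given job
        sequence; proc[j][m] is the processing time of job j on machine m."""
        n_machines = len(proc[0])
        finish = [0.0] * n_machines
        for job in sequence:
            for m in range(n_machines):
                start = max(finish[m], finish[m - 1] if m else 0.0)
                finish[m] = start + proc[job][m]
        return finish[-1]

    # Toy 4-job, 3-machine instance (illustrative data only).
    proc = [[5, 4, 4], [5, 4, 4], [3, 2, 3], [6, 4, 1]]
    best = min(permutations(range(len(proc))), key=lambda s: makespan(s, proc))
    print(best, makespan(best, proc))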

Megha Sharma, Rituraj Soni, Ajay Chaudhary, Vishal Goar

Empirical Evaluation of Threshold and Time Constraint Algorithm for Non-replicated Dynamic Data Allocation in Distributed Database Systems

Data allocation plays a significant role in the design of distributed database systems. Data transfer cost is a major cost of executing a query in a distributed database system, so the performance of distributed database systems is greatly dependent on the allocation of data among the different sites of the network. The performance of static data allocation algorithms decreases as the retrieval and update access frequencies of queries from different sites to fragments change. Selecting a suitable allocation method is therefore a key design issue. In this paper, a data allocation framework for non-replicated dynamic distributed database systems using the threshold and time constraint algorithm (TTCA) is developed, and the performance of TTCA is evaluated against the threshold algorithm on the basis of the total cost of reallocation and the number of migrations of fragments from one site to another.

Arjan Singh

Automated Usability Evaluation of Web Applications

Evaluating the usability of web applications in terms of their learnability and navigability under current design principles is a prospective area in which web design practice could be substantially improved. The availability of a help system while navigating web applications could be a progressive change towards providing a better experience to end users. Providing such a system would be more cost-effective than suggesting improvements to the design process, which might not be adopted evenly by web application developers across the globe. The research work presented here aims at providing such a help system, which improves the user’s experience of a web application in terms of two factors: learnability and navigability. The proposed help system provides suggestions in the form of links to be clicked next according to the user’s goal. The test-bed for the experiments conducted is the Virtual Labs web application, which is designed as a set of virtual laboratories aimed at users who are deprived of the physical infrastructure for carrying out laboratory experiments in their institutes.

Sanchita Dixit, Vijaya Padmadas

An Efficient Approach for Frequent Pattern Mining Method Using Fuzzy Set Theory

Among data mining methods, association rule mining is well suited for applications such as decision making, catalog design, and forecasting. Existing association rule mining (ARM) methods are only applicable to datasets consisting of binary attributes, but for real-world applications, traditional ARM methods should be modified to handle numerical attributes as well. Fuzzy set concepts provide a solution to some disadvantages of traditional ARM methods. A fuzzy membership function provides a convenient way to quantize a numerical attribute of a dataset compared with other traditional discretization techniques, and association rules generated using fuzzy membership functions with linguistic labels have better interpretability.
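As a small illustration of fuzzy quantization, the sketch below maps a numerical attribute onto linguistic labels with triangular membership functions; the attribute, labels, and break-points are hypothetical, not taken from the paper.

    def triangular(x, a, b, c):
        """Triangular membership: 0 outside [a, c], peaking at 1 when x == b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    # Hypothetical fuzzification of a numerical attribute "age" into linguistic labels.
    labels = {"young": (0, 20, 40), "middle": (30, 45, 60), "old": (50, 70, 100)}
    age = 38
    memberships = {lab: triangular(age, *pts) for lab, pts in labels.items()}
    print(memberships)   # each transaction then contributes fractionally to fuzzy rules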

Manmay Badheka, Sagar Gajera

Virtual Machine Migration: A Green Computing Approach in Cloud Data Centers

Recent rapid growth in the development of and demand for high performance computing has pushed IT technocrats to devise energy-aware mechanisms so that CO2 emissions can be reduced to a great extent. The resources in cloud data centers are always over-provisioned in order to meet peak workloads. These resources consume a huge amount of energy if used at full capacity. By dynamically adopting green computing policies according to the current workload, the energy consumption of a cloud data center can be reduced. In the present study, the VM migration process is discussed and simulation-driven results are presented for the evaluation of the proposed heuristic, which is based on static upper and lower limits on allowed CPU utilization. A comparative analysis of resource utilization and power consumption of a data center, with and without the migration policy, reveals that a significant amount of power consumption can be saved by VM migration and that the utilization of resources can be optimized.

Minu Bala, Devanand

Detecting Myocardial Infarction by Multivariate Multiscale Covariance Analysis of Multilead Electrocardiograms

In this work, multiscale covariance analysis of multilead electrocardiogram signals is proposed to detect myocardial infarction (MI). Due to multiresolution decomposition, diagnostically important clinical components are grossly segmented at different scales. If multiscale multivariate matrices are formed using all ECG leads and subjected to covariance analysis at wavelet scales, the covariances change from normal as MI evolves. This is due to the underlying pathology, which is seen in a few ECG leads. To capture the changes that occur during infarction, a multiscale multivariate distortion metric is applied to the covariance structures. To evaluate the proposed method, data sets are taken from the PTB diagnostic ECG database, including healthy control (HC), myocardial infarction in early stage (MIES), and acute myocardial infarction (AMI) subjects. The results show that the proposed method can detect pathological MI subjects. For MI detection, the accuracy, sensitivity, and specificity are found to be 80, 76, and 84 %, respectively. The proposed method is simple and can be easily implemented for offline analysis and diagnosis of infarction using multiple leads.

L. N. Sharma, S. Dandapat

QoS Improvement in MANET Using Particle Swarm Optimization Algorithm

A MANET is a type of communication network in which the communicating devices can move anywhere. Mobility of devices plays a key role in the performance of any routing protocol used in a MANET, and the performance of any MANET routing protocol depends on the values of its parameters. What values of these parameters make the protocol’s performance optimal? In this work, the particle swarm optimization technique is used to select optimal values of the parameters of the AODV routing protocol to improve QoS in MANETs. The Java programming language is used to implement the optimization technique, and the output values are then used as input to the network simulator NS2.35 for measuring the performance of AODV. The experimental results show a 70.61 % drop in Average End-to-End Delay (AE2ED), a 34.06 % drop in Network Routing Load (NRL), and a slight improvement (1.81 %) in Packet Delivery Ratio (PDR) using the optimal combination of parameter values.

Munesh Chandra Trivedi, Anupam Kumar Sharma

Comparing Various Classifier Techniques for Efficient Mining of Data

With recent advances in computer technology, large amounts of data can be collected and stored, but all this data becomes more useful when it is analyzed and dependencies and correlations are detected. This can be accomplished with machine learning algorithms. WEKA (Waikato Environment for Knowledge Analysis) is a collection of machine learning algorithms implemented in Java. WEKA contains a large number of learning schemes for classification and regression (numeric prediction). Using it, we can find the prediction value of a dataset, and the stored data can be viewed in different forms such as a matrix, graph, curve, or tree. In this paper, we compare the results of three classifiers, including J48 and Naïve Bayes, after preprocessing the data. We compare the results in a way that provides an easy means of understanding all the datasets and their condition.

Dheeraj Pal, Alok Jain, Aradhana Saxena, Vaibhav Agarwal

Fingerprint Recognition System by Termination Points Using Cascade-Forward Backpropagation Neural Network

Fingerprint authentication is one of the oldest biometric systems. This paper presents a new approach to fingerprint recognition in which only the termination points of the minutiae are used for authentication. The system declares a match between a fingerprint image and a database image only when the match is 100 % or more than 90 %. Finally, a neural network approach is applied and the neural network performance is measured. The false accept rate and false reject rate are also reported.

Annu Agarwal, Ajay Kumar Sharma, Sarika Khandelwal

On the Dynamic Maintenance of Spanning Tree

The paper presents a centralized heuristic algorithm for the secure and dynamic maintenance of spanning tree in wireless networks. Initially, we construct the minimum spanning tree that models the given network. Later, in order to reflect the topological dynamics in secure manner, we reorganize the minimum spanning tree. The resulting logical structure is a spanning tree; however, it may not be minimum spanning tree. Our findings have been substantiated with simulation results.

Isha Singh, Bharti Sharma, Awadhesh Kumar Singh

Internet of Things: Future Vision

The increase in communication devices and their adoption by the modern world gives rise to the Internet of Things (IoT), which also covers sensors and actuators blended seamlessly with the environment around us. As IoT covers a variety of enabling device technologies such as sensors, communication devices, and smartphones, the next revolution will be the transformation of the present Internet into a fully integrated Future Internet. In this paper, we present a proposed architecture for cloud-based IoT.

Sushma Satpute, Bharat Singh Deora

Design and Analysis of Energy Efficient OPAMP for Rectifier in MicroScale Energy Harvesting (Solar Energy)

The physical process by which energy is gathered from the surrounding environment is called energy harvesting. For several micro-scale electronic systems, electrical energy harvested from sunlight proves to be an attractive and feasible solution. This paper presents an energy-efficient OPAMP design used in conjunction with one of the most important components of the electrical interface, the rectifier (an AC–DC converter). A high-gain OPAMP is proposed to harvest the maximum electrical energy obtained from the AC–DC converter; since the obtained electrical signal lies within a few millivolts, it needs amplification to provide the startup voltage for high-end electronic applications. In this work, a detailed analysis is carried out between two OPAMP designs, a high-gain OPAMP and an OPAMP with a current buffer compensation technique, targeted at maximizing the power extracted from the converters. Both designs are implemented in 180 nm technology; the high-gain OPAMP operates at 1.2 V and the other design operates at 1.8 V. The gain reported for the high-gain OPAMP design is 95.41 dB with a power consumption of 188 nW, and for the current-buffer-compensated OPAMP it is 90.71 dB with a power consumption of 244 nW.

Vijendra K. Maurya, R. M. Mehra, Anu Mehra

Design and Analysis of Various Charge Pump Schemes to Yield Solar Energy Under Various Sunlight Intensities

A low-voltage, ultralow-power, four-stage charge pump using a dynamic charge transfer switch scheme, targeted at minimizing the voltage loss due to threshold-voltage drop and the body effect, is implemented at circuit level in a 0.18 µm CMOS process and proposed for harvesting energy obtained from sunlight. Output-stage pumping is handled by the clocking technique of the proposed charge pump instead of an output stage configured with a diode. The proposed design is capable of boosting solar voltages starting from 0.1 to 3.3 V, with a maximum output of 4.7 V reported in simulation. Since sunlight intensity does not remain static, the output voltage is analyzed under various sunlight intensities. The simulation results report that the power consumption of the proposed design is 89 nW, which is lower than that of the existing design. In this work, a comparison is made between various design schemes and an analysis is carried out of the power consumption, frequency, pumping voltage, and delay.

Anurag Paliwal, R. M. Mehra, Anu Mehra

Performance Evaluation of Speech Synthesis Techniques for English Language

The conversion of text to synthetically produced speech is known as text-to-speech synthesis (TTS). It can be achieved by concatenative speech synthesis (CSS) and hidden Markov model techniques. Quality is the most important criterion for the artificial speech produced. This study involves a comparative analysis of the quality of speech synthesis using the hidden Markov model and unit selection approaches. The quality of the synthesized speech is evaluated with two methods: subjective measurement using the mean opinion score, and objective measurement based on the mean square score and peak signal-to-noise ratio (PSNR). Mel-frequency cepstral coefficient features are also extracted from the synthesized speech. The experimental analysis shows that the unit selection method results in a better synthesized voice than the hidden Markov model.

Sangramsing N. Kayte, Monica Mundada, Santosh Gaikwad, Bharti Gawali

Systematization of Reliable Network Topologies Using Graph Operators

The aim of this paper is to study and compare the reliability of networks using the Wiener index. Computer communication using electronic messaging has increased in recent years, and the calculation of the overall reliability of networks has become an important problem. This paper presents a topological invariant that calculates the reliability of networks newly constructed using the graph operations of tensor product and Cartesian product from topology theory. Simulated experiments with the proposed topological invariant have been carried out for the new topologies and compared with existing topologies.

A. Joshi, V. Subedha

Network Performance Analysis of Startup Buffering for Live Streaming in P2P VOD Systems for Mesh-Based Topology

This paper explores mesh-based clustering for different-start video streaming in P2P systems and evaluates the performance of non-clustered and clustered models. These models are based on a mesh-based P2P streaming topology with peer join/leave. A new approach in which peers are grouped into clusters is proposed to tackle P2P VOD streaming. The proposed models were simulated and verified using OMNET++ V.4. A clustered model for video streaming is proposed and simulated to study the network performance under startup buffering in terms of frame loss, startup delay, and end-to-end delay. The simulation results for the non-clustered and clustered models are compared. They show that the impact of startup buffering on both models is bounded by the time limits of the release buffer and playing buffer under the proposed models, which reduces the wait time to view the video and improves overall VOD system performance. The proposed model is also able to provide missing parts (of a video) to late viewers, which offers the facilities of both live and stored streaming from the user’s point of view; it is therefore functionally hybrid and highly useful.

Nemi Chand Barwar, Bhadada Rajesh

Cryptanalysis of Image Encryption Algorithms Based on Pixels Shuffling and Bits Shuffling

In this paper, cryptanalysis of a certain class of shuffling-based image encryption algorithms is presented. The class includes image encryption algorithms which use either a combination of pixel-level and bit-level shuffling or bit-level shuffling only. The proposed chosen-plaintext attack on such algorithms can recover the plaintext image without any clue or secret key. To demonstrate the practicability and effectiveness of the attack, it is applied to break one such algorithm recently suggested by Huang et al. [Telecommun Syst 52(2), 563–571, 2013]. The Huang et al. image encryption uses pixel-level shuffling followed by bit-level shuffling. Simulation of the attack confirms the feasibility of the proposed cryptanalysis, and the attack can easily be extended to break other such image encryption algorithms as well. Further, some security issues related to these algorithms are discussed and a few improvements are proposed which can make this class of encryption algorithms more robust against the proposed attack.
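The toy sketch below illustrates why position-shuffling-only ciphers fall to chosen-plaintext attacks: one chosen plaintext whose entries are all distinct exposes the whole permutation. It is a greatly simplified stand-in, not the paper's attack, which also handles bit-level shuffling.

    import numpy as np

    def shuffle_encrypt(plain_flat, perm):
        """Stand-in for a cipher that only permutes pixel positions with a fixed,
        key-dependent permutation (the attacked class, greatly simplified)."""
        return plain_flat[perm]

    rng = np.random.default_rng(1)
    n = 16 * 16
    secret_perm = rng.permutation(n)                  # unknown to the attacker

    # Chosen plaintext: position i holds the value i, so the ciphertext at
    # position i directly reveals which plaintext position was moved there.
    probe_cipher = shuffle_encrypt(np.arange(n), secret_perm)
    recovered_perm = probe_cipher

    # Decrypt an arbitrary intercepted ciphertext without knowing the key.
    plain = rng.integers(0, 256, n)
    cipher = shuffle_encrypt(plain, secret_perm)
    recovered = np.empty(n, dtype=plain.dtype)
    recovered[recovered_perm] = cipher                # undo the permutation
    assert np.array_equal(recovered, plain)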

Pankaj Kumar Sharma, Aditya Kumar, Musheer Ahmad

Circular-Shape Slotted Microstrip Antenna

In this paper, a novel single-feed circular-shape slotted microstrip antenna is proposed for C-band (4–8 GHz) applications. The antenna is designed on FR4 glass epoxy material with a dielectric constant of 4.4 and a height of 1.56 mm, and is fed by a coaxial probe. Ansoft HFSS software is used for the simulation of the proposed antenna, and a Vector Network Analyzer (VNA) is used for measuring its Return Loss (RL) and Voltage Standing Wave Ratio (VSWR). The measured results, such as gain, bandwidth, and return loss, confirm the validity of the design and show good agreement with the simulated results. The proposed antenna shows bandwidths of 191 MHz at 4.34 GHz, 190 MHz at 5.9 GHz, and 106 MHz at 6.51 GHz.

Sumanpreet Kaur Sidhu, Jagtar Singh Sivia

Energy Efficient Data Aggregation Technique Using Load Shifting Policy for Wireless Sensor Network

Data redundancy is quite common in wireless sensor networks (WSNs) where nodes are deployed densely. The reason behind such deployment is to achieve reliability against communication failure, which occurs when a node transmitting data fails. In a WSN there is no way other than keeping redundant nodes to solve the communication failure problem: if redundant nodes are available, then when a node fails its data can be recovered from its redundant nodes. Although redundancy provides reliability, redundant nodes generate a larger number of redundant packets, which consumes more network energy, because densely deployed nodes sense the same information and send it to the sink, the sink wastes energy processing redundant data, and the redundancy generates heavy traffic in the network. Hence, there is a need to trade off energy conservation against reliability, which requires finding the optimization point of redundancy in a WSN so that both reliability and energy conservation are maintained. In this paper we use a clustering-based load shifting policy (LSP) to eliminate redundancy down to an adequate level and reach this optimization point. Data aggregation eliminates redundancy from the WSN; we perform data aggregation at two levels while keeping up to 50 % redundant nodes to achieve reliability. We compare traditional cluster-based data aggregation with our LSP-based data aggregation. Simulation results show that LSPDA has lower average energy consumption and a longer lifetime than the traditional cluster-based data aggregation method.

Samarth Anavatti, Sumedha Sirsikar, Manoj Chandak

Retrieving Instructional Video Content from Speech and Text Information

Learning from video lectures is becoming popular among today’s generation due to its considerable advantages and easier availability compared with classroom learning. Many institutes and organizations are adopting this method for teaching and learning, and an enormous amount of data is generated in the form of video lectures. Extracting the desired information from the desired video out of the vast amount of video available on the Internet is difficult. In this paper, we use techniques for automatically retrieving information from video files and collecting it as metadata for those files. For efficient retrieval of text from videos, we use an OCR (Optical Character Recognition) tool to extract text from slides and an ASR (Automatic Speech Recognition) tool for recognizing information from the speaker’s speech. First, we perform segmentation and classification of video frames to identify the key frames. Then the OCR and ASR tools are used to extract information from the video slides and audio speech, respectively. The collected data can be stored as metadata for the file. Finally, the search can be made more efficient by applying clustering and ontology concepts.

Ashwini Y. Kothawade, Dipak R. Patil

Performance Improvement of SLM-Based MC-CDMA System using MIMO Technique

This paper presents the performance of a SeLective Mapping (SLM)-based Multicarrier Code Division Multiple Access (MC-CDMA) system with a MIMO technique. MC-CDMA fulfills the requirements of the forthcoming generation of wireless communication and offers high data rates with the additional benefit of low Inter Symbol Interference (ISI). However, a high Peak to Average Power Ratio (PAPR) is the major limitation of the MC-CDMA system; it reduces the efficiency of the High Power Amplifier (HPA) and radio frequency components at the base station. To reduce PAPR, SLM is a simple approach in which the input data sequence is multiplied by Mp phase sequences before multicarrier modulation, and the candidate with the minimal PAPR is chosen for transmission. In this work, a MIMO technique is used with an SLM-based MC-CDMA downlink transmission system, which not only reduces PAPR but also improves the BER performance. The results show that the PAPR of the proposed scheme is lower than that of the SLM-based MC-CDMA system, and its BER performance outperforms the SLM-based MC-CDMA system. A comparative study of PAPR and BER performance shows that low PAPR can be achieved by the proposed scheme with improved BER.
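A minimal sketch of the SLM selection step on a single multicarrier block (CDMA spreading and MIMO encoding omitted): generate Mp phase-rotated candidates, take the IFFT of each, and keep the one with the lowest PAPR. All parameters and the toy symbol mapping are illustrative.

    import numpy as np

    def papr_db(x):
        """Peak-to-average power ratio of a time-domain block, in dB."""
        power = np.abs(x) ** 2
        return 10 * np.log10(power.max() / power.mean())

    rng = np.random.default_rng(0)
    N, Mp = 64, 8                                    # subcarriers, candidate phase sequences
    data = rng.choice([1, -1], N) + 1j * rng.choice([1, -1], N)   # toy QPSK-like symbols

    # SLM: multiply the frequency-domain block by Mp random phase sequences and
    # keep the candidate with the lowest PAPR after the IFFT.
    phases = np.exp(1j * rng.uniform(0, 2 * np.pi, (Mp, N)))
    candidates = np.fft.ifft(data * phases, axis=1)
    best = candidates[np.array([papr_db(c) for c in candidates]).argmin()]
    print("original PAPR:", papr_db(np.fft.ifft(data)), "dB, SLM PAPR:", papr_db(best), "dB")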

Madhvi Jangalwa, Vrinda Tokekar

SNR Improvement for Evoked Potential Estimation Using Wavelet Transform Averaging Technique

Evoked potential (EP) measurement involves giving a stimulus to the subject and recording the response of the brain. The background electroencephalogram (EEG) noise must be removed to reveal the response to the given stimulus. In this paper, the wavelet transform is used to extract the responses and to improve the signal-to-noise ratio (SNR). The wavelet transform averaging technique of estimation improves the SNR by a large amount over many sweeps of the EP. Two different wavelet transforms, the Daubechies and the Biorthogonal wavelet transforms, are used to improve the SNR, and the SNR is compared with that of the conventional ensemble averaging technique. Visual Evoked Potential (VEP) signals are considered for the analysis.

M. L. Shailesh, Anand Jatti

A GIS and Agent-Based Model to Simulate Fire Emergency Response

In this paper, a computing model for Fire Emergency Response is discussed. The Fire Emergency Response (FER) system is considered here as a complex heterogeneous system composed of Fire Incident, Fire Station, Fire Emergency Vehicle, and Road Network components. Agent-based Modeling (ABM) is used to model the properties and behavior of each component. To incorporate spatial properties and spatial operations in modeling the behavior of the components, a Geographical Information System (GIS) is used. Simulation of the model provides results on the dynamic behavior of the components through their interactions with other components. The FER system model is implemented in GAMA 1.5.1, a GIS and agent modeling platform for simulation.

Mainak Bandyopadhyay, Varun Singh

Analyzing Link Stability and Throughput in Ad Hoc Network for Ricean Channel by Varying Pause Time

The growth and usage of wireless communication networks in the last decade has been quite significant and, compared with other new technologies, vast. The cutting edge of a wireless network is the capability of a wireless node to remain in contact with the whole world while remaining mobile. However, the continuous movement of nodes in ad hoc networks also affects the performance of the network. This paper studies the effects of pause time on the link connectivity and throughput of mobile nodes in the Ricean fading channel. In this work, the pause time and Ricean constant (k) are varied to study their effects on the connectivity performance of mobile nodes. It is observed that the throughput and link duration increase with increases in both pause time and the Ricean constant.

Bineet Kumar Joshi, Bansidhar Joshi

A Review on Pixel-Based Binarization of Gray Images

This paper presents a review of the binarization of gray images. Binarization is a technique by which an image is converted into a two-level (black-and-white) image, and it is an important step in most document image analysis systems. Since a digital image is a set of pixels, many binarization techniques assign a definite intensity value to each pixel. A gray image is an image in which each pixel carries only an intensity value, i.e., there is no color information, only brightness. Usually, a black-and-white picture is considered a gray image in which black has the lowest intensity and white the highest.
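A minimal sketch of pixel-based global binarization, using the image mean as a stand-in threshold for the more elaborate methods surveyed in the paper; the data are random and purely illustrative.

    import numpy as np

    def binarize(gray, threshold=None):
        """Pixel-based global binarization: pixels above the threshold become
        white (1), the rest black (0). If no threshold is given, the image mean
        is used as a simple, hedged stand-in for more elaborate methods."""
        if threshold is None:
            threshold = gray.mean()
        return (gray > threshold).astype(np.uint8)

    page = np.random.default_rng(0).integers(0, 256, (32, 32))
    bits = binarize(page)
    print(bits.mean(), "fraction of pixels mapped to white")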

Ankit Shrivastava, Devesh Kumar Srivastava

Mobility- and Energy-Conscious Clustering Protocol for Wireless Networks

In this paper, we present a distributed clustering protocol for mobile wireless sensor networks. A large majority of research on clustering and routing algorithms for WSNs assumes a static network and is hence rendered inefficient in the case of highly mobile sensor networks, which is the aspect addressed here. MECP is an energy-efficient, mobility-aware protocol that uses information about the movement of sensor nodes and their residual energy as attributes in network formation. It also provides a fault tolerance mechanism to decrease packet data loss in case of cluster head failures.

Abhinav Singh, Awadhesh Kumar Singh

RE2R—Reliable Energy Efficient Routing for UWSNs

Wireless sensor networks are used in nearly every area of our daily life; they help us in a straightforward and cost-effective manner. WSNs are usually deployed at unreachable places to collect information about the surroundings. As three-fourths of the earth is covered with water, gathering information underwater requires underwater sensor networks. Compared to terrestrial networks, underwater sensor networks have several limitations: low bandwidth, high propagation delay, and low transmission power. Somewhat as with terrestrial sensor nodes, a crucial problem with UWSNs is finding an efficient route between a source and a destination. We reviewed several location-based and location-free routing algorithms and protocols and identified the challenges in designing energy efficient routing protocols. Considering these challenges, we tried to design a reliable and energy efficient routing protocol for underwater sensor networks. Through a simulation study using the NS-2 simulator, we show a significant improvement in terms of data delivery ratio, node dead ratio, and energy consumption.

Khan Gulista, Gola Kamal Kumar, Rathore Rahul

Mitigation Techniques for Gray Hole and Black Hole Attacks in Wireless Mesh Network

Wireless mesh networks (WMNs) are considered a key technology in today’s networking era, in which security is a vital constraint. Several approaches have been proposed to provide secure communication in WMNs, but the possibility of attacks on communication always exists and is very hard to eliminate. Black hole and gray hole attacks are the two major attacks at the network layer in WMNs. RAODV, IDSAODV, and RIDAODV are some security approaches against these attacks; they can protect the communication to a certain extent. We have proposed a cache-based secure AODV routing protocol, SDAODV, in which, instead of using the RREQ messages, security is provided by using the last sequence number of each packet. Using this approach, we have improved the network throughput to a certain extent. The performance of the proposed approach is evaluated through throughput graphs.

Geetanjali Rathee, Hemraj Saini

Investigation for Land Use and Land Cover Change Detection Using GIS

For better development and good spatial planning, it is necessary to understand the recent changes that have occurred in the surrounding environment; the change detection method is usually deployed for this. The underlying research study did this for Nanded city, using land use and land cover change variables as indicators of change detection. The results helped us understand the urban sprawl that took place in the periods 1973–1992 and 1992–2014. Our approach used remote sensing technology: Landsat images (MSS, TM, and LC) were collected and analyzed. This paper can serve as a model for GIS-based LU/LC change detection for any city in India.

Prakash Shivpuje, Nilesh Deshmukh, Parag Bhalchandra, Santosh Khamitkar, Sakharam Lokhande, Vijay Jondhale, Vijay Bahuguna

Effectiveness of Computer-Based Instructional Visualization and Instructional Strategies on e-Learning Environment

Learning with visualization is a new trend in the teaching and learning environment. However, the questions in this study are: do all types of visualization and strategies equally affect the achievement of various learning objectives, and how do computer-generated questions with and without feedback affect the achievement of learning objectives? To investigate the effectiveness of different types of visualization and strategies, the researchers developed three types of instructional modules (static, animated, and interactive) and two types of instructional strategies (question and feedback). A total of 540 students were selected for the study using specific matching criteria. MANOVA was used to find group differences across conditions. The results showed a significant mean difference between conditions: students in the interactive visualization condition performed better than those in the animated and static conditions, and the question-and-feedback conditions were more effective than the no-strategy and question-only conditions with respect to various learning outcomes. The results are discussed critically from several theoretical viewpoints.

Sanju Saha, Santoshi Halder

Test Case Reduction Using Decision Table for Requirements Specifications

The majority of software development cost is incurred in software testing. Most often, the testing of software is carried out after the code has been prepared, and the test cases are obtained from the code. This approach may work well, but it does not guarantee that all the requirements are incorporated in the code or that each of the critical paths has been tested. In the early phases of software development, a decision table is used to generate test cases for functional testing. This paper proposes a technique for automated optimization of test cases generated through a decision table. A framework for test case generation from a decision table derived from the SRS, and an algorithm for decision table optimization, are proposed.

Avinash Gupta, Anshu Gupta, Dharmender Singh Kushwaha

Resource Management Using ANN-PSO Techniques in Cloud Environment

In the cloud environment, multiple requests arrive from clients at the datacenters, and resources must be assigned to all of them. In this paper, the main objective is to find a suitable mapping between requests and resources. To do this, we use an artificial neural network (ANN) with the PSO algorithm. In this algorithm, the input layer (client) sends requests with some requirements; according to the requirements, we calculate the resource cost for three clusters, namely high, medium, and low. Since the ANN supports parallel processing, we can process all requests, whether they belong to the high, medium, or low cluster, and hence optimize processing time and cost. The PSO algorithm works on the hidden layer as a scheduler. Since particle swarm optimization (PSO) supports fast convergence and time constraints, both techniques together minimize cost and increase availability and reliability, and the results show improved performance.

Narander Kumar, Pooja Patel

Congestion Control in Heterogeneous Wireless Sensor Networks for High-Quality Data Transmission

A heterogeneous wireless sensor network (HTWSN) is the most preferable and in-demand technology for military applications because of its low cost and high performance in terms of high-quality data transmission with low end-to-end delay. An HTWSN can be established with variably configured sensor nodes to detect and monitor complex, multitasking events efficiently within the network. But congestion is the most serious issue, as it may cause high packet loss; increase the number of retransmissions, frequent link failures, and node failures; lower the network lifetime by increasing energy consumption; and lower throughput. Most existing congestion control protocols are developed for homogeneous wireless sensor networks and may not help achieve high throughput in an HTWSN. We therefore propose a new congestion control protocol (CCP) for HTWSNs which estimates the congestion control degree (CCD) at each node in order to identify, in advance, nodes that will become congested. Accordingly, CCP can enable its load balancing technique effectively and balance the data traffic between the future congested nodes and the source node to achieve high-quality data transmission in the HTWSN.

Kakelli Anil Kumar, Addepalli V. N. Krishna, K. Shahu Chatrapati

Robust Data Model for Enhanced Anomaly Detection

As the volume of network usage increases, inexorably, the proportion of threats is also increasing. Various approaches to anomaly detection are currently in use, each with its own merits and demerits. Anomaly detection is the process of classifying user data as either normal or anomalous; most of the records are normal. When such imbalanced datasets are analyzed with machine learning algorithms, performance degrades substantially and the class label cannot be predicted accurately. In this paper, we propose a hybrid approach to address these problems by combining class balancing and rough set theory (RST). This approach enhances the anomaly detection rate, and empirical results show considerable performance improvements.

R. Ravinder Reddy, Y. Ramadevi, K. V. N. Sunitha

Database Retrieval-Based Digital Watermarking for Educational Institutions

Multimedia broadcast monitoring is one of the major challenges and has drawn the attention of several researchers. The database retrieval technique is an application used to provide security for sensitive images. Image authentication is provided by adding a watermark of a given string to the sensitive images. Security for the image is provided by encoding the given image with a key supplied by the user, which is stored in the database as numeric data. The user is provided with a unique id, which is sent to the receiver; the receiver uses this unique id to retrieve the image. Digital image watermarking is one of several methods that hide information. Two factors are considered during watermarking: imperceptibility and robustness. Imperceptibility is measured by PSNR and robustness by NC. A genetic algorithm is employed in the proposed watermarking algorithm.
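For reference, common definitions of the two quality measures mentioned are sketched below (one widely used form of NC among several); the data are random and purely illustrative.

    import numpy as np

    def psnr(original, watermarked, peak=255.0):
        """Peak signal-to-noise ratio in dB (imperceptibility measure)."""
        mse = np.mean((original.astype(float) - watermarked.astype(float)) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

    def nc(w_original, w_extracted):
        """Normalized correlation between embedded and extracted watermark bits
        (robustness measure); one common definition among several in the literature."""
        a = w_original.astype(float).ravel()
        b = w_extracted.astype(float).ravel()
        return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

    # Toy usage on random data (illustrative only).
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, (64, 64))
    marked = np.clip(img + rng.integers(-2, 3, img.shape), 0, 255)
    print(psnr(img, marked), nc(rng.integers(0, 2, 256), rng.integers(0, 2, 256)))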

T. Sridevi, S. Sameen Fatima

Conceptual Modeling Through Fuzzy Logic for Spatial Database

Fuzzy logic expresses the degree of vagueness or uncertainty that cannot be captured by crisp spatial objects, which have precisely determined shapes and boundaries. Using fuzzy logic with a spatial database for conceptual modeling allows uncertain data types to be handled in a fuzzy object-oriented database. Fuzzy approaches have been extensively applied to modeling different databases in order to explicitly represent and manipulate imprecise and uncertain data. In this paper, a new conceptual modeling approach is developed and studied on the case of a traffic modeling and route identification problem.

Narander Kumar, Ram Singar Verma, Jitendra Kurmi

Performance Analysis of Clustering Algorithm in Sensing Microblog for Smart Cities

A smart city is an aspiration of the various stakeholders of a city. We strongly believe that social media can be one of the real-time data sources that help stakeholders realize this dream. In this paper, we analyze the real-time data provided by Twitter in order to empower citizens by keeping them updated about what is happening around the city. We have implemented various clustering algorithms, such as k-means, hierarchical agglomerative clustering, and LDA topic modeling, on the Twitter stream and report results with purity 0.476, normalized mutual information (NMI) 0.3835, and F-measure 0.54. We conclude that HA-ward outperforms k-means and LDA substantially. We also conclude that the results are not impressive and that a separate feature-based clustering algorithm needs to be designed. We have identified various tasks for mining microblogs in the ambit of a smart city, such as event detection, geo-tagging, and city clustering based on user activity on the ground.

Sandip Modha, Khushbu Joshi

Offloading for Application Optimization Using Mobile Cloud Computing

Mobile applications are increasingly becoming ubiquitous and provide rich functionalities and services to mobile users. But despite technological advancements, smart mobile devices (SMDs) are still low-potential computing devices constrained by battery life, storage capacity, and network bandwidth, which obstructs the execution of computationally intensive applications. Thus, mobile cloud computing is employed to optimize computationally intensive applications, using computation offloading techniques to increase the SMDs’ capabilities. The aim of this paper is to emphasize the specific issues related to mobile cloud computing and offloading, and to conclude by analyzing the challenges that have not yet been met and charting a roadmap in this direction.

Chinu Singla, Sakshi Kaushal

Integration of Color and MDLEP as a Feature Vector in Image Indexing and Retrieval System

In the process of image retrieval, more information can be extracted by combining two or more features. Feature vectors based on local patterns are very popular for deriving the local information present in an image. The majority of these methods are based on encoding the variation in gray scale values between the center pixel and its neighboring elements; the center pixel is assigned a value which is reflected in a histogram. The LBP operator became the first of its kind, in which the intensity value of the center pixel is treated as a threshold to capture information by comparison with the other neighbors; however, directional information is not explored in that method. The DLEPs were proposed to code the edge information mainly in four directions, and the performance of directional local extrema patterns can be improved by taking the magnitude into consideration. In this paper, we propose a new feature vector for an image retrieval system by combining color and MDLEP. The results show a significant improvement in terms of precision and recall.

L. Koteswara Rao, D Venakata Rao, P. Rohini

Reversible Data Hiding Through Hamming Code Using Dual Image

In this paper, we propose a new dual-image based reversible data hiding scheme through Hamming code (RDHHC) using shared secret keys. We collect a block of seven pixels from the cover image and copy it into two arrays. We then adjust the redundant LSB bits using odd parity such that any error introduced can be recovered at the receiver end. Before data embedding, we first complement the bit at the position given by the shared secret key of the block. After that, we embed a secret message bit by introducing an error at any position of the block except the key position. The receiver detects and corrects the error using the Hamming error correcting code. Two shared secret keys κ and ξ are used to perform these data hiding and recovery operations. We distribute the two stego pixel blocks between the dual images based on the bit pattern of the shared secret key ξ of length 128 bits. Finally, we compare our scheme with other state-of-the-art methods and obtain reasonably better performance in terms of security and quality.
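A sketch of the syndrome-decoding step at the heart of Hamming-code based schemes: the (7,4) parity-check matrix locates a single flipped bit in a 7-element block, so deliberately flipping one bit of a valid block can carry data that the receiver recovers while restoring the block. Keys, dual images, and the paper's parity adjustment are omitted; the example block is illustrative.

    import numpy as np

    # Parity-check matrix of the (7,4) Hamming code; columns are the binary
    # representations of positions 1..7, so the syndrome equals the error position.
    H = np.array([[0, 0, 0, 1, 1, 1, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [1, 0, 1, 0, 1, 0, 1]])

    def correct(block7):
        """Locate and flip a single-bit error in a 7-bit block (receiver side)."""
        syndrome = (H @ block7) % 2
        pos = int(syndrome[0] * 4 + syndrome[1] * 2 + syndrome[2])   # 0 means no error
        if pos:
            block7 = block7.copy()
            block7[pos - 1] ^= 1
        return block7, pos

    # Toy embedding step in the spirit of RDHHC: flipping one LSB of a valid
    # 7-element block hides one "error position"; the receiver recovers both the
    # original block and the flipped position with `correct`.
    codeword = np.array([0, 0, 0, 0, 0, 0, 0])       # a valid (all-zero) codeword
    stego = codeword.copy()
    stego[4] ^= 1                                     # hide data: error at position 5
    recovered, position = correct(stego)
    print(position, recovered)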

Biswapati Jana, Debasis Giri, Shyamal Kumar Mondal

A Hybrid Fault Tolerant Scheduler for Computational Grid Environment

A computational grid is an environment for achieving better performance and throughput by pooling geographically distributed heterogeneous resources dynamically, depending on their availability, capability, performance, cost, and users’ quality-of-service requirements. Fault tolerant grid scheduling is a significant concern for computational grid systems. Failures can be handled either before or after tasks are scheduled on grid resources; generally, two approaches are used, namely the post-active and the pro-active fault tolerant approaches. Recently, a proposed fault tolerant scheduler for grids uses a pro-active approach, selecting resources by computing a scheduling indicator. However, that study did not consider failure of a node while a task is being executed. In our study, we therefore add a post-active fault tolerant approach to the existing work, i.e., migrating a task to another node in the event of node failure during execution. We constructed a hybrid fault tolerant grid scheduler using the GridSim 4.2 toolkit and demonstrate that it achieves a better success rate than the existing fault tolerant scheduler.

Ram Mohan Rao Kovvur, S. Ramachandram
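
The hybrid idea can be illustrated with a small Python sketch: a pro-active ranking by a scheduling indicator plus post-active migration on failure. The indicator formula and node attributes below are hypothetical placeholders, not the paper's.

```python
import random

def scheduling_indicator(node):
    """Pro-active part: rank a node by a simple indicator combining its
    observed failure rate and current load (illustrative weighting only)."""
    return node["failure_rate"] * 0.7 + node["load"] * 0.3

def execute(task, node):
    # Stand-in for GridSim execution: the node survives the task with
    # probability (1 - failure_rate).
    return random.random() > node["failure_rate"]

def schedule(task, nodes):
    """Hybrid strategy: pick the best node pro-actively; if it fails during
    execution, migrate the task to the next best node (post-active)."""
    for node in sorted(nodes, key=scheduling_indicator):
        if execute(task, node):
            return node["name"]
    raise RuntimeError("all nodes failed while executing " + task)

nodes = [{"name": "n1", "failure_rate": 0.30, "load": 0.5},
         {"name": "n2", "failure_rate": 0.10, "load": 0.8},
         {"name": "n3", "failure_rate": 0.05, "load": 0.2}]
print(schedule("task-42", nodes))
```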

Image Transmission in OSTBC MIMO-PLC Over Nakagami-m Distributed Background Noise

In this paper, MATLAB is used to simulate a MIMO-PLC system model using orthogonal space-time block codes (OSTBC) in the presence of additive background noise. The additive background noise is modeled using the Nakagami-m distribution. The MIMO-PLC system model is tested with a grayscale baboon image as input, and the BER curve is plotted for the system. Furthermore, a closed-form expression for the average BER of MIMO-PLC is derived theoretically using a moment generating function (MGF) based approach. The MATLAB-simulated BER results are then compared with the theoretical results obtained from the MGF-based approach. Using the MGF-based approach, a mathematical method is also presented to calculate the outage probability of MIMO-PLC for Nakagami-m distributed background noise.

Ruchi Negi, Kanchan Sharma
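
A minimal Monte-Carlo sketch of the noise model is given below, assuming BPSK over a single branch with additive noise whose amplitude follows a Nakagami-m distribution; the full OSTBC MIMO-PLC chain and the MGF-based closed form are not reproduced.

```python
import numpy as np

def nakagami(m, omega, size, rng):
    """Nakagami-m samples: the square root of a Gamma(m, omega/m) variate."""
    return np.sqrt(rng.gamma(shape=m, scale=omega / m, size=size))

def ber_bpsk_nakagami_noise(m=1.5, snr_db=10.0, n_bits=200_000, seed=0):
    """Monte-Carlo BER of BPSK corrupted by additive noise whose amplitude is
    Nakagami-m distributed (single-branch sketch, not the 2x2 OSTBC chain)."""
    rng = np.random.default_rng(seed)
    bits = rng.integers(0, 2, n_bits)
    symbols = 2 * bits - 1                          # BPSK mapping {0,1} -> {-1,+1}
    noise_power = 10 ** (-snr_db / 10)              # unit-energy symbols assumed
    amplitude = nakagami(m, noise_power, n_bits, rng)
    noise = amplitude * rng.choice([-1, 1], n_bits) # random-sign noise samples
    decisions = (symbols + noise) > 0
    return np.mean(decisions != bits)

print(ber_bpsk_nakagami_noise())
```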

Optimized Relay Node Based Energy-Efficient MAC Protocol for a Wireless Sensor Network

Bounded delay and energy efficiency are among the most important quality of service (QoS) requirements for wireless sensor networks (WSNs). This paper presents a new Improved Energy-Efficient Distributed Receiver Based Cooperative Medium Access Control Protocol (Improved E2DRCMAC) for WSNs, which employs cooperative communication for delay-sensitive and energy-efficient applications. We first identify the limitations of cooperative communication: the additional processing and reception steps at each node increase both the energy consumed and the delay with which packets reach the destination. To address this issue, a multi-hop cooperative communication scheme is proposed that uses a genetic algorithm (GA) to optimize relay nodes for delay minimization and energy savings. The Improved E2DRCMAC outperforms E2DRCMAC with a 70 % decrease in energy consumption, 45 % lower end-to-end delay at higher data rates, and a packet delivery ratio approaching 100 % as node density increases.

Kriti Ohri, C. Rama Krishna
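
The relay-optimization idea can be sketched with a toy genetic algorithm over hypothetical per-relay delay and energy costs; the fitness weights, operators, and parameters below are illustrative, not those of the protocol.

```python
import random

# Hypothetical per-candidate-relay costs; in practice these would come from
# measured link delays and residual node energies, not from this toy table.
candidates = [{"delay": random.uniform(1, 10), "energy": random.uniform(1, 5)}
              for _ in range(20)]

def fitness(path):
    """Lower is better: weighted sum of total delay and total energy spent."""
    return sum(candidates[i]["delay"] + 0.5 * candidates[i]["energy"] for i in path)

def ga_relay_selection(path_len=4, pop_size=30, generations=50):
    pop = [random.sample(range(len(candidates)), path_len) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]                  # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randint(1, path_len - 1)        # one-point crossover
            child = (a[:cut] + [g for g in b if g not in a[:cut]])[:path_len]
            if random.random() < 0.1:                    # mutation: swap in a new relay
                child[random.randrange(path_len)] = random.randrange(len(candidates))
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

print(ga_relay_selection())
```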

Reliable and Prioritized Data Transmission Protocol for Wireless Sensor Networks

Wireless sensor networks (WSNs) have the potential to solve problems automatically, with little or no human intervention in most cases. Time-constrained events (TCE) and non-time-constrained events (NTCE) are the two broad categories in heterogeneous WSNs, and meeting the QoS parameters of TCE and NTCE applications is a crucial job in resource-constrained, multi-event WSNs. The queue scheduler performs the set of actions necessary to achieve the desired QoS parameters for each distinct event separately in a sensor network. This study mainly focuses on priority-based reliability, taking into account the different priority levels of traffic. Decisions are taken at the intermediate nodes by managing queues, with a separate queue maintained for each priority level. Queue management is the heart of the proposed priority architecture for achieving quality of service: with proper queuing, the scheduler takes the right decision at the right time to attain the required reliability and accomplishes the goals of every application by handling its priority and deadlines.

Sambhaji Sarode, Jagdish Bakal, L. G. Malik
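
A minimal sketch of the per-priority queuing idea follows, assuming two traffic classes (TCE and NTCE) with earliest-deadline-first service within a class; the actual queue management rules of the proposed architecture are not reproduced.

```python
import heapq, itertools, time

class PriorityTrafficScheduler:
    """One queue per traffic class: time-constrained events (TCE) are always
    served before non-time-constrained events (NTCE); within a class, the
    packet with the earliest deadline goes first."""
    TCE, NTCE = 0, 1

    def __init__(self):
        self.queues = {self.TCE: [], self.NTCE: []}
        self.counter = itertools.count()          # tie-breaker for equal deadlines

    def enqueue(self, packet, traffic_class, deadline):
        heapq.heappush(self.queues[traffic_class],
                       (deadline, next(self.counter), packet))

    def dequeue(self):
        for cls in (self.TCE, self.NTCE):         # strict priority between classes
            if self.queues[cls]:
                deadline, _, packet = heapq.heappop(self.queues[cls])
                return packet, deadline
        return None

sched = PriorityTrafficScheduler()
sched.enqueue("temperature-report", PriorityTrafficScheduler.NTCE, time.time() + 60)
sched.enqueue("fire-alarm", PriorityTrafficScheduler.TCE, time.time() + 2)
print(sched.dequeue())   # the fire alarm is served first
```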

Caching: QoS Enabled Metadata Processing Scheme for Data Deduplication

The growth of digital data demands smart storage techniques that provide quick storage and faster recovery of the stored data. This voluminous data needs intelligent data science tools and methods to cope with the space required to store it efficiently. Deduplication is one of the emerging and widely used techniques for reducing data size before transferring it over the network, thus reducing the network bandwidth required. The main aim of deduplication is to reduce storage space by storing only unique data. In this paper, we review existing deduplication solutions and propose an enhanced cache mechanism for efficient deduplication over a distributed system.

Jyoti Malhotra, Jagdish Bakal, L. G. Malik
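
The core deduplication-plus-cache idea can be sketched as content-addressed chunk storage with an LRU fingerprint cache; chunk sizing, the distributed layer, and the paper's QoS-enabled metadata handling are omitted.

```python
import hashlib
from collections import OrderedDict

class DedupStore:
    """Content-addressed chunk store with a small in-memory LRU cache of
    recently seen fingerprints, so hot duplicates avoid an index lookup."""
    def __init__(self, cache_size=1024):
        self.index = {}                         # fingerprint -> chunk (on disk in practice)
        self.cache = OrderedDict()              # LRU of recently used fingerprints
        self.cache_size = cache_size

    def put(self, chunk: bytes) -> str:
        fp = hashlib.sha256(chunk).hexdigest()
        if fp in self.cache:                    # cache hit: duplicate, nothing stored
            self.cache.move_to_end(fp)
            return fp
        if fp not in self.index:                # unique chunk: store it once
            self.index[fp] = chunk
        self.cache[fp] = True                   # remember the fingerprint
        if len(self.cache) > self.cache_size:
            self.cache.popitem(last=False)      # evict the least recently used entry
        return fp

store = DedupStore()
a = store.put(b"same data block")
b = store.put(b"same data block")               # second copy is deduplicated
print(a == b, len(store.index))                 # True 1
```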

Design and Analysis of Grid Connected Wind/PV Hybrid System

In this article, the design and analysis of a grid-connected wind/PV hybrid system is presented. The wind and photovoltaic sources are integrated at the DC bus, and its voltage is stepped up to the desired value using a DC-DC boost converter. A proportional-integral controller is used to control the output power produced from the wind and photovoltaic resources to achieve the desired output response. A variable-speed control method is implemented for the permanent magnet synchronous wind generator, which is capable of extracting maximum power even when operated below the rated wind speed. The power produced from the wind/PV hybrid system is supplied to the load, and any excess generation is supplied to the grid. The proposed hybrid system is modeled in MATLAB/Simulink. The performance of the system is evaluated by considering not only changes in wind speed and solar irradiance but also load power variations. The simulation results show that the proposed hybrid wind/PV system is a viable option for microgrid applications.

K. Shivarama Krishna, K. Sathish Kumar, J. Belwin Edward, M. Balachandar
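
As a rough sketch of the control loop, the snippet below shows a discrete PI controller with anti-windup regulating a boost-converter duty cycle toward a DC-bus voltage reference; the gains and limits are illustrative placeholders, not those tuned in the Simulink model.

```python
class PIController:
    """Discrete PI controller of the kind used to drive the boost-converter
    duty cycle toward a DC-bus voltage reference (gains are illustrative)."""
    def __init__(self, kp=0.05, ki=2.0, dt=1e-4, out_min=0.0, out_max=0.95):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def update(self, reference, measurement):
        error = reference - measurement
        self.integral += error * self.dt
        duty = self.kp * error + self.ki * self.integral
        # clamp the duty cycle and stop integrating when saturated (anti-windup)
        if duty > self.out_max:
            duty, self.integral = self.out_max, self.integral - error * self.dt
        elif duty < self.out_min:
            duty, self.integral = self.out_min, self.integral - error * self.dt
        return duty

pi = PIController()
duty = pi.update(reference=400.0, measurement=362.5)   # regulate DC bus toward 400 V
print(round(duty, 3))
```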

Requirements Prioritization: Survey and Analysis

A survey on requirements prioritization was designed and conducted for projects/products in different domains across organizations, to identify the prioritization practices followed in software development organizations and to understand how requirements prioritization affects software deliveries and resources. The results are analyzed to identify areas that need attention in terms of requirements prioritization. The survey and analysis make clear the need for requirements prioritization to achieve stable and smooth release cycles. A multi-level framework based on the concepts of ABC analysis is suggested as a prioritization method for predictable and stable releases.

Sita Devulapalli, Akhil Khare, O. R. S. Rao
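
A minimal sketch of ABC-style classification is given below, assuming a weighted value score and fixed cut-offs; the multi-level framework of the paper is not reproduced, and the weights are hypothetical.

```python
def abc_classify(requirements, a_cut=0.2, b_cut=0.5):
    """Rank requirements by a weighted value score and split them into
    A (top ~20 %), B (next ~30 %) and C (the rest), in the spirit of ABC
    analysis; the weights and cut-offs here are illustrative only."""
    ranked = sorted(requirements,
                    key=lambda r: 0.6 * r["business_value"] + 0.4 * r["urgency"],
                    reverse=True)
    n = len(ranked)
    return {"A": ranked[:max(1, int(a_cut * n))],
            "B": ranked[max(1, int(a_cut * n)):int(b_cut * n)],
            "C": ranked[int(b_cut * n):]}

reqs = [{"id": f"R{i}", "business_value": v, "urgency": u}
        for i, (v, u) in enumerate([(9, 8), (4, 6), (7, 3), (2, 2), (8, 9)])]
for cls, items in abc_classify(reqs).items():
    print(cls, [r["id"] for r in items])
```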

GPU Acceleration of MoM for Computation of Performance Parameters of Strip Dipole Antenna

This paper presents the use of general-purpose graphics processing unit (GPU) computing to accelerate the impedance matrix assembly phase of a popular computational electromagnetics technique, the method of moments (MoM). MoM is a widely used computational electromagnetic (CEM) technique for solving problems governed by an electric field integral equation (EFIE), and is ideally suited for radiation and scattering problems. To validate accuracy, the radiation analysis of a strip dipole antenna using the standard Rao-Wilton-Glisson (RWG) basis and weighting functions, which offer a good trade-off between accuracy and complexity, is considered for both the serial and parallel implementations. Performance parameters of the strip dipole antenna are computed in the post-processing part of the simulation.

Hemlata Soni, Pushtivardhan Soni, Pradeep Chhawcharia
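
The property exploited by GPU acceleration, namely that each impedance-matrix entry depends only on its own source/observation segment pair, can be sketched in a few lines. The toy kernel below uses a simplified exp(-jkR)/R interaction in NumPy rather than the paper's RWG basis-function integrals or a CUDA implementation.

```python
import numpy as np

def assemble_impedance_matrix(segment_centers, k, self_term=1e-3):
    """Toy stand-in for MoM impedance-matrix assembly: every entry Z[m, n]
    depends only on the (m, n) segment pair, so each entry can be computed
    independently, which is exactly what lets one GPU thread (or one
    vectorized operation) handle one matrix element."""
    x = segment_centers
    R = np.abs(x[:, None] - x[None, :])          # all pairwise distances at once
    R[R < self_term] = self_term                 # crude regularization of self terms
    return np.exp(-1j * k * R) / R               # simplified interaction kernel

centers = np.linspace(0.0, 0.5, 64)              # 64 segments on a 0.5 m strip dipole
Z = assemble_impedance_matrix(centers, k=2 * np.pi / 1.0)   # wavelength of 1 m assumed
print(Z.shape, Z.dtype)                          # dense 64x64 complex matrix
```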

Energy Saving Model Techniques in Wireless Ad Hoc Network

A wireless ad hoc network is a collection of mobile nodes that form a multi-hop autonomous system with no fixed infrastructure. The nodes use the services of other nodes in the network to transmit packets to the destination nodes. Mobile devices are battery operated, and battery power is a limited resource, so energy conservation is a critical issue in such networks. Many approaches have been suggested for energy conservation. We propose two energy-efficient techniques to reduce energy consumption at the protocol level: in the first, energy is conserved by reducing the number of route request messages, while in the second, energy is conserved through power control mechanisms.

Ajaykumar Tarunkumar Shah, Shrikant H. Patel

Weighted-PCA Based Multimodal Medical Image Fusion in Contourlet Domain

Multimodal medical image fusion is used to fuse the complementary features from diverse modalities and discard superfluous information. The fusion of structural medical images, computed tomography (CT) and magnetic resonance imaging (MRI) scans, aims to deliver a comprehensive fused image containing the anatomical detail necessary to improve medical diagnosis. This paper presents a weighted principal component analysis (PCA) based approach for multimodal fusion in the Contourlet domain. The Contourlet transform is chosen for its ability to capture visual geometrical structures and anisotropy. Weighted PCA assists in reducing the dimensionality of the source images and helps in better selection of principal components. Maximum and minimum fusion rules are then applied to fuse the decomposed coefficients. Image quality assessment (IQA) is carried out quantitatively using standard fusion metrics to assess the fused image both in terms of information content and quality of reconstruction. Simulation results with the proposed fusion method show an effective fusion response in comparison with other state-of-the-art approaches.

Aisha Moin, Vikrant Bhateja, Anuja Srivastava
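
The classical PCA fusion rule underlying weighted-PCA approaches can be sketched as follows, applied here to raw arrays rather than Contourlet subbands and without the paper's additional weighting scheme.

```python
import numpy as np

def pca_fusion_weights(coeff_a, coeff_b):
    """Classical PCA fusion rule: the principal eigenvector of the 2x2
    covariance matrix of the two coefficient sets gives the mixing weights.
    (Illustrative pixel-domain version; the paper applies weighted PCA to
    Contourlet subbands.)"""
    data = np.vstack([coeff_a.ravel(), coeff_b.ravel()])
    cov = np.cov(data)
    eigvals, eigvecs = np.linalg.eigh(cov)
    principal = np.abs(eigvecs[:, np.argmax(eigvals)])
    return principal / principal.sum()           # normalized non-negative weights

rng = np.random.default_rng(1)
ct_like = rng.normal(size=(64, 64))              # stand-ins for CT / MRI subbands
mri_like = 0.7 * ct_like + 0.3 * rng.normal(size=(64, 64))
w_a, w_b = pca_fusion_weights(ct_like, mri_like)
fused = w_a * ct_like + w_b * mri_like
print(round(w_a, 3), round(w_b, 3), fused.shape)
```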

Design Analysis of an n-Bit LFSR-Based Generic Stream Cipher and Its Implementation Discussion on Hardware and Software Platforms

Pseudorandom numbers are at the core of any network security application, and the security of satellite and cellular phones depends heavily on the pseudorandom numbers generated. In the network security domain, they are used in particular for key generation, re-keying, authentication, smartphone security, etc. Current research shows that satellite-based telephony, which relies on the GMR-1 and GMR-2 algorithms for secret key generation, is prone to attacks, and the A5/1 algorithm used in GSM is also cryptographically weak. Hence the generation of strong sets of pseudorandom numbers is needed. These numbers are produced by a pseudorandom number generator (PRNG), referred to in general terms as a cipher; if the PRNG is flawed or produces predictable sequences, the entire application becomes prone to attacks. We therefore propose a generic framework for generating strong sets of pseudorandom numbers. The proposal aims to build a general framework and a unified model for enhanced security, specifically for LFSR-based stream ciphers, and the proposed generic model uses results from the above case study. For hardware deployment, the Spartan-6 FPGA toolkit is used, and for the software part, the parallel computing platform CUDA is used. The model is aimed at developing a framework that generates strong sets of pseudorandom numbers for use in various network security, satellite, and cellular applications.

Trishla Shah, Darshana Upadhyay
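
A minimal sketch of the LFSR building block is shown below, with a register length and tap set taken from a well-known maximal-length configuration rather than the paper's design.

```python
def lfsr_keystream(seed_bits, taps, n_bits):
    """Fibonacci LFSR: each step outputs the last register bit and shifts in
    the XOR of the tapped stages. Register length and taps are parameters,
    in the spirit of an n-bit generic construction."""
    state = list(seed_bits)
    out = []
    for _ in range(n_bits):
        out.append(state[-1])                      # output bit
        feedback = 0
        for t in taps:                             # XOR of tapped stages
            feedback ^= state[t]
        state = [feedback] + state[:-1]            # shift right, insert feedback
    return out

# 16-bit register; taps correspond to a standard maximal-length configuration
# (primitive polynomial x^16 + x^14 + x^13 + x^11 + 1), chosen for illustration.
seed = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
stream = lfsr_keystream(seed, taps=[15, 13, 12, 10], n_bits=32)
print("".join(map(str, stream)))
```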

An Effective Strategy for Fingerprint Recognition Based on pRAM’s Neural Nature with Data Input Mappings

At present, information systems are highly susceptible to unauthorized human access, and various techniques have been introduced to secure them. One of the most popular techniques for verifying a user seeking access to an information system is fingerprint recognition, whose popularity stems from the invariance of fingerprints over age and time. A novel and accurate fingerprint identification technique based on the neural nature of the pRAM is presented in this paper. The proposed method uses data mappings and reinforcement learning to maximize the efficiency and accuracy of identifying scanned user prints. Since the world is moving into the era of the “Internet of Things (IoT),” such biometric techniques are integral to future information security frameworks. The pRAM-based network is a recently introduced pattern-recognition technique that differs from classical neural network models in that a pRAM network trains in relatively little time and can be implemented with a minimal hardware setup. Here, the permuted mapping is derived using the proposed data-based input mapping with a bit-plane encoding scheme to cover multi-gray-level images. Furthermore, binarization is done using eight binary planes, and a high-resolution image is processed by dividing it into sub-images so that it can be handled by several networks in parallel. The recognition procedure has been applied to realistic fingerprint scans/images, and the results show significant improvements, demonstrating that a pRAM structure with the permuted mapping scheme can provide highly reliable identification.

Saleh A. Alghamdi
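
The bit-plane encoding and sub-image tiling steps mentioned in the abstract can be sketched as follows; the pRAM network itself and the permuted mapping are not reproduced.

```python
import numpy as np

def bit_planes(gray_image):
    """Decompose an 8-bit gray-level image into its eight binary planes,
    as in the bit-plane encoding step described in the abstract."""
    return [((gray_image >> b) & 1).astype(np.uint8) for b in range(8)]

def sub_images(image, block=32):
    """Tile a high-resolution image into sub-images so that separate
    (pRAM) networks could process the tiles in parallel."""
    h, w = image.shape
    return [image[r:r + block, c:c + block]
            for r in range(0, h, block) for c in range(0, w, block)]

img = np.random.default_rng(0).integers(0, 256, (128, 128)).astype(np.uint8)
planes = bit_planes(img)
tiles = sub_images(planes[7])                   # tiles of the most significant plane
print(len(planes), len(tiles), tiles[0].shape)  # 8 planes, 16 tiles of 32x32
```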

Various e-Governance Applications, Computing Architecture and Implementation Barriers

To speed up the Digital India mission, e-Governance applications and policies must be effectively designed and implemented. In this paper, various applications that can be implemented using e-Governance to provide efficient online services to citizens have been proposed. We have also proposed the infrastructural requirements for implementing these applications, and various barriers to their implementation have been discussed.

Anand More, Priyesh Kanungo

Fuzzy Logic-Based Expert System for Assessment of Bank Loan Applications in Namibia

In the present scenario, the majority of people in Namibia do not have enough capital to build houses or start up businesses, so they turn to banks for loans. In today's era, bank loans are considered one of the highest market-risk phenomena, and unsecured loans given without any collateral are a major stumbling block, as seen in the economic recession that occurred across the world. The aim of this review paper is to present an exhaustive review and introduce a fuzzy logic-based automated artificial system, replacing the existing manual system, which will help in decision-making for the disbursement of bank loans and give the reason for the inferred decision.

Dharm Singh Jat, Axel Jerome Xoagub
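
A toy fuzzy inference sketch is given below to illustrate the general mechanism, using made-up membership functions and rules on a 0-10 scale; it is not the paper's knowledge base.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def loan_score(income, credit_history):
    """Tiny Mamdani-style sketch: two inputs on a 0-10 scale, two rules,
    weighted-average defuzzification. Membership shapes and rules are
    illustrative stand-ins only."""
    income_high = tri(income, 4, 10, 16)          # "income is high"
    credit_good = tri(credit_history, 4, 10, 16)  # "credit history is good"
    income_low = tri(income, -6, 0, 6)            # "income is low"
    r1 = min(income_high, credit_good)            # IF income high AND credit good THEN approve (1.0)
    r2 = income_low                               # IF income low THEN reject (0.0)
    if r1 + r2 == 0:
        return 0.5                                # no rule fires: undecided
    return (r1 * 1.0 + r2 * 0.0) / (r1 + r2)      # defuzzified approval degree

print(loan_score(income=8, credit_history=7))     # high approval degree
print(loan_score(income=2, credit_history=9))     # mostly rejected
```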

Backmatter
