
About this book

The present volume is a compilation of research in computation, communication, vision sciences, device design, fabrication, emerging materials, and related process design. It is derived from selected manuscripts submitted to the 2014 National Workshop on Advances in Communication and Computing (WACC 2014), Assam Engineering College, Guwahati, Assam, India, which is emerging as a premier platform for the discussion and dissemination of know-how in this part of the world. The papers included in the volume reflect recent thrusts in computation, communications, and emerging technologies. Recent advances in ZnO nanostructures for alternative energy generation provide insights into an area that holds promise for the energy sector, including conservation and green technology. Other scholarly contributions focus on malware detection and related issues. Several contributions address biomedical topics, including cancer detection using active learning and the application of clinical information in multichannel ECG (MECG) using sample and channel convolution matrices for better diagnostic decisions. Further works cover DCT-domain linear regression of ECG signals, SVD analysis of reduced 3-lead ECG data, the quantification of diagnostic information in ECG signals, a compressed sensing approach with application to MRI, and learning-aided image denoising for medical applications. Still other works deal with audio fingerprinting for multilingual Indian song retrieval, a semi-automatic approach to the segmentation and marking of pitch contours for prosodic analysis, semi-automatic syllable labelling for the Assamese language, stressed speech recognition, handwriting recognition in Assamese script, speaker verification under session variability, and block matching for motion estimation.
The primary objective of the present volume is to provide a document for the dissemination of, and discussion on, emerging areas of research in computation and communication, as aimed at by WACC 2014. We hope the volume will serve as a reference for researchers in these areas.

Table of contents

Frontmatter

Invited Papers

Frontmatter

Chapter 1. A Discrete Event System Based Approach for Obfuscated Malware Detection

With the growing use and popularity of the Internet, security threats such as viruses and worms are also rapidly increasing. Many antivirus programs have been created to detect and prevent such threats, but the signature-matching approach they use to detect malware can be easily thwarted by code obfuscation techniques. In this paper, we propose a discrete event system (DES)-based approach to detect obfuscated malware in a system, taking Bagle.A as our test virus. Commonly used obfuscation techniques were applied to Bagle.A. We built DES models, with system calls as events, for a process under both attack and normal conditions. Based on the system calls invoked by a process, our detector determines its maliciousness by comparing the process against both models.
Chinmaya K. Patanaik, Ferdous A. Barbhuiya, Santosh Biswas, Sukumar Nandi
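The comparison of a process trace against normal and attack models can be sketched as a pair of toy finite automata over system-call events. The states, call names, and transitions below are illustrative assumptions, not the DES models from the chapter:

```python
# Sketch: two finite automata over system-call events, one modeling normal
# behavior and one modeling an attack pattern; a trace is classified by
# which model fully explains it. All names here are hypothetical.

NORMAL = {                      # state -> {event: next_state}
    "s0": {"open": "s1"},
    "s1": {"read": "s2"},
    "s2": {"close": "s0"},
}
ATTACK = {                      # e.g. a self-copying pattern
    "s0": {"open": "s1"},
    "s1": {"read": "s2"},
    "s2": {"create": "s3"},     # writes a copy of itself
    "s3": {"write": "s0"},
}

def accepts(model, trace, start="s0"):
    """Return True if every event in the trace has a defined transition."""
    state = start
    for event in trace:
        nxt = model.get(state, {}).get(event)
        if nxt is None:
            return False
        state = nxt
    return True

def classify(trace):
    """Label a system-call trace by comparing it against both models."""
    if accepts(ATTACK, trace):
        return "malicious"
    if accepts(NORMAL, trace):
        return "benign"
    return "unknown"
```

A real detector would derive both models from observed traces of the process under attack and normal conditions; the automaton comparison itself stays the same.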

Chapter 2. Quantification of Diagnostic Information from Electrocardiogram Signal: A Review

The electrocardiogram (ECG) contains information about the contraction and relaxation of the heart chambers. This diagnostic information changes with various cardiovascular diseases, and a cardiologist uses it for accurate detection of life-threatening cardiac disorders. ECG signals are subjected to a number of processing schemes for computer-aided detection and localization of cardiovascular diseases, broadly categorized as filtering, synthesis, compression, and transmission. Quantifying the diagnostic information in an ECG signal efficiently remains a challenging task in signal processing. This paper presents a review of state-of-the-art diagnostic information extraction approaches and their applications in ECG signal processing schemes such as quality assessment and cardiac disease detection. A new diagnostic measure for multilead ECG (MECG) is then proposed. The proposed measure (MSD) is defined as the difference between the multivariate sample entropy values of the original and processed MECG signals. The MSD measure is evaluated within an MECG compression framework, with experiments conducted over both normal and pathological MECG from the PTB database. The results demonstrate that the proposed MSD measure is effective in quantifying diagnostic information in MECG. The MSD measure is also compared with other measures such as WEDD, PRD, and RMSE.
S. Dandapat, L. N. Sharma, R. K. Tripathy
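As a concrete reference point for the baseline measures the chapter compares against, here is a minimal sketch of the standard percentage root mean square difference (PRD); the sample values used below are illustrative only:

```python
import math

def prd(original, processed):
    """Percentage root-mean-square difference between two signals:
    100 * sqrt(sum((x - y)^2) / sum(x^2))."""
    num = sum((x - y) ** 2 for x, y in zip(original, processed))
    den = sum(x ** 2 for x in original)
    return 100.0 * math.sqrt(num / den)
```

A PRD of 0 means perfect reconstruction; larger values indicate more distortion relative to the energy of the original signal.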

Chapter 3. ZnO Nanostructures for Alternate Energy Generation

Extensive use of fossil fuels in industry and automobiles has severely polluted the environment, adversely affecting the ecosystem. Fossil fuel reserves are also dwindling, creating serious concern about energy generation. With rapid advances in nanotechnology, researchers are exploiting the unique properties of nanomaterials to develop environmentally friendly energy solutions. Abundant, freely available solar energy is undoubtedly the least utilized form of natural energy, and tapping it efficiently could resolve the energy crisis the world is currently going through. Solar cells developed using nanomaterials, though still in their infancy, will be able to harness solar energy quite efficiently and, most importantly, very cheaply. Piezoelectric energy resulting from the physical deformation of near-elastic crystals also shows promise as an energy source for self-powering low-consumption devices. This article discusses the possibility of using nanostructures of a very promising material, zinc oxide (ZnO), for energy generation. ZnO is a wide-bandgap semiconductor (3.37 eV), and the absence of central symmetry in its crystal endows it with piezoelectric properties. The material has been successfully used in energy generation and tapping schemes such as solar cells, hydrogen generators, and piezogenerators, among others.
Sunandan Baruah

Accepted Papers

Frontmatter

Chapter 4. Wavelet and Learning Based Image Compression Systems

Image compression is a critical element in storage, retrieval, and transmission applications. The list of traditional approaches to image compression has been expanded by wavelet- and learning-based systems. Here, we report a few techniques based on the discrete wavelet transform (DWT) and artificial neural networks (ANNs) in feedforward and unsupervised forms. The experiments are repeated with images corrupted by salt-and-pepper noise, and the outcomes are compared. The quality of the image compression systems is assessed using the mean square error (MSE), peak signal-to-noise ratio (PSNR), and compression ratio (CR).
Mayuri Kalita, Kandarpa Kumar Sarma
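The DWT side of such a system can be illustrated with an assumed minimal sketch: one level of the Haar transform with thresholding of small detail coefficients, which is the basic lossy-compression recipe (the chapter combines multi-level DWT with ANN stages):

```python
def haar_step(x):
    """One level of the Haar DWT: pairwise averages and differences."""
    avg = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    dif = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return avg, dif

def inverse_haar_step(avg, dif):
    """Exact inverse of haar_step."""
    out = []
    for a, d in zip(avg, dif):
        out += [a + d, a - d]
    return out

def compress(x, threshold):
    """Zero out small detail coefficients; the kept averages and the
    sparse details are what would be entropy-coded in a real system."""
    avg, dif = haar_step(x)
    dif = [d if abs(d) >= threshold else 0.0 for d in dif]
    return avg, dif
```

For the row [10, 12, 14, 15] with threshold 1.5, both details are dropped and the reconstruction [11, 11, 14.5, 14.5] stays within 1 unit of the original, showing the energy-compaction idea on a toy scale.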

Chapter 5. Quantifying Clinical Information in MECG Using Sample and Channel Convolution Matrices

In this paper, a novel distortion measure is presented for quantifying the loss of clinical information in multichannel electrocardiogram (MECG) signals. The proposed measure (SCPRD) is defined as the sum of the percentage root mean square differences between the magnitudes of the convolution responses of the original and processed MECG signals. The convolution operation is performed using the proposed sample and channel convolution matrices. The SCPRD measure is compared with the average wavelet energy diagnostic distortion (AWEDD) and multichannel PRD (MPRD) measures over different processing schemes, such as multiscale principal component analysis (MSPCA) and multichannel empirical mode decomposition (MEMD)-based MECG compression and filtering. The normal and pathological MECG signals from the Physikalisch-Technische Bundesanstalt (PTB) database are used in this work. The results show that the proposed diagnostic distortion measure is effective in quantifying the loss of clinical information in MECG signals.
R. K. Tripathy, S. Dandapat

Chapter 6. Application of SHAZAM-Based Audio Fingerprinting for Multilingual Indian Song Retrieval

Extracting film songs from a multilingual database based on a query clip is a challenging task. The challenge stems from the subtle variations in pitch and rhythm that accompany changes in the singer's voice, style, orchestration, language, and even gender. The fingerprinting algorithm must be designed to capture the base tune in the composition and not the adaptations (variations that include lyrical modifications and changes in the singer's voice). The SHAZAM system was developed for identifying cover audio pieces among millions of Western songs stored in a database, with the objective of tapping into the melodic construct of the song (devoid of other forms of embellishment). When applied to the Indian database, the system was found to be less effective, owing to subtle changes in both rhythm and melody caused mainly by the semiclassical nature of Indian film songs. The retrieval accuracy was found to be 85 %. Potential reasons for the failure of the SHAZAM system are discussed with examples.
S. Sri Ranjani, V. Abdulkareem, K. Karthik, P. K. Bora
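The SHAZAM-style combinatorial hashing that the chapter builds on can be sketched as follows. The peak lists are hypothetical stand-ins for spectrogram constellation points (time, frequency-bin pairs); a real system extracts them from a short-time Fourier transform:

```python
from collections import Counter

def fingerprint(peaks, fan_out=3):
    """Combinatorial hashing over spectrogram peaks (t, f): each anchor
    peak is paired with the next few peaks to form (f_anchor, f_target,
    dt) hashes, each stamped with the anchor time."""
    peaks = sorted(peaks)                      # sort by time
    hashes = []
    for i, (t1, f1) in enumerate(peaks):
        for t2, f2 in peaks[i + 1 : i + 1 + fan_out]:
            hashes.append(((f1, f2, t2 - t1), t1))
    return hashes

def match_offset(db_hashes, query_hashes):
    """Vote over time offsets between matching hashes; a dominant offset
    indicates the query clip occurs in the database track there."""
    index = {}
    for h, t in db_hashes:
        index.setdefault(h, []).append(t)
    votes = Counter(t_db - t_q
                    for h, t_q in query_hashes
                    for t_db in index.get(h, []))
    return votes.most_common(1)[0] if votes else None
```

Because the hashes encode frequency pairs and time gaps rather than melody, small rhythm and pitch shifts (common in Indian semiclassical renditions) break them, which is consistent with the reduced accuracy the chapter reports.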

Chapter 7. Multilevel-DWT-Based Image Denoising Using Adaptive Neuro-Fuzzy Inference System

Images corrupted by noise require enhancement for subsequent processing. Traditional denoising approaches rely on the spatial, statistical, and spectral properties of an image, which at times fail to capture fine details. The discrete wavelet transform (DWT) is a commonly adopted method in image processing applications, while fuzzy-based systems are suitable for modeling uncertainty. In the proposed work, we present a hybrid approach combining multilevel DWT and an adaptive neuro-fuzzy inference system (ANFIS) to capture the benefits of two different domains in a single framework. We apply our algorithm to denoise images corrupted by multiplicative noise such as speckle noise. The results show that the proposed method is effective for image denoising.
Torali Saikia, Kandarpa Kumar Sarma

Chapter 8. Active Learning Using Fuzzy k-NN for Cancer Classification from Microarray Gene Expression Data

Classification of cancer from microarray gene expression data is an important area of research in bioinformatics and biomedical engineering, as large amounts of microarray gene expression data are available but the cost of correctly labeling them prohibits their use. In such cases, active learning may be used. In this context, we propose active learning using fuzzy k-nearest neighbor (ALFKNN) for cancer classification. The active learning technique selects the most confusing or informative microarray gene expression patterns from the unlabeled genes, so that labeling the confusing data maximizes classification accuracy. The selected most confusing/informative genes are manually labeled by experts. The proposed method is evaluated on a number of microarray gene expression cancer datasets. Experimental results suggest that, in comparison with traditional supervised k-nearest neighbor (k-NN) and fuzzy k-nearest neighbor (fuzzy k-NN) methods, the proposed method (ALFKNN) provides more accurate cancer prediction from microarray gene expression data.
Anindya Halder, Samrat Dey, Ansuman Kumar
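The two ingredients above, a fuzzy k-NN membership vote and a confusion-based selection rule, can be sketched as follows. This is a generic Keller-style fuzzy k-NN with crisp training labels and a smallest-margin query rule; the data, parameters, and helper names are illustrative assumptions, not the chapter's exact formulation:

```python
import math

def fuzzy_knn(train, query, k=3, m=2.0):
    """Class memberships of a query point as distance-weighted votes of
    its k nearest labeled neighbors. train: list of (vector, label).
    Returns {label: membership}, memberships summing to 1."""
    dists = sorted((math.dist(x, query), label) for x, label in train)[:k]
    weights, total = {}, 0.0
    for d, label in dists:
        w = 1.0 / max(d, 1e-12) ** (2.0 / (m - 1.0))
        weights[label] = weights.get(label, 0.0) + w
        total += w
    return {label: w / total for label, w in weights.items()}

def most_confusing(unlabeled, train, k=3):
    """Active-learning pick: the unlabeled point with the smallest margin
    between its top two class memberships (the most ambiguous sample)."""
    def margin(x):
        mem = sorted(fuzzy_knn(train, x, k).values(), reverse=True)
        return mem[0] - (mem[1] if len(mem) > 1 else 0.0)
    return min(unlabeled, key=margin)
```

The point returned by `most_confusing` is the one an expert would be asked to label next, which is the loop the abstract describes.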

Chapter 9. A Practical Under-Sampling Pattern for Compressed Sensing MRI

Typically, magnetic resonance (MR) images are stored in k-space, where the higher-energy samples, i.e., the samples with maximum information, are concentrated near the center, whereas relatively lower-energy samples lie near the outer periphery. Recently, variable-density (VD) random under-sampling patterns have become increasingly popular and a topic of active research in compressed sensing (CS)-based MR image reconstruction. In this paper, we demonstrate a simple approach to designing an efficient k-space under-sampling pattern, namely the VD Poisson disk (VD-PD), for sampling MR images in k-space, and then implement it for CS-MRI reconstruction. Results are compared with those obtained from some of the most prominent and commonly used sampling patterns in the CS-MRI literature, including the VD random pattern with estimated PDF (VD-PDF), the VD Gaussian density (VD-Gaus), the VD uniform random pattern (VD-Rnd), and the radial type.
Bhabesh Deka, Sumit Datta
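The variable-density idea, sampling densely near the k-space center and sparsely toward the periphery, can be sketched with a simple 1-D random mask. This is a plain VD random pattern, not the chapter's VD Poisson disk, and the density parameters are assumptions:

```python
import random

def vd_mask(n, center_frac=0.08, decay=2.0, seed=0):
    """1-D variable-density under-sampling pattern for a k-space line:
    fully sample a band around the DC sample, then keep outer samples
    with probability decaying with distance from the center."""
    rng = random.Random(seed)
    c = n // 2
    keep = []
    for i in range(n):
        r = abs(i - c) / c if c else 0.0
        if r <= center_frac:
            keep.append(1)                    # low frequencies: always kept
        else:
            keep.append(1 if rng.random() < (1.0 - r) ** decay else 0)
    return keep
```

A Poisson-disk variant would additionally enforce a minimum spacing between kept samples, which reduces clustered gaps and tends to improve CS reconstruction.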

Chapter 10. Semi-automatic Segmentation and Marking of Pitch Contours for Prosodic Analysis

Prosody is used in both phonetics and speech synthesis systems in the literature, and pitch is one of the most extensively used prosodic cues. This paper aims at semi-automating the process of pitch marking for prosodic analysis. Prosody is suprasegmental information; it is therefore wiser to analyze the change in pitch over a segment of voiced speech instead of directly using the pitch calculated from a small window. In a particular voiced segment of speech, pitch may vary from low to high or from high to low, or it may not vary at all. This work describes a method for automatically segmenting speech into regions with a continuous pitch contour and marking the nature of the pitch change within those regions. Zero-frequency filtering is used to segment the speech into voiced and unvoiced segments. Each voiced segment is further divided into smaller segments wherever a discontinuity is present in the pitch contour. A height value of the pitch contour in the final segment is measured, and marking is done accordingly. The automatic segmentation and markings are then manually corrected by deleting, inserting, or shifting segmentation boundaries and substituting wrong markings, and the automatic process is evaluated in terms of these four parameters.
Biswajit Dev Sarma, Meghamallika Sarma, S. R. M. Prasanna

Chapter 11. Semi-automatic Syllable Labelling for Assamese Language Using HMM and Vowel Onset-Offset Points

Syllables play an important role in speech synthesis and recognition, as prosodic information is embedded in the syllable units of speech. Here we present a method for semi-automatic syllable labelling of Assamese speech utterances using hidden Markov models (HMMs) and vowel onset-offset points. Semi-automatic syllable labelling means labelling the syllables of a speech signal when the transcription, i.e., the text corresponding to the speech file, is provided. HMMs for 15 broad phone classes are built, and the time labels of the transcription are obtained by forced alignment using these models. A parser converts the word transcription to a syllable transcription using certain syllabification rules; this syllable transcription and the phone time labels are used to obtain the time labels of the syllables. The syllable labelling output is then refined using knowledge of the vowel onset and offset points derived from the speech signal using different signal processing techniques. This refinement improves both syllable detection and the average deviation of the syllable onsets and offsets.
Biswajit Dev Sarma, Mousmita Sarma, S. R. M. Prasanna

Chapter 12. Block Matching Algorithms for Motion Estimation: A Performance-Based Study

The motion estimation (ME) process is the most crucial and time-consuming part of video compression, and many block-based motion estimation techniques have been developed to make ME easy and fast. In this paper we review most of the existing block matching algorithms (BMAs), from the very old full search (FS) to the recently developed reduced three-step logarithmic search (RTSLS) and cross three-step logarithmic search (CTSLS). We also compare them based on the computations needed per macroblock and the PSNR value of the compensated image. Earlier, the adaptive rood pattern search (ARPS) was found to be the most computationally efficient; in this review, however, applying the older algorithms alongside the recently developed zero-motion-preadjusted RTSLS (ZMRTSLS) and zero-motion-preadjusted CTSLS (ZMCTSLS) shows the latter two to be even more computationally efficient than ARPS.
Hussain Ahmed Choudhury, Monjul Saikia
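As a baseline for the family of algorithms surveyed, here is a sketch of the classic three-step search (TSS) with a sum-of-absolute-differences cost; the frame contents in the test are synthetic, and this is the textbook TSS rather than any of the chapter's reduced or preadjusted variants:

```python
def sad(cur, ref, bx, by, dx, dy, n):
    """Sum of absolute differences between the n-by-n block of the current
    frame at (bx, by) and the reference frame displaced by (dx, dy)."""
    h, w = len(ref), len(ref[0])
    total = 0
    for y in range(n):
        for x in range(n):
            ry, rx = by + y + dy, bx + x + dx
            if not (0 <= ry < h and 0 <= rx < w):
                return float("inf")          # candidate falls off the frame
            total += abs(cur[by + y][bx + x] - ref[ry][rx])
    return total

def three_step_search(cur, ref, bx, by, n=4, step=4):
    """Classic three-step search: probe a 3x3 pattern around the current
    best vector, then halve the step size until it reaches 1."""
    best = (0, 0)
    while step >= 1:
        cx, cy = best
        cands = [(cx + dx, cy + dy) for dx in (-step, 0, step)
                                    for dy in (-step, 0, step)]
        best = min(cands, key=lambda v: sad(cur, ref, bx, by, v[0], v[1], n))
        step //= 2
    return best
```

TSS evaluates at most 25 candidates instead of the full search's (2w+1)^2, which is the computation-per-macroblock saving the chapter's comparisons quantify.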

Chapter 13. Stressed Speech Recognition Using Similarity Measurement on Inner Product Space

In this paper, a similarity measurement on different inner product spaces is proposed for the analysis of stressed speech. The similarity is measured between the neutral speech subspace and the stressed speech subspace, with the cosine between neutral and stressed speech taken as the similarity parameter. It is assumed that the speech and stress components of stressed speech are linearly related. The cosine multiple of the stressed speech is taken as its speech component, and the complement-cosine (1 - cosine) multiple is taken as its stress component. The neutral speech subspace is created from all neutral speech in the training database, and the stressed speech subspace contains stressed (angry, sad, Lombard, happy) speech. From the experiments, it is observed that the stress information of stressed speech is not present in the complement-cosine multiple on all inner product spaces; the linear relationship between the speech and stress components exists only for some specific inner product spaces. All experiments use the nonlinear TEO-CB-Auto-Env feature.
Bhanu Priya, S. Dandapat
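The cosine decomposition described above can be sketched for the standard inner product; the vectors below are toy feature vectors, and a different inner product space would simply replace the dot product with a weighted one:

```python
import math

def cosine(u, v):
    """Cosine of the angle between two vectors under the standard inner
    product; other inner product spaces weight this dot product."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def decompose(stressed, neutral):
    """Split a stressed-speech vector into a speech-like part
    (cos * stressed) and a stress part ((1 - cos) * stressed),
    following the linear assumption stated in the chapter."""
    c = cosine(neutral, stressed)
    speech = [c * s for s in stressed]
    stress = [(1.0 - c) * s for s in stressed]
    return c, speech, stress
```

By construction the two parts sum back to the stressed vector; the chapter's finding is that only some inner product spaces make the stress part actually carry the stress information.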

Chapter 14. An Analysis of Indoor Power Line Network as a Communication Medium Using ABCD Matrices: Effect of Loads on the Transfer Function of Power Line

Power line communication is a technique that uses existing power lines as a communication medium. In this paper, the power line is considered as a two-wire transmission line and modeled using the transmission, chain, or ABCD matrices. The line is simulated for different conditions commonly found in practical networks, and the salient features are discussed in detail. It is found that the channel shows deterministic behavior if the complete network is known a priori. In practical cases, however, discrepancies occur because complete information about the channel is unavailable, leading to decreased correlation and/or variable attenuation between the theoretical and experimental readings. The effects of these shortcomings on the efficiency of the discrete multitone system commonly used for power line communications are discussed.
Banty Tiru, Rubi Baishya, Utpal Sarma
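The ABCD-matrix modeling can be sketched as follows: each line segment and each tapped load contributes a 2x2 matrix, the network is a left-to-right cascade, and the voltage transfer function follows from the overall matrix. The line parameters in the test are illustrative, and the segment model assumes a lossless line:

```python
import cmath

def line_abcd(z0, beta, length):
    """ABCD matrix of a lossless transmission-line segment with
    characteristic impedance z0 and phase constant beta."""
    bl = beta * length
    return [[cmath.cos(bl), 1j * z0 * cmath.sin(bl)],
            [1j * cmath.sin(bl) / z0, cmath.cos(bl)]]

def shunt_abcd(z_load):
    """ABCD matrix of a shunt impedance (a tapped-off appliance)."""
    return [[1, 0], [1 / z_load, 1]]

def cascade(*mats):
    """Multiply 2x2 ABCD matrices left to right (source to load)."""
    out = [[1, 0], [0, 1]]
    for m in mats:
        out = [[out[0][0] * m[0][0] + out[0][1] * m[1][0],
                out[0][0] * m[0][1] + out[0][1] * m[1][1]],
               [out[1][0] * m[0][0] + out[1][1] * m[1][0],
                out[1][0] * m[0][1] + out[1][1] * m[1][1]]]
    return out

def transfer(abcd, z_load):
    """Voltage transfer function V_load / V_source for a terminated line."""
    (a, b), (_c, _d) = abcd
    return z_load / (a * z_load + b)
```

Cascading two half-length segments reproduces the full-length matrix, which is the property that lets an arbitrary indoor network with branch loads be reduced to a single ABCD matrix.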

Chapter 15. Online and Offline Handwriting Recognition of Assamese Language—A Preliminary Idea

Recognition of online handwriting and machine-printed text is a hard and complex task. In this paper we discuss some novel approaches for online handwriting and machine-printed text recognition in the Assamese language. Assamese script is highly cursive, unlike English and some other languages, so recognizing such handwriting requires several steps. In today's globalized world, Assamese also needs digital support so that people can communicate and exchange ideas in the language, for example by sending email or SMS in Assamese, searching the contents of Assamese books via a search engine, or keeping a diary in Assamese. With these goals in mind, this paper first discusses the structure of the Assamese language and then presents some ideas for the recognition of online and offline handwriting.
Keshab Nath, Subhash Basishtha

Chapter 16. Speaker Verification for Variable Duration Segments and the Effect of Session Variability

With current advances in speaker verification, significant performance is obtained under sufficient data conditions; when the amount of speech data is constrained, however, performance suffers directly. This paper presents initial speaker verification studies on variable duration test segments over a standard database, and then on variable duration test segments over a database collected from a practical speaker verification system. The latter case helps to explore session variability and its impact on speaker verification. This information is used to remodel the enrolled speaker models, which in turn improves system performance significantly.
Rohan Kumar Das, S. R. M. Prasanna

Chapter 17. Two-Dimensional Processing of Multichannel ECG Signals for Efficient Exploitation of Inter and Intra-Channel Correlation

Electrocardiogram signals acquired through different channels from the body surface are termed multichannel ECG (MECG) signals. They are obtained by projecting the same heart potential in different directions and hence share common information. In this work, a new two-dimensional (2-D) approach is proposed for MECG signal processing in order to exploit the correlated structure between channels efficiently. The different channel data are arranged in 2-D form, giving them an image-like arrangement, and the 2-D discrete cosine transform (DCT) is then applied blockwise over the whole data. This 2-D processing ensures efficient utilization of both inter-lead correlation (across the columns) and intra-lead correlation (across the rows). Since neighboring ECG samples across channels are highly correlated due to the slowly varying nature of ECGs, blockwise processing of MECG data exploits this effectively. To quantify the performance of the proposed algorithm, it is evaluated on a compression platform: each block, after DCT transformation, is passed through a uniform-scale zero-zone quantizer and an entropy encoder to obtain the compressed bit streams. The performance metrics used are the compression ratio (CR) and a widely used distortion measure, the percentage root mean square difference (PRD).
Anurag Singh, S. Dandapat
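The energy-compaction effect that makes the blockwise 2-D DCT useful can be shown on a single block with a naive, unnormalized DCT-II (a real codec would use a fast transform with proper scaling; the block values in the test are illustrative):

```python
import math

def dct2_1d(x):
    """Naive, unnormalized DCT-II of a 1-D sequence."""
    n = len(x)
    return [sum(v * math.cos(math.pi * (i + 0.5) * k / n)
                for i, v in enumerate(x)) for k in range(n)]

def dct2_2d(block):
    """Separable 2-D DCT: transform the rows, then the columns."""
    rows = [dct2_1d(r) for r in block]
    cols = [dct2_1d([rows[i][j] for i in range(len(rows))])
            for j in range(len(rows[0]))]
    # transpose back so entry [u][v] indexes (row frequency, col frequency)
    return [[cols[v][u] for v in range(len(cols))]
            for u in range(len(cols[0]))]
```

For a block that is constant across samples and leads (the extreme of inter- and intra-lead correlation), all energy collapses into the single DC coefficient, which is exactly what the zero-zone quantizer and entropy coder then exploit.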

Chapter 18. DCT-Based Linear Regression Approach for 12-Lead ECG Synthesis

Synthesis of the standard 12-lead electrocardiogram from a reduced lead set without losing significant diagnostic information is a major challenge. In this work, we propose a patient-specific method for synthesizing the 12-lead electrocardiogram from a reduced lead set by applying linear regression in the DCT domain. The proposed method is evaluated using standard distortion measures such as the correlation coefficient, root mean square error, and wavelet energy-based diagnostic distortion. The results show improvement over existing systems without loss of significant diagnostic information.
Jiss J. Nallikuzhy, S. Dandapat

Chapter 19. Design, Simulation, and Performance Evaluation of a High Temperature and Low Power Consumption Microheater Structure for MOS Gas Sensors

The purpose of this work is to design, simulate, and evaluate the performance of a low-power microheater for a gas sensing system. A microheater is a microstructure incorporated in a MOS gas sensor to elevate the temperature of the sensor to the operating range required for reliable performance. In this work, an optimized microheater structure is sought by considering different membrane sizes and geometries, taking into account temperature distribution and power consumption. The materials used for the analysis are platinum and polysilicon. After analyzing various microheater designs, a novel design is developed by optimizing and varying the geometry, layer dimensions, and materials of the device. For the developed design, thermal profile and power consumption analyses are carried out. The entire work is carried out in COMSOL Multiphysics 4.2.
Kaushik Das, Priyanka Kakoty

Chapter 20. Experimental Analysis on the Performance of a New Preprocessing Method Used in Data Compression

This paper presents a new text transformation method with some similarities to StarNT, a dictionary-based lossless text transform algorithm. Researchers have devised many compression methods in search of a data transmission solution that utilises the available network bandwidth optimally while achieving a high compression ratio. Most approaches in use, such as Prediction by Partial Matching (PPM) and the Burrows-Wheeler Transform, have been unable to achieve the best possible output predicted by theoretical calculations, leaving room for more efficient text compression techniques. We also provide experimental results on the timing performance and space utilisation of our algorithm, compared against the StarNT method.
P. Khanikar, M. P. Bhuyan, R. R. Baruah, H. Sarma
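The dictionary-based transform idea behind StarNT-style preprocessing can be sketched as follows: frequent words map to short codewords before the backend compressor runs, and the mapping is exactly invertible. The codeword alphabet and escape convention here are illustrative assumptions, not the chapter's scheme:

```python
def build_dictionary(words):
    """Map words (assumed ordered most-frequent first) to short
    codewords: single letters, then letter pairs."""
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    codes = list(alphabet) + [a + b for a in alphabet for b in alphabet]
    return dict(zip(words, codes))

def transform(text, dictionary):
    """Replace dictionary words; escape unknown tokens with '*'."""
    return " ".join(dictionary.get(tok, "*" + tok) for tok in text.split())

def inverse_transform(text, dictionary):
    """Exact inverse: strip escapes, decode codewords."""
    reverse = {v: k for k, v in dictionary.items()}
    return " ".join(tok[1:] if tok.startswith("*") else reverse[tok]
                    for tok in text.split())
```

The transformed text is shorter and more regular, which is what lets the downstream compressor achieve a better ratio; losslessness is guaranteed by the roundtrip.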

Chapter 21. Modeling a Nano Cylindrical MOSFET Considering Parabolic Potential Well Approximation

In this paper, an analytical surface-potential-based model of a nanoscale cylindrical MOSFET is developed, considering the quantum mechanical effect at the semiconductor-oxide interface. The model decouples the Poisson and Schrödinger equations via a parabolic potential well approximation instead of a fully self-consistent approach. Using the developed model, the variation of surface potential, threshold voltage, and drain current, with extension into the saturation regime, is observed along with the variation of substrate doping, silicon pillar diameter, drain-to-source voltage, and gate-to-source voltage. The results show a large discrepancy in the device characteristics from the classical analysis, demonstrating the need for quantum analysis for highly doped substrates.
Jyotisikha Deka, Santanu Sharma

Chapter 22. SVD Analysis on Reduced 3-Lead ECG Data

This paper presents the synthesis of electrocardiogram (ECG) leads from a reduced set of leads. Singular value decomposition (SVD) is used to train, for each subject, all desired leads over a minimum of three beat periods. In the testing phase, only 3 leads are used to reconstruct all other leads: the singular value matrix of the reduced 3-lead data is transformed to a higher dimension using a transform matrix. For evaluation, the proposed method is applied to a publicly available database containing a number of 12-lead ECG recordings from different cardiac patients. After synthesis of the ECG data, the performance of the method is measured using the percent correlation between the original and synthesized data.
Sibasankar Padhy, S. Dandapat

Chapter 23. Large Library-Based Regression Test Cases Minimization for Object-Oriented Programs

A large-library-based regression test case minimization technique for object-oriented programs is presented in this paper. The work is carried out in three steps. In the first step, the original program is instrumented and executed with test cases; a library is built from these test cases and their code coverage, and the program is then modified. In the second step, the modified program is analyzed by latent semantic analysis, which automatically matches user-given values against linear combinations of the software's small text objects, variables, or database entries, so that the modified code is recorded. Data flow sensitivity and context sensitivity are used for static and dynamic analysis of the affected and unaffected objects along with the recorded modified code. After this precise data flow analysis, test cases are generated for affected objects with existing test case coverage and for affected objects requiring new coverage; redundant test cases are then reduced by a new optimal page replacement algorithm, and the library and code coverage records are updated. In the third step, the test cases of the former and modified programs are collected into a test case repository, the new optimal page replacement algorithm is applied to the repository, and the regression test suites are reduced. An illustrative example is presented to establish the effectiveness of the methodology.
Swapan Kumar Mondal, Hitesh Tahbildar
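The redundancy-reduction step can be illustrated with a greedy coverage-based minimization, a common baseline for this problem; note the chapter uses its own page-replacement-style algorithm, and the test names and coverage sets below are hypothetical:

```python
def minimize_suite(coverage):
    """Greedy set-cover reduction of a regression suite: repeatedly keep
    the test case covering the most not-yet-covered code entities.
    coverage: {test_name: set of covered entities}. Returns kept tests."""
    remaining = set().union(*coverage.values())
    kept = []
    while remaining:
        # sorted() makes tie-breaking deterministic across runs
        best = max(sorted(coverage), key=lambda t: len(coverage[t] & remaining))
        if not coverage[best] & remaining:
            break
        kept.append(best)
        remaining -= coverage[best]
    return kept
```

The kept subset preserves the full coverage of the original suite while discarding tests whose coverage is subsumed by others, which is the goal the abstract's reduction step serves.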

Backmatter
