
2019 | Book

Handbook of Multimedia Information Security: Techniques and Applications


About this book

This handbook is organized into three major parts. The first part deals with multimedia security for emerging applications. Its chapters cover basic concepts of multimedia tools and applications, biological and behavioral biometrics, effective multimedia encryption and secure watermarking techniques for emerging applications, an adaptive face identification approach for Android mobile devices, and multimedia security using chaotic maps and perceptual hash functions.

The second part focuses on multimedia processing for various potential applications. Its chapters include a detailed survey of image-processing-based automated glaucoma detection techniques and the role of de-noising, a study of dictionary-learning-based image reconstruction techniques for analyzing big medical data, a brief introduction to quantum image processing and its applications, a segmentation-less efficient Alzheimer detection approach, object recognition, image enhancement and de-noising techniques for emerging applications, an improved image compression approach, and automated detection of eye-related diseases using digital image processing.

The third part introduces multimedia applications. Its chapters include an extensive survey of the role of multimedia in medicine and multimedia forensics classification, a finger-based authentication system for e-health security, and an analysis of recently developed deep learning techniques for emotion and activity recognition. Further, the book presents a case study on how the ECG changes over time for user identification, and details the role of multimedia in big data, cloud computing, the Internet of Things (IoT), and blockchain environments for real-life applications.

This handbook targets researchers, policy makers, programmers, and industry professionals seeking to create new knowledge and develop efficient techniques and frameworks for multimedia applications. Advanced-level students studying computer science, specifically security and multimedia, will find this book useful as a reference.

Table of Contents

Frontmatter

Multimedia Security

Frontmatter
Chapter 1. Introduction to Multimedia Tools and Applications

The demand for transmitting multimedia data in computer-vision-based applications is growing rapidly. A key requirement of multimedia applications is a computer with a high processing rate and extensive storage capacity. In this chapter, we summarize an overview of multimedia and its applications in various domains. In addition, the issues present in multimedia tools and applications are discussed in detail. Across the entire domain, the transport of huge volumes of media data, data management, synchronization, and retrieval are novel issues arising from the increased ease of access and availability of electronic multimedia data.

Abdul Rahaman Wahab Sait, J. Uthayakumar, K. Shankar, K. Sathesh Kumar
Chapter 2. An Overview of Biometrics Methods

Biometrics is becoming an important technology in automated person recognition. With the help of biometrics, individuals are recognized through the unique characteristics and behaviors of various body parts. The most common biometric techniques include recognition of the face, fingerprints, iris, gait, and signature. This chapter surveys the biometric methods used by researchers to date, organized into categories such as biological and behavioral biometrics. This will help readers consider various biometrics while designing human recognition systems. Apart from its benefits, biometrics is also susceptible to hacking. The authors' findings on the benefits and drawbacks of biometrics are also discussed in this chapter.

Muhammad Sharif, Mudassar Raza, Jamal Hussain Shah, Mussarat Yasmin, Steven Lawrence Fernandes
Chapter 3. SIE: An Application to Secure Stereo Images Using Encryption

Most encryption techniques that secure image contents are designed for synthetic and real images and are not applicable to stereo images. Stereo images generally consist of two views (left and right) of a scene with known viewpoints, and the only available workaround is to encrypt the left and right images separately. To address this issue, this chapter introduces a new and effective encryption technique that encrypts the left and right stereo images simultaneously to produce a single encrypted image. A decryption process is then introduced to recover the left and right stereo images from the encrypted image. Extensive experiments on different stereo images demonstrate the efficiency and robustness of the proposed encryption technique.

Sanoj Kumar, Gaurav Bhatnagar
Chapter 4. Example Based Privacy-Preserving Video Color Grading

The integration of cloud computing and smart multimedia gadgets has made for an attractive business model. However, data privacy is one of the major concerns when moving to third-party-driven infrastructures such as the cloud. Furthermore, due to diverse camera sensors, captured multimedia may contain insufficient lighting or color, and processing it manually is a painstaking task. A few schemes have been proposed to address this problem; however, they suffer from major computational and storage overheads and become impractical for videos. Considering these challenges, we propose an automatic video color grading approach in this chapter. The proposed approach enables a cloud data center to process encrypted multimedia data by transferring its colors according to an example image used as the reference. We analyze the correlation between consecutive video frames and propose evaluating the color transformation parameters for every alternate video frame. In addition, a proxy-encryption-based Paillier cryptosystem is used for video encryption. As a result, the computational and storage overheads are drastically reduced while effective video grading results are preserved. The feasibility and robustness of the proposed approach are validated through various tests.

Amitesh Singh Rajput, Balasubramanian Raman
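The Paillier cryptosystem named in this abstract is additively homomorphic, which is the property that lets a cloud data center operate on encrypted pixel values. A minimal textbook sketch of it follows; the tiny primes are for illustration only, and this is not the authors' proxy-encryption variant:

```python
import random
from math import gcd

# Minimal textbook Paillier (tiny primes, illustration only -- not secure).
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
g = n + 1  # standard choice of generator

def L(x):  # the "L function": L(x) = (x - 1) / n
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse of L(g^lam mod n^2)

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: Enc(a) * Enc(b) mod n^2 decrypts to a + b mod n.
a, b = 42, 99
assert decrypt((encrypt(a) * encrypt(b)) % n2) == (a + b) % n
```

The homomorphic multiply-of-ciphertexts is what allows color-transformation arithmetic to happen without the cloud ever seeing plaintext pixels.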
Chapter 5. A Novel Watermarking Technique for Multimedia Security

This chapter presents a robust and secure framework for multimedia security using digital watermarking. In the proposed scheme, a cover image is transformed into frequency domain based on all phase discrete bi-orthogonal transform (APDBT) followed by singular value decomposition. A gray-scale watermark is then embedded by modifying the singular values. For watermark extraction, a new procedure based on dynamic stochastic resonance (DSR) is employed. The proposed DSR based extraction effectively utilizes the noise introduced during the attacks to enhance the robustness and authenticity of the watermark. A detailed experimental analysis is finally conducted to demonstrate the robustness and efficiency of the proposed scheme against a variety of attacks.

Satendra Pal Singh, Gaurav Bhatnagar
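The singular-value embedding at the heart of such schemes can be sketched as follows. The APDBT transform and the DSR-based extraction described in the abstract are omitted, and the embedding strength `alpha` is a hypothetical value:

```python
import numpy as np

rng = np.random.default_rng(0)
cover = rng.uniform(0, 255, size=(8, 8))    # stand-in for a transform-domain block
watermark = rng.uniform(0, 1, size=(8, 8))  # gray-scale watermark

alpha = 0.05  # embedding strength (hypothetical value)

# Embed: add the watermark's singular values to the cover's singular values.
U, S, Vt = np.linalg.svd(cover)
Uw, Sw, Vtw = np.linalg.svd(watermark)
watermarked = U @ np.diag(S + alpha * Sw) @ Vt

# Extract (non-blind): recover the watermark's singular values and rebuild it.
_, S_rec, _ = np.linalg.svd(watermarked)
recovered = Uw @ np.diag((S_rec - S) / alpha) @ Vtw

assert np.allclose(recovered, watermark, atol=1e-6)
```

Keeping `Uw` and `Vtw` as side information is what makes this variant non-blind; the chapter's scheme differs in its transform and extraction details.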
Chapter 6. A Secure Medical Image Watermarking Technique for E-Healthcare Applications

The rapid rise in the exchange of medical information for remote and continuous diagnosis over the internet, using different software and digital gadgets, has enabled better health monitoring. This exchange, however, has given rise to many security issues, which may occur due to noise in communication channels, mishandling of devices, or unauthorized tampering. Thus, privacy, data integrity, and copyright protection must be ensured to provide better technological efficiency. Digital watermarking techniques have proved to provide good solutions for securing medical content. In this chapter, an efficient watermarking technique is proposed for exchanging patient information in an e-healthcare system. A robust watermarking technique using Singular Value Decomposition (SVD) for embedding a secret logo in the transform-domain coefficients is presented. The hybrid combination of the Discrete Cosine Transform (DCT) and SVD ensures very high robustness of the watermark and is hence suitable for copyright applications. Security is ensured by encrypting the watermark with chaos encryption before embedding. The proposed technique has been analysed in the presence of various signal processing and geometric attacks and shows very high watermark robustness. The experimental results prove the efficiency of the technique, which can therefore be used in e-healthcare.

Nasir N. Hurrah, Shabir A. Parah, Javaid A. Sheikh
Chapter 7. Hybrid Transforms Based Oblivious Fragile Watermarking Techniques

With advances in information technology, illegal use, manipulation, and tampering of digital data have become very easy, making the security of digital data very important. Digital watermarking is one of the approaches used to handle these threats. The watermark is secret data embedded into digital media such as an image, video, audio, or text file, which is later used for copyright protection and authentication. The image in which the watermark is embedded is termed the cover image, and after embedding it is termed the watermarked image. A general digital watermarking technique consists of embedding and extraction algorithms, as shown in the first figure of this chapter. The embedding algorithm embeds a watermark into the cover image to generate the watermarked image, which is later processed by the extraction algorithm to extract the watermark when required. To evaluate features such as robustness and fragility, attacks are executed on the watermarked image and the watermark is then extracted from it.

Geeta Kasana
Chapter 8. Performance Analysis of Invariant Quaternion Moments in Color Image Watermarking

In the last decade, invariant quaternion moment-based watermarking methods have been used successfully due to their robustness against geometric attacks. This chapter presents a performance analysis of invariant quaternion moment-based methods for color image watermarking, in the form of an extensive comparative study using a set of quaternion moments. In this study, a unified, numerically stable method is utilized for computing accurate quaternion color moments in polar coordinates, where the angular kernel is computed over circular pixels using analytical integration and the radial kernels are computed using an accurate Gaussian quadrature method. The methods are compared on the characteristics of the various quaternion moments, such as their ability to reconstruct high-quality watermarked images, computational complexity, accuracy, and stability. Moreover, evaluation criteria are selected carefully to assess the performance of the watermarking methods in terms of visual imperceptibility and robustness against different attacks. Experiments are performed, and the results are used to analyze the performance of the various quaternion moment-based watermarking methods.

Khalid M. Hosny, Mohamed M. Darwish
Chapter 9. Security of Biometric and Biomedical Images Using Sparse Domain Based Watermarking Technique

Biomedical and biometric images contain vital health data and critical identity and behavioral data of humans. Hence, images of these two data types must be kept confidential and secured over the transmission medium. In this chapter, a new sparse-domain image watermarking technique is proposed, its performance examined, and the results compared with existing watermarking systems. The proposed technique utilizes the sparsity property of the Discrete Wavelet Transform (DWT) and Compressive Sensing (CS) theory to achieve high robustness and security. It hides the secret watermark in an encoded cover image rather than in the frequency coefficients of the original cover image; the scrambled cover image is generated via CS theory. Different kinds of biomedical images and an ear biometric image are used as cover images, and a binary logo is used as the watermark. The logo is embedded into sparse measurements of the cover image using noise sequences and a constant gain factor to achieve blind extraction of the watermark. CS theory guarantees the security of the cover image and resists various watermarking attacks. Experimental results demonstrate that the proposed system is robust against different kinds of image processing attacks in terms of normalized correlation (NC).

Rohit Thanki, Surekha Borra, Deven Trivedi
Chapter 10. Performance Analysis of Image Encryption Methods Using Chaotic, Multiple Chaotic and Hyper-Chaotic Maps

Image encryption is widely used to protect sensitive information sent over insecure public networks so that it can only be accessed by the intended receiver. The plain image is encrypted using chaotic maps to produce a cipher image, with symmetric keys used for encryption and decryption. The challenges involved in image encryption schemes include the periodic windows of the one-dimensional logistic map, the selection of control parameter values, unsuitable key-stream generation, the large number of rounds required, and limited randomness. These issues are addressed and solved by encryption techniques based on multiple chaotic maps and multiple hyper-chaotic maps. Image encryption using multiple chaotic maps improves the correlation and entropy levels, establishing a uniform histogram distribution and resistance to differential attacks. Performance metrics such as key space analysis, histogram analysis, correlation coefficient analysis, differential attack analysis, and information entropy analysis are evaluated and compared. The experimental analysis shows that the hyper-chaotic-map-based encryption method is more effective than chaotic-map-based encryption.

T. Gopalakrishnan, S. Ramakrishnan
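For context, the single-logistic-map baseline that the chapter's multi-chaotic schemes improve upon can be sketched as a simple keystream cipher; the key values below are illustrative, not the chapter's parameters:

```python
# Logistic-map keystream cipher over pixel bytes (illustrative baseline).
def logistic_keystream(x0, r, length, burn_in=100):
    x = x0
    for _ in range(burn_in):               # discard the transient iterations
        x = r * x * (1 - x)
    stream = []
    for _ in range(length):
        x = r * x * (1 - x)
        stream.append(int(x * 256) % 256)  # quantize chaotic state to a byte
    return stream

def xor_cipher(pixels, key=(0.3579, 3.99)):
    # key = (initial condition x0, control parameter r); r near 4 is chaotic
    ks = logistic_keystream(key[0], key[1], len(pixels))
    return [p ^ k for p, k in zip(pixels, ks)]

plain = [12, 200, 255, 0, 91, 33]          # toy "image" as a flat pixel list
cipher = xor_cipher(plain)
assert xor_cipher(cipher) == plain          # symmetric: same key decrypts
```

The weaknesses listed in the abstract (periodic windows, parameter sensitivity, limited randomness) are exactly the failure modes of this one-map construction, which is what motivates the multiple and hyper-chaotic variants.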
Chapter 11. Perceptual Hash Function for Images Based on Hierarchical Ordinal Pattern

Distinguishing between original and manipulated images is now a major issue, as advances in image processing techniques allow digital media to be manipulated easily. A scheme is therefore required to check the integrity of digital multimedia. Another issue is efficient indexing and retrieval of multimedia data: traditional indexing methods are time-consuming and inefficient, while the growth of the Internet has led users to generate, store, and transmit huge volumes of digital data. A perceptual hash function is an effective solution for protection, integrity checking, and authentication. It needs to be robust against geometric attacks while still distinguishing between perceptually different data. This chapter proposes a robust perceptual image hash function based on ordinal patterns generated hierarchically.

Arambam Neelima, Kh. Manglem Singh
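A minimal flavor of an ordinal-pattern hash can be sketched by ranking block means at a single level; the chapter's construction is hierarchical, so this is only an illustrative simplification:

```python
# Ordinal-pattern perceptual hash sketch: rank block means, hash the ranking.
def block_means(img, blocks=4):
    h, w = len(img), len(img[0])
    bh, bw = h // blocks, w // blocks
    means = []
    for by in range(blocks):
        for bx in range(blocks):
            vals = [img[y][x] for y in range(by * bh, (by + 1) * bh)
                               for x in range(bx * bw, (bx + 1) * bw)]
            means.append(sum(vals) / len(vals))
    return means

def ordinal_hash(img, blocks=4):
    means = block_means(img, blocks)
    # ordinal pattern: the rank of each block's mean among all blocks
    return tuple(sorted(means).index(m) for m in means)

img = [[(x * y) % 256 for x in range(16)] for y in range(16)]
bright = [[min(255, v + 10) for v in row] for row in img]  # mild brightening
assert ordinal_hash(img) == ordinal_hash(bright)  # ranks survive the shift
```

Because only the ordering of block statistics is hashed, global intensity shifts leave the hash unchanged, which is the robustness property perceptual hashes aim for.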
Chapter 12. Hash Function Based Optimal Block Chain Model for the Internet of Things (IoT)

In recent decades, the Internet of Things (IoT) has been transforming into an attractive system driving a substantial leap in goods and services across physical, digital, and social spaces. To enhance the security level of IoT multimedia data, a block-based security model is proposed. First, clusters are formed from the collected database for the high-security process, where the Dragonfly Optimization Algorithm (DOA) selects an optimal group of data as cluster heads to partition the network. After the clusters are formed, a hash function with a blockchain technique is utilized to secure the IoT information. The data is divided into blocks, and a hash function is applied to each block to form an end-to-end blockchain model. Based on this function, the information is encrypted, stored in the cloud, and decrypted by an authenticated person. The execution time and security levels of the implemented model are analyzed.

Andino Maseleno, Marini Othman, P. Deepalakshmi, K. Shankar, M. Ilayaraja
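The end-to-end hash-chain idea can be sketched with standard-library hashing; the field names and block layout here are illustrative, not the authors' exact scheme:

```python
import hashlib, json

# Hash chain over data blocks: each block commits to the previous block's hash.
def make_block(data, prev_hash):
    block = {"data": data, "prev": prev_hash}
    body = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(body).hexdigest()
    return block

def verify_chain(chain):
    for i, blk in enumerate(chain):
        body = json.dumps({"data": blk["data"], "prev": blk["prev"]},
                          sort_keys=True).encode()
        if blk["hash"] != hashlib.sha256(body).hexdigest():
            return False                       # block contents were altered
        if i > 0 and blk["prev"] != chain[i - 1]["hash"]:
            return False                       # link to predecessor is broken
    return True

chain = [make_block("sensor-reading-1", "0" * 64)]
chain.append(make_block("sensor-reading-2", chain[-1]["hash"]))
assert verify_chain(chain)
chain[0]["data"] = "tampered"   # any edit invalidates the chained hashes
assert not verify_chain(chain)
```

This is the integrity mechanism the abstract relies on; encryption of the block payloads and the DOA clustering sit on top of it.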
Chapter 13. An Adaptive and Viable Face Identification for Android Mobile Devices

Smartphones, apart from holding access to personal data, are increasingly being used for sensitive and critical financial transactions. This makes smartphones vulnerable to numerous contemporary threats, as strong security solutions were not developed with resource-constrained devices like mobile phones in mind. A security solution is needed that can deliver strong security without compromising user convenience. Biometrics offers unparalleled user convenience, and face and fingerprint biometrics do appear sparsely on mobile phones; however, their application is limited to mere device unlocking. The low accuracy offered by such solutions results in low user acceptance and limits their use in other security solutions. Therefore, recognition accuracy has to be inspected and improved to deal with real-world situations. In this chapter, an adaptive face identification system capable of minimizing the variations of uncontrollable real-world situations is developed for Android mobile devices, investigating state-of-the-art algorithms in face detection (the Haar detector and the Local Binary Patterns (LBP) detector) and face identification (Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Local Binary Patterns Histograms (LBPH)).

Tehseen Mehraj, Burhan Ul Islam Khan, Rashidah F. Olanrewaju, Farhat Anwar, Ahmad Zamani Bin Jusoh
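The LBP features underlying the LBPH identifier can be sketched as follows; this is the basic 8-neighbor LBP, not necessarily the exact variant evaluated in the chapter:

```python
# Local Binary Pattern: threshold each pixel's 8 neighbors against the center
# and pack the results into an 8-bit code; histograms of codes describe a face.
def lbp_code(img, y, x):
    c = img[y][x]
    # 8 neighbors in clockwise order, each contributing one bit
    nbrs = [img[y-1][x-1], img[y-1][x], img[y-1][x+1], img[y][x+1],
            img[y+1][x+1], img[y+1][x], img[y+1][x-1], img[y][x-1]]
    code = 0
    for bit, n in enumerate(nbrs):
        if n >= c:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    h, w = len(img), len(img[0])
    hist = [0] * 256
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist

img = [[10, 10, 10], [10, 5, 10], [10, 10, 10]]
assert lbp_code(img, 1, 1) == 255   # all neighbors brighter than the center
```

Because the codes depend only on local intensity ordering, LBP histograms are cheap to compute and tolerant of monotonic lighting changes, which suits resource-constrained phones.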
Chapter 14. Realization of Chaos-Based Private Multiprocessor Network Via USART of Embedded Devices

Data exchange in any form needs security, irrespective of the mode and medium used. Owing to their high sensitivity to initial conditions, chaotic functions have been widely used in cryptography. Serial data communication using a USART is a cost-effective and efficient means of data transfer among embedded devices. The secrecy of data shared between any two devices via USART in a multiprocessor network relies only on the honesty of the other devices in rejecting visible data not addressed to them. This chapter aims at providing a scheme to establish a secure private communication channel between two nodes in a wired network of embedded devices connected via their on-chip USARTs. A chaotic digital function implemented on an AVR microcontroller produces a symmetric key, which encrypts the data and varies the baud rate of the USART to act as the major synchronizing factor between the sending and receiving devices. Optimal values of code size, execution time, and throughput were obtained by realising the proposed work on an embedded network using the on-chip USARTs of AVR microcontrollers. Based on the features and analyses, the proposed work shows significant benefits over similar implementations available in the literature.

Siva Janakiraman, K. Thenmozhi, John Bosco Balaguru Rayappan, V. Moorthi Paramasivam, Rengarajan Amirtharajan
Chapter 15. Robust and Secure Hiding Scheme for Open Channel Transmission of Digital Images

Protecting data transmitted in online messages is a prime concern today: illegal copying must be prevented and secret data protected. Information hiding and data encryption are the major tools for this, and several techniques have been developed for data protection. In this work, a DWT- and SVD-based image watermarking algorithm is proposed, in which a transform is applied to the watermarked image to ensure the robustness of the watermark against attacks. The proposed algorithm offers minimal distortion under various attacks such as cropping, rotation, mean and median filtering, salt-and-pepper noise, and shearing. The new DWT-SVD watermarking technique brings significant advantages to the digital watermarking field and benefits copyright protection and data security.

Harsh Vikram Singh, Purnima Pal

Multimedia Processing

Frontmatter
Chapter 16. Image Processing Based Automated Glaucoma Detection Techniques and Role of De-Noising: A Technical Survey

This chapter presents a detailed study of the image processing steps used to identify glaucoma, including the key role of de-noising in the detection process. De-noising plays an important role in medical imaging, and detection of retinal diseases is one of the major applications of image processing. The important diagnostic parameters for detecting glaucoma are discussed in detail, along with the several techniques that use them. Image acquisition is the first step in the detection process, and noise in the medical image may degrade detection accuracy; therefore a preprocessing step is required before the actual processing begins. In general, optical coherence tomography (OCT) and ultrasound retinal images are corrupted by speckle noise. The speckle noise removal techniques are reviewed, with popular de-speckling approaches classified into groups and briefly overviewed. Applying these de-noising methods improves the diagnosis of glaucoma progression.

Sima Sahu, Harsh Vikram Singh, Basant Kumar, Amit Kumar Singh, Prabhat Kumar
Chapter 17. A Study on Dictionary Learning Based Image Reconstruction Techniques for Big Medical Data

Nowadays, Dictionary Learning (DL) based reconstruction techniques play a significant role in the quality of CT image reconstruction. The basic principle behind all reconstruction algorithms is to reconstruct acceptable images from noisy and incomplete sparse datasets collected from different projection views around the object (patient). Generally, the data collected during acquisition suffers from a large-scale matrix factorization problem, and solving this sparse representation of training image signals in compressed form without amplifying noise has proven difficult. Dictionary Learning is an efficient approach to optimizing and presenting the desired output for accurate clinical diagnosis. The work presented in this chapter focuses on a comprehensive study of both the basic and advanced aspects of DL reconstruction algorithms for analyzing big medical data. It also presents an extensive literature survey of the existing state of the art, discussing the pros and cons with some conclusive remarks.

Shailendra Tiwari, Kavkirat Kaur, K. V. Arya
Chapter 18. Quantum Image Processing and Its Applications

Quantum computers are capable of processing exponentially large volumes of data in polynomial time. Quantum algorithms have made an impact in a wide range of areas, from simulating quantum physical systems to mathematics, cryptography, and information and language theory. Quantum image processing is a subcategory that concentrates on converting traditional image processing algorithms to quantum computing environments. This chapter gives a brief introduction to the basics of quantum computing, focusing mainly on quantum image representation and quantum preprocessing algorithms. The flexible representation provides a normalized state that captures, in the probability amplitudes of the qubits, the color information and the corresponding positions, while the novel enhanced quantum representation stores the pixel intensity in the basis state of the qubits. The chapter also covers quantum image processing algorithms such as filtering, edge detection, and scrambling. Matlab implementations of the quantum algorithms are also provided.

J. J. Ranjani
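The flexible representation described above can be simulated classically to see how intensities become probability amplitudes. A sketch assuming a flat 1-D pixel list (the chapter uses Matlab; Python is used here for illustration):

```python
import math

# FRQI-style amplitude encoding sketch: map each 8-bit intensity to an angle
# theta in [0, pi/2], then build the normalized state vector in which position
# i carries amplitudes cos(theta_i)|0> and sin(theta_i)|1>, uniformly
# superposed over all positions (factor 1/sqrt(n)).
def frqi_state(pixels):
    n = len(pixels)
    thetas = [p / 255 * math.pi / 2 for p in pixels]
    state = []
    for t in thetas:
        state += [math.cos(t) / math.sqrt(n), math.sin(t) / math.sqrt(n)]
    return state

state = frqi_state([0, 128, 255, 64])
# A valid quantum state: squared amplitudes sum to 1.
assert abs(sum(a * a for a in state) - 1.0) < 1e-12
```

The normalization check is the point: cos²θ + sin²θ = 1 for every position, so the uniform 1/√n superposition always yields a legal quantum state regardless of image content.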
Chapter 19. 3-D Shape Reconstruction Based CT Image Enhancement

In medical science, computed tomography is a powerful tool for reconstructing a 3D image by measuring a stack of parallel slices, where the Radon transform is used to obtain image slices from sets of line integrals. 3D reconstructed images are very helpful for analyzing multiple abnormalities compared with 2D images, as CT-scanned images alone are not sufficient to obtain the exact position and surface information of patients. This chapter deals with the problem of direct 3D reconstruction of medical images. We examined 3D surface reconstruction over 2D CT-scanned images and also tested it on simulated images. The shape-from-shading (SFS) technique is used to obtain geometric information by recovering features from variations in shading. The Fast Marching Method (FMM) is used to obtain the 3D surface over the medical images (i.e., real and synthetic images).

Manoj Diwakar, Pardeep Kumar
Chapter 20. A Segmentation-Less Efficient Alzheimer Detection Approach Using Hybrid Image Features

Alzheimer's Disease (AD) is a progressive neurodegenerative disease that results in memory loss. Alzheimer's is not treatable at a severe stage, but accurate and early identification can guide treatment and has great clinical significance. In this chapter, machine learning techniques are utilized to identify Alzheimer's patients. The proposed method extracts features from Magnetic Resonance Images (MRI) without segmentation. An accuracy of 94.2% is attained for multiclass classification using a random forest approach. The results show that segmentation is unnecessary, making the process more robust while keeping accuracy high. The proposed method is evaluated on the OASIS dataset. The computational time for Alzheimer detection is 205 ms for segmentation-based detection and 56 ms for the proposed method.

Sitara Afzal, Mubashir Javed, Muazzam Maqsood, Farhan Aadil, Seungmin Rho, Irfan Mehmood
Chapter 21. On Video Based Human Abnormal Activity Detection with Histogram of Oriented Gradients

Video-based activity analysis has applications in surveillance and monitoring, and in recent years various automatic analysis techniques have been used for efficient activity detection. As surveillance cameras become ubiquitous, automatic human activity analysis and abnormal activity recognition in videos are required to handle the big streaming data from these sensors. However, automatic video examination remains challenging due to inter-object occlusions in large crowded scenes, unconstrained motion of people's activities, and the limited quality of acquired videos. Further, robust detection of abnormal actions from surveillance videos requires a separate computational module that can separate vague, infrequently occurring abnormal activities from noise. In this chapter, we propose a method that recognizes abnormal activities based on the histogram of oriented gradients (HOG), handles imprecise visual observations, and overcomes the irregularity issues of other activity recognition approaches. We develop static-camera human detection with local feature extraction and use a Support Vector Machine (SVM) to classify abnormal activities in videos. Experimental results indicate the promise of our approach, with good precision and few false-positive activity frames.

Nadeem Iqbal, Malik M. Saad Missen, Nadeem Salamat, V. B. Surya Prasath
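The histogram-of-oriented-gradients descriptor at the core of the method can be sketched for a single cell, using the standard unsigned 9-bin formulation; the chapter's full pipeline (cell grids, block normalization, SVM) is omitted:

```python
import math

# Histogram of oriented gradients over one cell (pure-Python sketch).
def hog_cell(img):
    h, w = len(img), len(img[0])
    hist = [0.0] * 9                    # 9 orientation bins over 0..180 degrees
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # central differences
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180  # unsigned orientation
            hist[int(ang // 20) % 9] += mag      # vote weighted by magnitude
    return hist

# A vertical edge: all gradient energy lands in the 0-degree (horizontal) bin.
img = [[0] * 4 + [255] * 4 for _ in range(8)]
hist = hog_cell(img)
assert hist[0] == max(hist) and hist[0] > 0
```

Concatenating such per-cell histograms over a detection window yields the feature vector fed to the SVM classifier mentioned in the abstract.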
Chapter 22. Enhancement and De-Noising of OCT Image by Adaptive Wavelet Thresholding Method

This chapter proposes an adaptive wavelet thresholding method for the enhancement and de-noising of retinal optical coherence tomography (OCT) images. Speckle noise degrades OCT images and affects their diagnostic utility, so OCT image enhancement is required for accurate analysis of the inter- and intra-retinal layers. Enhancement is achieved through a histogram mapping called the Gaussianization transform. The wavelet coefficients are then modeled statistically, using a Cauchy distribution, to obtain the signal and noise information needed to find the threshold for weighting the coefficients; an adaptive soft threshold estimates the true wavelet coefficients. The Gaussianization transform widens the intensity range, enhancing the OCT image and the de-noising performance. Different performance parameters demonstrate that the proposed method outperforms the state of the art: the proposed de-noising method achieves improvements of 4.67% in Peak Signal-to-Noise Ratio (PSNR), 2.61% in Structural Similarity (SSIM), 1.33% in Correlation Coefficient (CoC), and 9.4% in Edge Preservation Index (EPI) over an adaptive soft thresholding method designed without statistical modeling.

Sima Sahu, Harsh Vikram Singh, Basant Kumar, Amit Kumar Singh, Prabhat Kumar
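The soft-thresholding step can be illustrated with a one-level Haar transform on a 1-D signal; the fixed threshold below stands in for the chapter's adaptive, Cauchy-modeled one:

```python
import math, random

# One-level Haar transform with soft thresholding of the detail coefficients.
def haar_1d(signal):
    avg = [(a + b) / math.sqrt(2) for a, b in zip(signal[::2], signal[1::2])]
    det = [(a - b) / math.sqrt(2) for a, b in zip(signal[::2], signal[1::2])]
    return avg, det

def ihaar_1d(avg, det):
    out = []
    for a, d in zip(avg, det):
        out += [(a + d) / math.sqrt(2), (a - d) / math.sqrt(2)]
    return out

def soft(x, t):
    # shrink toward zero by t; kill anything smaller than t
    return math.copysign(max(abs(x) - t, 0.0), x)

random.seed(1)
clean = [math.sin(i / 16) for i in range(64)]
noisy = [c + random.gauss(0, 0.1) for c in clean]

avg, det = haar_1d(noisy)
denoised = ihaar_1d(avg, [soft(d, 0.25) for d in det])

mse = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
assert mse(denoised, clean) < mse(noisy, clean)  # thresholding reduces error
```

A smooth signal concentrates in the averages while noise spreads evenly into the details, so shrinking small detail coefficients removes mostly noise; the chapter's contribution is choosing that threshold adaptively from a statistical model rather than fixing it.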
Chapter 23. Quantization Table Selection Using Firefly with Teaching and Learning Based Optimization Algorithm for Image Compression

In recent years, the importance of image compression techniques has increased exponentially due to the generation of massive amounts of data that need to be stored or transmitted. Numerous approaches represent images in compact form by avoiding unnecessary pixels. Vector quantization (VQ) is an effective method in image compression, and the construction of the quantization table, a matrix of 64 integers on which compression performance and reconstruction quality depend, is an important task. Quantization table selection is a complex combinatorial problem which can be solved by evolutionary algorithms (EAs), which have become popular for solving real-world problems in a reasonable amount of time. This chapter introduces a Firefly (FF) algorithm combined with Teaching and Learning Based Optimization (TLBO), termed the FF-TLBO algorithm, for quantization table selection. As the FF algorithm struggles when brighter fireflies are insignificant, TLBO is integrated to resolve the problem. The algorithm determines the best fitness value for every block as a local best and the best fitness value for the entire image as the global best. Once these values are found by the FF algorithm, compression proceeds with efficient coding algorithms such as Run-Length Encoding and Huffman coding. The proposed FF-TLBO algorithm is evaluated against the existing FF algorithm on the same set of benchmark images in terms of Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), Compression Ratio (CR), and Compression Time (CT). The results confirm the superior performance of the FF-TLBO algorithm over the FF algorithm and make it highly useful for real-time applications.

D. Preethi, D. Loganathan
Chapter 24. Wavelet Packet Based CT Image Denoising Using Bilateral Method and Bayes Shrinkage Rule

Noise is a very common problem in CT images and degrades their quality. In this chapter, a method is proposed in which CT images are de-noised using a bilateral filter combined with the Bayes shrinkage rule in the wavelet domain. First, the noisy CT image is filtered using a bilateral filter, and the filtered image is subtracted from the noisy input image. Wavelet-packet-based thresholding is performed on the subtracted image, and the thresholded output is added back to the bilateral-filtered image to obtain the final de-noised image. To analyze the efficiency of the proposed algorithm, a comparative analysis has been performed against some recent similar methods and the state of the art, indicating that in most cases the proposed algorithm gives better results.

Manoj Diwakar, Pardeep Kumar
Chapter 25. Automated Detection of Eye Related Diseases Using Digital Image Processing

This chapter presents techniques for automated detection of various eye diseases using digital image processing. Human eyes suffer from a variety of abnormalities due to aging, trauma and diseases such as diabetes. The leading causes of blindness throughout the world are cataract, glaucoma, macular degeneration, diabetic retinopathy, retinal detachment and diabetic macular edema. These eye diseases are detected and diagnosed by ophthalmologists and trained technicians. The imaging systems used for detecting abnormalities include ophthalmoscopy, fundus photography, optical coherence tomography, ultrasound imaging, and Heidelberg retinal tomography. In developing countries like India, a lack of eye care centres and the non-availability of ophthalmologists are very common in rural and remote areas. Early detection, followed by appropriate medical treatment of various eye diseases, can solve this problem to a large extent. Automated detection of eye diseases through the analysis of different types of medical images provides a better alternative for timely diagnosis and treatment. In general, the steps involved in image-processing-based automated diagnostic techniques are image acquisition, pre-processing, extraction of the region of interest, feature extraction, and classification. The need for automated diagnosis systems, a study of various imaging techniques, the current status of the field, and brief explanations of various eye disease detection algorithms are discussed in this chapter.

Shailesh Kumar, Shashwat Pathak, Basant Kumar
Chapter 26. Detector and Descriptor Based Recognition and Counting of Objects in Real Time Environment

High-resolution digital cameras have made a notable impression on the way transportation systems have developed over the last few years. However, making a computer system perform like the human visual system remains a challenging task, and computer vision aims to meet this challenge. Viola-Jones is an object detection method that provides competitive object detection in real time. In this paper we describe the feature extraction and AdaBoost algorithm used in the Viola-Jones detection method to build an efficient cascade classifier for effective object detection.
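The AdaBoost step that Viola-Jones uses to select and weight weak classifiers follows the classic update rule; a minimal sketch of one boosting round (illustrative, not the chapter's code, with ±1 labels assumed):

```python
import math

def adaboost_round(weights, labels, predictions):
    """One AdaBoost round: compute the weak learner's weighted error,
    its vote alpha = 0.5*ln((1-err)/err), and the re-normalized
    sample weights (misclassified samples gain weight)."""
    err = sum(w for w, y, h in zip(weights, labels, predictions) if y != h)
    err = min(max(err, 1e-10), 1 - 1e-10)  # guard against log(0)
    alpha = 0.5 * math.log((1 - err) / err)
    new_w = [w * math.exp(-alpha * y * h)
             for w, y, h in zip(weights, labels, predictions)]
    z = sum(new_w)
    return alpha, [w / z for w in new_w]
```

Repeating this over a pool of Haar-feature stumps, and chaining the resulting strong classifiers into stages, yields the cascade the abstract describes.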

Harsh Vikram Singh, Sarvesh Kumar Verma

Multimedia Applications

Frontmatter
Chapter 27. Role of Multimedia in Medicine: Study of Visual Prosthesis

The role of multimedia in the field of medicine is increasing at a tremendous rate. Medical images and videos are used for the documentation and monitoring of the progression of diseases as well as the response to treatment strategies. Digital images such as X-rays, CT scans and MRI are used for the diagnosis of various diseases. With the analysis of these images, the doctor is able to understand the nature and extent of the damage. Although the main use of multimedia in medicine is of a diagnostic nature, a few applications are therapeutic. The human being is blessed with an impeccable visual system through which to perceive the world. The eye, known as the "organ of sight", forms the window between the external environment and the brain. The mechanism of vision involves light, which after striking an object enters the eye and stimulates the photoreceptor cells. A complex and poorly understood series of reactions is set into motion, wherein an electrical stimulus is generated and relayed towards the brain. This sequence is the "visual pathway". However, due to diseases affecting various parts of the visual pathway, there can be a decrease in the perceived light and even partial or total blindness. Some of these diseases have absolutely no cure and the patient is left in despair. These patients are potential candidates for a visual prosthesis. A visual prosthesis is a type of electronic device which tries to provide vision to those who are partially or totally blind. The process requires continuous capturing of the image frames of a particular scene and then stimulating certain implanted electrodes in accordance with the image being taken. With efficient real-time processing of the images, the person is able to see a steady view of the external environment. The aim of this work is to provide insight into the various types of visual prosthesis that have been developed.
It will also highlight the engineering details and the underlying principle of operation of the only FDA-approved retinal implant, called ARGUS II. Although there are multiple types of visual prosthetics, such as the ASR or the Alpha IMS, only one device, the ARGUS II, a product of the Second Sight company, is available to patients for use and is characterized as a humanitarian device. The device is the product of years of scientific research and is very sophisticated, having the latest technology and most efficient components. The underlying principle of operation, described by Dr. Mark Humayun with a team of other experts, is the MARC (Multiple Artificial Retinal Chipset) system. It is a system with multiple components that work together synchronously to provide useful vision to patients. The MARC system has internal and external components and an electrode array placed onto the retinal surface, which makes it an epi-retinal implant. This chapter mainly describes the working of an epi-retinal implant like ARGUS II and how it is used to provide functional vision to patients.

Parsa Sarosh, Shabir A. Parah, Rimsha Sarosh
Chapter 28. Finger Biometrics for e-Health Security

Driven by the need of several health care organizations to offer better health care services in an economical and convenient way, electronic health (e-Health) has modernized the health care industry. e-Health security issues are mainly centered on user authentication, data integrity, data confidentiality, and patient privacy protection. Biometric technology addresses these security problems by providing more reliable and secure user authentication than traditional approaches. Motivated by the trustworthiness of biometrics, we suggest a finger-based authentication system with good scope in health security. Finger dorsal skin and vein patterns are largely considered unique to humans, serve as a modern basis of forensic science, and have been employed in various commercial applications. Contact-less acquisition of the finger under visible or infrared light has been used to establish the identity of individuals, commonly referred to as finger knuckle and finger vein identification. The chapter concludes that biometric technology has considerable opportunities for application in e-Health due to its ability to provide reliable security solutions.

Gaurav Jaswal, Aditya Nigam, Ravinder Nath
Chapter 29. ECG Security Challenges: Case Study on Change of ECG According to Time for User Identification

Each person has unique bio-information, such as a face, a fingerprint, or an iris. These are forms of static information, and many systems, such as banking systems, have been trying to use them for security. However, because they are static information that never changes, they could be abused by being replaced with an attacker's bio-information. To overcome this, dynamic bio-information, such as an Electrocardiogram (ECG), can be used in the next generation of security systems. One problem is that dynamic bio-information always differs according to the person's state of health, the time of evaluation, and their daily condition when it is measured, so a security system cannot simply accept two different values for the same person. To use ECG values in a security system, this study therefore detects ECG features and examines the relationships between measurements taken at different times.

Hoon Ko, Libor Mesicek, Sung Bum Pan
Chapter 30. Analysis of Streaming Data Using Big Data and Hybrid Machine Learning Approach

A lot of data is generated from multiple sources, and it contains many hidden patterns and information. Data from social networks mostly contains opinions, which can be mined to extract insights from an organizational point of view. In this chapter, the authors store Twitter streaming data in the HDFS of Hadoop using Flume and then extract it with Apache Hive. Machine learning classification algorithms are then applied to decode the sentiment in this data. A novel approach based on hybrid Naïve Bayes and Decision Tree algorithms is used to enhance the performance of sentiment analysis on streaming Twitter data. Naïve Bayes is a powerful and simple classification algorithm, but it assumes independence of features, so a Decision Tree has been used in conjunction with it to obtain more accurate results. The two algorithms are combined using an averaging rule. The implemented approach achieved an accuracy of 86.44%, compared to 81.11% for the Naïve Bayes classifier alone.
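The averaging rule mentioned above can be sketched as simply taking the mean of the two classifiers' class probabilities and picking the winner (a minimal illustration with hypothetical probability dictionaries, not the chapter's implementation):

```python
def average_rule(nb_probs, dt_probs):
    """Combine two classifiers' class-probability dicts by simple
    averaging; return the winning class and the averaged scores."""
    avg = {c: (nb_probs[c] + dt_probs[c]) / 2 for c in nb_probs}
    return max(avg, key=avg.get), avg

# Hypothetical outputs for one tweet from a Naive Bayes and a Decision
# Tree model; the ensemble decision is the argmax of the averages.
label, scores = average_rule({"pos": 0.9, "neg": 0.1},
                             {"pos": 0.4, "neg": 0.6})
```

Averaging lets the Decision Tree temper Naïve Bayes predictions that are overconfident due to the feature-independence assumption.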

Mamoon Rashid, Aamir Hamid, Shabir A. Parah
Chapter 31. An Efficient Lung Image Classification Using GDA Based Feature Reduction and Tree Classifier

Lung cancer is one of the most malignant diseases among the different types of tumors. The available diagnostic methods and the current results of cancer treatment are unsatisfactory. For that reason, we introduce an innovative diagnostic technique which classifies the cancer-affected portion of a lung image at an early stage. In this study, an image classification system is proposed to detect and classify lung images as normal or abnormal. In the initial phase of our work, the lung images are fed to a preprocessing module that uses histogram equalization to remove noise and improve the clarity of the image. Feature extraction techniques are then applied, and the features are reduced to the best subset using Generalized Discriminant Analysis (GDA). The lung image classification is performed by four different classifiers: K-Nearest Neighbor (KNN), Naïve Bayes (NB), Neural Network (NN) and Random Forest (RF). The performance measures of these classifiers are analyzed and compared with one another. The results demonstrate that the RF-GDA combination achieves the maximum classification accuracy compared to existing classification approaches.
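The histogram equalization used in the preprocessing phase is the classic CDF-remapping technique; a minimal sketch for 8-bit grayscale values (illustrative only, flattened pixel list assumed):

```python
def equalize_histogram(pixels, levels=256):
    """Classic histogram equalization: map each gray level through the
    normalized cumulative histogram to spread intensities over the range."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)   # first occupied level
    n = len(pixels)
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
           for c in cdf]
    return [lut[p] for p in pixels]
```

Stretching the intensity range this way tends to make low-contrast nodule regions easier for the subsequent feature extractors to pick up.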

K. Vasanthi, N. Bala Kumar
Chapter 32. Deep Neural Networks for Human Behavior Understanding

Human behavior understanding techniques have been proposed for several applications, such as object recognition, face detection, emotion detection, action detection, fingerprint identification, gait recognition, and voice recognition. Emotion and action recognition are the most popular applications among them. This chapter presents an analysis of recently developed deep learning techniques for emotion and activity recognition. Existing approaches that use deep learning as their core component are discussed. Experimental results are reported on benchmark datasets: the CK+ and SFEW datasets for emotion recognition, and the Skoda and UCF 101 datasets for activity recognition. Experimentation shows that deep learning methods outperform other existing techniques in the literature and demonstrate strong performance.

Rajiv Singh, Swati Nigam
Chapter 33. Digital Image Forensics-Gateway to Authenticity: Crafted with Observations, Trends and Forecasts

In today’s digitally savvy world, images are among the most convincing and most commonly transmitted means of information on the internet. However, the easy availability of very low-cost or free multimedia software such as Adobe Photoshop and GIMP, with their large number of manipulation features, hinders efforts to ensure the authenticity and integrity of this digital data. The need for a solid remedy for these problems has brought enthusiasm for image forensics. This chapter will be quite useful for beginners and a good review for experienced forensic examiners. Its focus is to provide researchers with the recent trends in digital image forensics required to gain the necessary knowledge about this field of forgery detection. To achieve these objectives, the chapter emphasizes theoretical advances, trends and observations in image forensics. One of the evolving challenges covered is pixel-based image forensics, including observations on forgery detection techniques designed to detect significant changes in images. Finally, the chapter concludes with future directions in image forensics.
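A common pixel-based technique in this family is copy-move forgery detection, which looks for regions duplicated within the same image. A deliberately naive sketch (exact-match block comparison on a 2-D list of gray values; real detectors use robust features and larger blocks) is:

```python
def find_duplicate_blocks(image, block=2):
    """Naive copy-move check: slide a block x block window over a 2-D
    grayscale image (list of rows) and report coordinate pairs whose
    pixel blocks are identical."""
    seen = {}
    matches = []
    rows, cols = len(image), len(image[0])
    for r in range(rows - block + 1):
        for c in range(cols - block + 1):
            key = tuple(tuple(image[r + i][c + j] for j in range(block))
                        for i in range(block))
            if key in seen:
                matches.append((seen[key], (r, c)))
            else:
                seen[key] = (r, c)
    return matches
```

Exact matching breaks down once the copied region is recompressed or retouched, which is why the literature this chapter surveys moves to transform-domain and feature-based comparisons.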

Neeru Jindal, Kulbir Singh
Chapter 34. Resource Allocation in Co-Operative Relay Networks for IOT Driven Broadband Multimedia Services

The cellular Internet of Things (IoT), in its first release published in 3GPP Rel. 13, has features such as longer battery life, low device cost, and additional coverage enhancements. Cellular IoT also offers a great deal of flexibility, including downlink messages, software upgrades on the move, and transmission of larger data volumes. In this context, increasing capacity under a finite power constraint becomes desirable for achieving standard system performance in broadband multimedia services. In this chapter, we introduce a novel approach for optimal resource allocation in a Multiple-Input-Multiple-Output (MIMO) system deployed with relay nodes (RNs) for users residing at cell edges. In the proposed model, equal transmit power allocation is used when Channel State Information (CSI) is not known at the transmitter, and adaptive transmit power allocation is used when CSI is known. The resource allocation problem considers the maximization of entropy on the direct link in order to maximize the information rate and hence the capacity. The main objective is to allocate resources to users optimally for better quality of service on both the access and relay links. The KKT conditions have been used to solve the classical convex optimization problem on both links. The derived optimal values prove that the proposed allocation results in a water-filling phenomenon for capacity improvement on both the relay and access links.
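The water-filling solution that falls out of the KKT conditions has the standard form p_i = max(μ − 1/g_i, 0), with the water level μ chosen so the powers sum to the budget. A minimal sketch using bisection on μ (illustrative, not the chapter's derivation):

```python
def water_filling(gains, total_power, tol=1e-9):
    """Water-filling over parallel channels: p_i = max(mu - 1/g_i, 0),
    with the water level mu found by bisection so sum(p_i) = P."""
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    while hi - lo > tol:
        mu = (lo + hi) / 2
        used = sum(max(mu - 1.0 / g, 0.0) for g in gains)
        if used > total_power:
            hi = mu
        else:
            lo = mu
    mu = (lo + hi) / 2
    return [max(mu - 1.0 / g, 0.0) for g in gains]
```

Note how a badly faded channel (small gain, large 1/g) can fall entirely below the water level and receive zero power, which is the behavior that improves capacity on both the relay and access links.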

Javaid A. Sheikh, Mehboob-ul- Amin, Shabir A. Parah, G. Mohiuddin Bhat
Chapter 35. Supercomputing with an Efficient Task Scheduler as an Infrastructure for Big Multimedia Processing

Today, we are observing intense utilization of computationally high-performance environments such as multiprocessor supercomputing systems for various scientific, economic, engineering, industrial, and military purposes. One of the most demanding areas is big data processing, which needs a huge amount of computational capacity, and multimedia is responsible for more than 80% of the big data all over the world. Another recent and demanding application has arisen with the development of deep learning and deep neural networks, the predominant technology for analyzing multimedia content, where hundreds to thousands of collaborative neural layers consume billions of operations and cannot be operational unless efficient and optimized computing environments are provided. In this paper, an enhanced version of the Cuckoo Optimization Algorithm (COA), named E-COA, is proposed to cope with the static task-scheduling problem in multiprocessor supercomputing environments for processing big volumes of multimedia data. E-COA is equipped with an adaptive and efficient non-stochastic egg-laying strategy that significantly improves the local and global search potential of the basic COA. Experiments on a comprehensive set of randomly-generated task graphs with different structural parameters reveal the efficiency of the proposed approach from a performance point of view, especially for small-scale samples and where the number of processors in the machine is very restricted, i.e., where computational resources are lacking.
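Metaheuristics like COA need a fitness function for each candidate schedule; for static task scheduling this is typically the makespan of the task graph under the candidate's task order and processor assignment. A minimal sketch (communication costs ignored; a precedence-valid order is assumed, and all names are illustrative):

```python
def makespan(order, proc_of, duration, preds, n_procs):
    """Makespan of a candidate schedule: tasks in `order` run on the
    processor given by `proc_of`; a task starts when its processor is
    free and all its predecessors have finished."""
    proc_free = [0.0] * n_procs
    finish = {}
    for t in order:
        p = proc_of[t]
        start = max([proc_free[p]] + [finish[d] for d in preds.get(t, [])])
        finish[t] = start + duration[t]
        proc_free[p] = finish[t]
    return max(finish.values())
```

The optimizer then searches over orders and assignments to minimize this value; the enhanced egg-laying strategy described above changes how new candidates are generated, not how they are scored.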

Hamid Reza Boveiri
Chapter 36. IoT for Healthcare: System Architectures, Predictive Analytics and Future Challenges

The latest advancements in the field of IoT for e-Health care are explored by presenting innovative and efficient solutions from areas such as smart sensing technologies, efficient communication mechanisms and IoT infrastructures, secure data storage systems, aspects of cloud and edge computing, intelligent data management, artificial intelligence, and ambient assisted living, to name a few. The chapter further illustrates the application of IoT in healthcare with experimental results, analyzing its effects on lifestyle and healthcare systems and the response to technology-assisted medical care and treatments. Moreover, it aims to nurture and guide the future direction of research and advancement in IoT and healthcare, taking care of the issues and challenges in this field.

Ghanshyam Singh
Chapter 37. Internet-of-Things with Blockchain Technology: State-of-the Art and Potential Challenges

The paradigm of the Internet-of-Things (IoT) is paving the way for daily-life objects that are interconnected and interact with their environment in order to collect information and automate certain tasks. Seamless authentication, data privacy, security, robustness against attacks, easy deployment, and self-maintenance, among other things, are key requirements of such systems which can be met by blockchain technology. This chapter systematically presents the adaptation of blockchain technology to fulfil the specific demands of IoT in order to develop Blockchain-integrated IoT (BIoT) for medical applications/e-healthcare. After a discussion of the basics of blockchain technology, the most relevant BIoT applications, such as e-healthcare, are explored and their impact on traditional cloud-centred IoT applications is presented. Moreover, present potential challenges and possible optimizations are detailed regarding several aspects that affect the design, development, and deployment of a BIoT application. Finally, potential recommendations are enumerated to guide researchers and developers on the issues which will have to be addressed before deploying the next generation of BIoT applications.
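The integrity guarantee that makes blockchain attractive for IoT data comes from hash chaining: each block's hash commits to its payload and to the previous block's hash. A minimal sketch (toy structure with hypothetical sensor payloads, not any production BIoT design):

```python
import hashlib
import json

def make_block(index, data, prev_hash):
    """A minimal block whose hash commits to the index, payload and
    the previous block's hash - the link that chains blocks together."""
    body = {"index": index, "data": data, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def chain_is_valid(chain):
    """Recompute every hash and check each block points at its predecessor."""
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("index", "data", "prev")}
        expect = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expect:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True
```

Tampering with any recorded reading invalidates that block's hash and, transitively, every later link, which is why medical IoT records anchored this way are auditable.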

Ghanshyam Singh
Backmatter
Metadata
Title
Handbook of Multimedia Information Security: Techniques and Applications
Editors
Amit Kumar Singh
Dr. Anand Mohan
Copyright Year
2019
Electronic ISBN
978-3-030-15887-3
Print ISBN
978-3-030-15886-6
DOI
https://doi.org/10.1007/978-3-030-15887-3
