
2019 | Book

Smart Innovations in Communication and Computational Sciences

Proceedings of ICSICCS 2017, Volume 2

Edited by: Dr. Bijaya Ketan Panigrahi, Dr. Munesh C. Trivedi, Dr. Krishn K. Mishra, Prof. Shailesh Tiwari, Dr. Pradeep Kumar Singh

Publisher: Springer Singapore

Book series: Advances in Intelligent Systems and Computing


About this book

The book presents the proceedings of the International Conference on Smart Innovations in Communications and Computational Sciences (ICSICCS 2017), held at North West Group of Institutions, Punjab, India. It presents new advances and research results in the fields of computer and communication sciences, written by leading researchers, engineers, and scientists from around the world. The book covers research in all areas of smart innovation, systems and technologies; embedded knowledge and intelligence; innovation and sustainability; and advanced computing, networking, and informatics. It also focuses on the knowledge-transfer methodologies and innovation strategies employed to make this happen effectively. The combination of intelligent systems tools and a broad range of applications calls for a synergy of disciplines from science and technology. Sample areas include, but are not limited to, smart hardware, software design, smart computing technologies, intelligent communications and networking, web and informatics, and computational sciences.

Table of Contents

Frontmatter

Smart Computing Technologies

Frontmatter
Classification of the Shoulder Movements for Intelligent Frozen Shoulder Rehabilitation

Frozen shoulder is a medical condition that causes stiffness in the shoulder joint and restricts its range of motion. The paper compiles details of the four basic movements of the shoulder joint, namely flexion/extension, abduction/adduction, internal rotation, and external rotation. Shoulder movements of 150 subjects were recorded, and the data was analyzed and classified using the K-nearest neighbor algorithm, support vector machine, and logistic regression. The data was recorded using a module consisting of a triaxial accelerometer, a triaxial gyroscope, and an HC-05 Bluetooth module. SVM achieves an accuracy of approximately 99.99% in classifying the four shoulder movements and proves better than the other classifiers. Classification of the shoulder movements can further be used to classify an individual as either a patient suffering from frozen shoulder or a normal individual.

Shweta, Padmavati Khandnor, Neelesh Kumar, Ratan Das
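The classification step described above can be sketched with a minimal K-nearest-neighbor classifier. The 6-D feature vectors (per-axis accelerometer and gyroscope means) and the four movement labels below are illustrative stand-ins, not the authors' recorded data or feature set:

```python
import numpy as np

def knn_classify(train_X, train_y, sample, k=3):
    """Classify one feature vector by majority vote of its k nearest neighbors."""
    d = np.linalg.norm(train_X - sample, axis=1)        # Euclidean distances
    nearest = np.argsort(d)[:k]                         # indices of the k closest
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Synthetic 6-D features (3-axis accelerometer + 3-axis gyroscope means),
# one cluster per movement class -- hypothetical stand-ins, not real sensor data.
rng = np.random.default_rng(0)
centers = {"flexion": [1, 0, 0, 0, 0, 0], "abduction": [0, 1, 0, 0, 0, 0],
           "internal_rotation": [0, 0, 1, 0, 0, 0], "external_rotation": [0, 0, 0, 1, 0, 0]}
X, y = [], []
for label, c in centers.items():
    X.append(rng.normal(c, 0.1, size=(50, 6)))
    y += [label] * 50
X, y = np.vstack(X), np.array(y)

print(knn_classify(X, y, np.array([0.95, 0.02, 0, 0, 0, 0])))  # -> flexion
```

An SVM or logistic regression model would slot into the same pipeline by replacing `knn_classify` with the corresponding trained classifier.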
Markov Feature Extraction Using Enhanced Threshold Method for Image Splicing Forgery Detection

The use of sophisticated image editing tools and computer graphics makes it easy to edit, transform, or eliminate significant features of an image without leaving any prominent proof of tampering. One of the most commonly used tampering techniques is image splicing, in which a portion of an image is cut and pasted onto the same or a different image to generate a new tampered image that is hardly noticeable to the naked eye. In the proposed method, an enhanced Markov model is applied in the block discrete cosine transform (BDCT) domain as well as in the discrete Meyer wavelet transform (DMWT) domain. To distinguish a spliced image from an authentic image, the cross-domain features serve as the final discriminative features for a support vector machine (SVM) classifier. The performance of the proposed method is evaluated experimentally on a publicly available image splicing dataset (the Columbia dataset). The experimental results show that the proposed method performs better than some of the existing state-of-the-art methods.

Avinash Kumar, Choudhary Shyam Prakash, Sushila Maheshkar, Vikas Maheshkar
An Adaptive Algorithm for User-Oriented Software Engineering

An efficient resource allocation process is a necessity in varied environments such as software project management, operating systems, and construction models, reducing the risk of failure by utilizing only those resources which are required. This paper presents an algorithm that uses the fuzzy methodology of soft computing together with dynamic graph theory to generate graphs that help in allocating resources efficiently. With proper observation of requisites, the algorithm demonstrates the importance of formulating a model that can be invoked at times of chaos or failure. Due to the chaotic behavior of the software engineering environment, the resource allocation graph continuously evolves after its initial design; this dynamicity is a unique feature of the algorithm. The dynamicity carries uncertainty that can be reduced by appending more information and is resolved through the inference mechanism of fuzzy logic. The calculative process is effective and able to adapt to the environment; it is therefore much more effective in reducing failures, deciding the allocation of resources, and specifying the work initiated using those resources. The development of the algorithm is specifically focused on the product and is carried out from the perspective of the developer during the development process, accompanied by the customer's views on the required functionalities.

Anisha, Gurpreet Singh Saini, Vivek Kumar
A Systematic Review on Scheduling Public Transport Using IoT as Tool

Public transport can play an important role in reducing the use of private vehicles, which can in turn reduce traffic congestion, pollution, and fossil fuel consumption. But for that, public transport needs to be reliable: people should not have to wait a long time for a bus without any idea of when it will come, and they should be able to get a seat on the bus. To ensure this, efficient and accurate scheduling and provisioning of buses is of paramount importance. Buses are indeed scheduled as per need nowadays, but in India this scheduling is done manually. Our survey shows that many algorithms have been proposed in the literature for scheduling and provisioning of buses, and there is a need to tailor these algorithms to the Indian scenario. We present a brief overview of these algorithms in this paper and identify open issues which need to be addressed.

Dharti Patel, Zunnun Narmawala, Sudeep Tanwar, Pradeep Kumar Singh
Blood Vessel Detection in Fundus Images Using Frangi Filter Technique

Blood vessels are an important factor in the identification of various retinal vascular defects such as hypertensive retinopathy, retinal vein occlusion, central retinal artery occlusion, and diabetic retinopathy. In this paper, we develop an algorithm that makes use of the Frangi filter technique to detect and segment blood vessels in fundus images for further diagnosis. We use a database from Friedrich-Alexander-Universität that comprises healthy, diabetic retinopathy, and glaucoma fundus images. We also use DIARETDB, which comprises 89 other fundus images, and 400 images from the STARE project. The Frangi filter technique used by the algorithm is expected to produce more accurate results.

Adityan Jothi, Shrinivas Jayaram
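As an illustration of the underlying idea, Frangi vesselness can be sketched at a single scale: eigenvalues of the per-pixel Hessian separate tubular structures from blobs and flat background. This is a simplified single-scale sketch with assumed parameters (`beta`, `c`), not the paper's implementation, which would typically run over multiple scales on real fundus images:

```python
import numpy as np

def frangi_vesselness(img, beta=0.5, c=15.0):
    """Single-scale Frangi-style vesselness for dark, line-like structures."""
    img = img.astype(float)
    gy, gx = np.gradient(img)                # first derivatives (rows, cols)
    hyy, _ = np.gradient(gy)                 # second derivatives
    hxy, hxx = np.gradient(gx)
    # Eigenvalues of the symmetric 2x2 Hessian at every pixel.
    mu = (hxx + hyy) / 2.0
    tmp = np.sqrt(((hxx - hyy) / 2.0) ** 2 + hxy ** 2)
    l1, l2 = mu + tmp, mu - tmp
    swap = np.abs(l1) > np.abs(l2)           # enforce |l1| <= |l2|
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
    rb = np.abs(l1) / (np.abs(l2) + 1e-10)   # blobness: ~0 for lines, ~1 for blobs
    s = np.sqrt(l1 ** 2 + l2 ** 2)           # second-order structureness
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1.0 - np.exp(-s ** 2 / (2 * c ** 2)))
    v[l2 < 0] = 0.0                          # keep only dark-on-bright ridges
    return v

img = np.full((32, 32), 200.0)               # bright retinal background
img[15:17, :] = 50.0                         # a dark "vessel"
v = frangi_vesselness(img)
```

The vesselness response is near 1 along the dark line and near 0 on the flat background, which is what makes thresholding it a usable vessel segmentation.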
Headline and Column Segmentation in Printed Gurumukhi Script Newspapers

Newspapers are a vital source of information, and it is necessary to store them in digital form. To search digital newspapers, their text should be in a computer-processable form. To convert any newspaper into such a form, the first step is to detect the headline and segment it from the body text; the next step is to segment the columns if an article has multiple columns. In this paper, we propose a solution to segment the headline from the body text and the body text into columns. Experiments are carried out on article images from Gurumukhi script newspapers.

Rupinder Pal Kaur, Manish Kumar Jindal
An Innovative Technique Toward the Recognition of Carcinoma Using Classification and Regression Technique

In a healthy human body, cells develop, grow, live for some time, and die after a certain period; this cycle continues throughout the life span. If, instead, the cells grow uncontrollably, the result is carcinoma. Cancer cells destroy the entire functioning of the body, but if cancer is found at a preliminary stage, the life span can be extended to some extent. Existing categorization techniques do not provide exact classification results. This work uses the standard leukemia dataset for categorizing cancer cells and non-cancer cells. Missing values are replaced through an enhanced independent component analysis method, the best features are then selected through geometric particle swarm optimization, and finally cancer cells are categorized through the classification and regression technique.

M. Sangeetha, N. K. Karthikeyan, P. Tamijeselvy
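The core of the classification and regression technique (CART) is a recursive search for the feature split that minimizes Gini impurity. A minimal sketch of that split search, run on a tiny hypothetical feature matrix rather than the leukemia dataset:

```python
import numpy as np

def gini(labels):
    """Gini impurity of a label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(X, y):
    """Find (feature, threshold) minimizing the weighted Gini of the two children."""
    n, d = X.shape
    best = (None, None, float("inf"))
    for j in range(d):
        for t in np.unique(X[:, j]):
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            if len(left) == 0 or len(right) == 0:
                continue                      # skip splits that leave a child empty
            score = (len(left) * gini(left) + len(right) * gini(right)) / n
            if score < best[2]:
                best = (j, t, score)
    return best

# Toy feature matrix: feature 0 separates the two classes perfectly.
X = np.array([[0.1, 5.0], [0.2, 5.1], [0.9, 1.0], [0.8, 1.2]])
y = np.array(["cancer", "cancer", "normal", "normal"])
feat, thr, score = best_split(X, y)
```

A full CART tree applies `best_split` recursively to each child until the leaves are pure or a depth limit is reached.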
Model Order Reduction Using Fuzzy C-Means Clustering and Particle Swarm Optimization

In the presented work, a hybrid method combining an evolutionary programming technique, namely the particle swarm optimization algorithm, with the fuzzy c-means clustering method is used for reducing the model order of high-order linear time-invariant systems. Clustering finds groups of objects of similar nature that can be differentiated from other, dissimilar objects. The numerator of the original high-order model is reduced using the particle swarm optimization algorithm, and the fuzzy c-means clustering technique is used for reducing the denominator. The stability of the model is verified using pole-zero stability analysis, and the obtained reduced-order model is found to be stable. Further, the transient and steady-state responses of the obtained lower-order model are better than those of the other existing techniques. The output of the obtained lower-order model is also compared with other existing techniques in the literature in terms of ISE, ITSE, IAE, and ITAE.

Nitin Singh, Niraj K. Choudhary, Rudar K. Gautam, Shailesh Tiwari
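The fuzzy c-means step can be sketched as the standard alternating update of cluster centers and memberships. The 1-D points below are illustrative stand-ins, not actual pole locations from a transfer function denominator:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Basic fuzzy c-means: returns cluster centers and the membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                 # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m                                    # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted cluster means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        # Standard FCM membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)).
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
    return centers, U

# Two well-separated 1-D groups of values (illustrative only).
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
centers, U = fuzzy_c_means(X, c=2)
```

The fuzziness exponent `m` controls how soft the memberships are; `m = 2` is the common default.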
Control Chart Pattern Recognition Based on Convolution Neural Network

Quality control chart pattern recognition plays an extremely important role in controlling product quality. By means of real-time monitoring, abnormal status of a product during production can be observed in a timely manner. A method of control chart pattern recognition based on a convolution neural network is proposed. First, the control chart patterns (CCPs) are analyzed, the statistical characteristics and shape features of the control charts are considered, and appropriate characteristics to distinguish the different abnormal patterns are selected; second, a deep convolution neural network is trained; finally, the feasibility and effectiveness of the control chart pattern recognition are verified through Monte Carlo simulation.

Zhihong Miao, Mingshun Yang
Solution to IPPS Problem Under the Condition of Uncertain Delivery Time

To address the uncertain delivery time problem in the integration of process planning and job shop scheduling (integrated process planning and scheduling, IPPS), fuzzy numbers are introduced to denote the workpiece delivery time. Maximizing the weighted workpiece delivery satisfaction and minimizing the maximum completion time are taken as the optimization objectives to establish the mathematical model. A genetic algorithm is used to search for the optimal schedule meeting the objective function. Finally, an example verifies the effectiveness and feasibility of the model and algorithm.

Jing Ma, Yan Li
A Model for Computing User’s Preference Based on EP Algorithm

In this paper, we address the problem of identifying target users through a model of computing user preference for a certain item or service. The model works for a specific domain through online behavior analysis, considering the user's attentiveness across the entire area and the specific item combination style, combined with features of the specific industry. The model is evaluated by predicting users' behavior and advertising click-through rate in a real application environment. The results show that the model succeeds in precision recommendation, especially for dynamic data analysis.

Shan Jiang, Zongwei Luo, Zhiyun Huang, Jinqun Liu
3D Face Recognition Method Based on Deep Convolutional Neural Network

In 2D face recognition, results may suffer from the impact of varying pose, expression, and illumination conditions. 3D face recognition, however, utilizes depth information to enhance robustness. Thus, an improved deep convolutional neural network (DCNN) combined with a softmax classifier is trained to identify faces. First, the color image and the depth map are preprocessed differently to remove redundant information. Then, the feature extraction networks for the 2D face image and the depth map are built, respectively, on the principle of recognition rate maximization, and the network parameters are reset by a series of tests in order to achieve a higher recognition rate. Finally, the fusion of the two feature layers is the input of an artificial neural network (ANN) recognition system, which is followed by a 64-way softmax output. Experimental results demonstrate that the approach is effective in improving the recognition rate.

Jianying Feng, Qian Guo, Yudong Guan, Mengdie Wu, Xingrui Zhang, Chunli Ti
Color-Guided Restoration and Local Adjustment of Multi-resolution Depth Map

The depth map obtained by the Microsoft Kinect is often accompanied by a large area of information loss called a black hole. In this paper, an optimized image restoration algorithm based on multi-resolution anisotropic diffusion is proposed to fill these black areas; at the same time, the limitation of the anisotropic diffusion algorithm can be overcome by using multiple resolutions. The color map is used as guidance to refine the details of the image, and local adjustment then effectively reduces the error rate. Finally, the joint bilateral filter (JBF) is introduced as a comparison, and the PCL open-source point cloud library is used for 3D reconstruction. Compared with competing methods, the proposed algorithm can fill larger black holes significantly better. The experiments show that it also performs well in reducing the error rate and preserving edge details.

Xingrui Zhang, Qian Guo, Yudong Guan, Jianying Feng, Chunli Ti
A Stacked Denoising Autoencoder Based on Supervised Pre-training

Deep learning has attracted much attention because of its ability to extract complex features automatically. Unsupervised pre-training plays an important role in deep learning, but the supervisory information provided by labeled samples remains very important for feature extraction. When a regression forecasting problem with a small amount of data is processed, the advantage of unsupervised learning is not obvious. In this paper, the pre-training phase of the stacked denoising autoencoder is changed from unsupervised learning to supervised learning, which can improve accuracy on small-sample prediction problems. Experiments on UCI regression datasets show that the improved stacked denoising autoencoder outperforms the traditional stacked denoising autoencoder.

Xiumei Wang, Shaomin Mu, Aiju Shi, Zhongqi Lin
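The building block being modified here, a single denoising autoencoder layer, can be sketched in NumPy: the layer encodes a noise-corrupted input and is trained to reconstruct the clean input. This shows only the unsupervised baseline; the paper's contribution replaces this pre-training objective with a supervised one. All sizes and hyperparameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class DenoisingAutoencoderLayer:
    """One tied-weight layer of a stacked denoising autoencoder."""
    def __init__(self, n_in, n_hidden):
        self.W = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b = np.zeros(n_hidden)   # encoder bias
        self.c = np.zeros(n_in)       # decoder bias

    def train_step(self, x_clean, noise=0.1, lr=0.05):
        x_noisy = x_clean + rng.normal(0.0, noise, x_clean.shape)
        h = sigmoid(x_noisy @ self.W + self.b)        # encode the corrupted input
        x_hat = h @ self.W.T + self.c                 # linear decoder
        err = x_hat - x_clean                         # reconstruct the CLEAN input
        dh = (err @ self.W) * h * (1.0 - h)           # backprop through the encoder
        n = len(x_clean)
        self.W -= lr * (x_noisy.T @ dh + err.T @ h) / n   # tied-weight gradient
        self.b -= lr * dh.mean(axis=0)
        self.c -= lr * err.mean(axis=0)
        return float((err ** 2).mean())               # mean squared reconstruction error

X = rng.random((200, 8))                              # toy unlabeled data
layer = DenoisingAutoencoderLayer(8, 4)
losses = [layer.train_step(X) for _ in range(300)]
```

Stacking means feeding each layer's hidden code `h` to the next layer as its input after pre-training.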

Web and Informatics

Frontmatter
When Things Become Friends: A Semantic Perspective on the Social Internet of Things

The Internet of Things (IoT) has become an integral component of modern-day computing. It is evident that the majority of future applications will be associated with numbers of objects (things) whose sole purpose will be to provide required services to users. The current IoT enactment is shifting from quintessential smart objects (things) to smart as well as social objects (things). This unprecedented paradigm shift is certain to usher in a remarkable advance in the epoch of modern-day computing. This paper introduces the nascent phenomenon of the Social Internet of Things (SIoT), uncovering its various aspects and defining a generalized architecture for it. It also contributes a semantic description of its various components in terms of an ontological structure. Finally, the paper concludes by proposing research directions in the context of SIoT and envisaging its future scope.

Nancy Gulati, Pankaj Deep Kaur
A Context-Aware Recommender Engine for Smart Kitchen

The Internet of Things (IoT) is the next wave of technological innovation, binding a significant number of things (objects) together and producing a considerable number of services which people may use and subscribe to for their convenience. As large numbers of objects are interconnected in IoT, collecting information related to users and their preferences is a challenging task. Thus, recommending services to users based on the objects available to them is indispensable for the success of IoT. This paper proposes a context-aware recommender engine for suggesting possible recipes, with the available food items and the time required to prepare the meal, in the smart kitchen. It considers not only the user's context but also environmental context such as weather, time, and energy consumption by appliances. The paper concludes with possible future research directions in the area under discussion.

Pratibha, Pankaj Deep Kaur
Analysis of Hypertext Transfer Protocol and Its Variants

With massive amounts of information being communicated and served over the Internet these days, it becomes crucial to provide fast, effective, and secure means to transport and store data. The previous versions of the Hypertext Transfer Protocol (HTTP/1.0 and HTTP/1.1) possess some subtle as well as several conspicuous security and performance issues that open doors for attackers to execute various malicious activities [1]. The final version of their successor, HTTP/2.0, was released in 2015 to improve upon these weaknesses. This paper discusses the issues present in HTTP/1.1 by simulating attacks on the vulnerabilities of the protocol and tests the improvements provided by HTTPS and HTTP/2.0. A performance and security analysis of a myriad of commonly used websites has been carried out. Measures that a website should take to provide excellent performance and utmost security to its users are also proposed.

Aakanksha, Bhawna Jain, Dinika Saxena, Disha Sahni, Pooja Sharma
Spam Detection Using Rating and Review Processing Method

In recent times, e-commerce sites have become an essential part of people's lifestyles. Viewers give feedback and firsthand accounts of online products, and these reviews play an important role in the decision making of other buyers. To increase or decrease sales of products, spam reviews are therefore generated by companies. Hence, there is a need to detect and filter spam reviews to provide customers with genuine reviews of a product. In this paper, a review processing method is proposed, and some parameters are suggested to measure the usefulness of reviews. These parameters capture how much a particular review varies from the others, which increases the probability of it being spam. The method classifies a review as helpful or non-helpful depending on the score assigned to it.

Ridhima Ghai, Sakshum Kumar, Avinash Chandra Pandey
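One of the simplest parameters of the kind described, the deviation of a review's rating from the product's rating distribution, can be sketched as a z-like score. The threshold and the "suspect"/"helpful" labels below are illustrative choices, not the paper's actual parameter set:

```python
from statistics import mean, pstdev

def deviation_score(review_rating, all_ratings):
    """How far a review's rating deviates from the product's rating distribution.

    A large score raises the probability that the review is spam."""
    mu = mean(all_ratings)
    sigma = pstdev(all_ratings) or 1.0            # avoid division by zero
    return abs(review_rating - mu) / sigma

def classify(review_rating, all_ratings, threshold=2.0):
    """Label a review by whether it deviates more than `threshold` sigmas."""
    return "suspect" if deviation_score(review_rating, all_ratings) > threshold else "helpful"

ratings = [4, 5, 4, 4, 5, 4, 5, 4]                # hypothetical product ratings
print(classify(1, ratings))                        # far from consensus -> suspect
print(classify(4, ratings))                        # matches consensus -> helpful
```

In a full system this score would be combined with the other proposed parameters (e.g., review text similarity) before the final helpful/non-helpful decision.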
DDITA: A Naive Security Model for IoT Resource Security

Information security has its own importance in the information era; it forms the third pillar of the information world after performance upsurge and power issues. Security, as the term suggests, is the state of being free from threats. Consequently, the Internet of Things inherits almost all of the existing security threats of the Internet, along with some newly generated ones. In this paper, we focus largely on the security of the data as well as the resources involved in an Internet of Things system. We propose a naive security model, namely DDITA (Definition, Design, Implementation, Testing and Amendment), that emphasizes security policies, their implementation, their testing under various strategies, and finally amendments if required. We classify the data involved in the Internet of Things into private data and public data, and further classify private data into stored data and data in transit. The security of stored data is proposed keeping authorization, authentication, attestation, and encryption using TPM under its umbrella.

Priya Matta, Bhaskar Pant, Umesh Kumar Tiwari
IO-UM: An Improved Ontology-Based User Model for the Internet Finance

Building a user model which accurately reflects the user's preferences is necessary and important in personalized service systems. On financial platforms, a user's focus is governed not only by his interests but also by environmental conditions. Considering the user's investment state and operation behaviors, we build a decay function for user modeling in the Internet finance area. To meet the requirements of timeliness and accuracy, this paper presents an improved ontology-based approach to building a user model (IO-UM) incorporating the decay function, which constructs the domain ontology by text mining and updates the user model to capture recent focuses by ontology learning. Experiments illustrate the usefulness of IO-UM in providing personalized services. To prove the influence of the decay function, we compared different values for it in the experiments.

Xinchen Shi, Zongwei Luo, Bin Li, Yu Yang

Smart Hardware and Software Design

Frontmatter
A Low-Voltage Distinctive Source-Based Sense Amplifier for Memory Circuits Using FinFETs

SRAM is a key element in digital circuits. It is used as cache memory in computers and in modern automatic equipment such as mobile phones, modern appliances, digital calculators, and digital cameras; the requirements of high-speed advanced or embedded memory have led to the development of low-voltage SRAMs. This paper introduces the design and simulation of a distinctive source-based sense amplifier, a peripheral circuit for static random access memory (SRAM) that amplifies the data present on the bit lines during the read operation. Simulation of the proposed design has been implemented using 20 nm FinFET technology on the Cadence Virtuoso tool with a supply voltage of +0.4 V. The main advantages of the proposed circuit are lower power consumption and minimum sense delay in sensing the data from SRAM when compared to the existing design circuit, and a comparison is drawn between the two circuits.

Arti Ahir, Jitendra Kumar Saini, Avireni Srinivasulu
Design of QCA-Based D Flip Flop and Memory Cell Using Rotated Majority Gate

Quantum-dot cellular automata (QCA) are one of the promising technologies that enable nanoscale circuit design with high-performance and low-power-consumption features. This work presents a rotated structure of the conventional 3-input majority gate in QCA, which exhibits a symmetric structure suitable for compact implementation of coplanar QCA digital circuits. To show the novelty of this structure, D flip flops and a memory cell are proposed. The results show the proposed D flip flops are superior to the existing designs. In addition, the proposed memory cell is 33, 79, and 20% more effective in terms of cell count, area, and latency, respectively, than the best design in this segment using the conventional 3-input majority gate. Designs are realized and evaluated using QCADesigner 2.0.3.

Trailokya Nath Sasamal, Ashutosh Kumar Singh, Umesh Ghanekar
On a Hardware Selection Model for Analysis of VoIP-Based Real-Time Applications

Voice over IP (VoIP) is a methodology for transmitting data, voice, video, messaging, and chat services over the Internet Protocol. The quality of VoIP is heavily dependent on the type of hardware used for the call manager. Hardware calibration is a mechanism for selecting suitable hardware (a processor) which can match the processing requirements of the call manager. The aim of this paper is to propose a hardware selection model that can handle VoIP-based real-time applications by considering parameters like CPU utilization and RAM utilization. The proposed algorithm takes into consideration the input arrival rate, the type of application (peer-to-peer or back-to-back), the codec, and the call holding time (CHT), corresponding to which a hardware recommendation is generated. Experiments are conducted for different types of hardware, and the results show that the proposed model can help VoIP service providers select appropriate hardware.

Shubhani Aggarwal, Gurjot Kaur, Jasleen Kaur, Nitish Mahajan, Naresh Kumar, Makhan Singh
Trajectory Planning and Gait Analysis for the Dynamic Stability of a Quadruped Robot

Trajectory planning of the robot's center of gravity (CoG) is the main concern when a legged robot is walking. The trajectory should be framed such that the center of pressure (CoP) of the robot lies within the supporting polygon at all times. This paper studies the support polygon and uses graphical analysis to find the CoG locations where the robot is most likely to become unstable; quadrilateral supporting phases are utilized to avoid these locations. Further, analysis is done to find the timely sequence of lift and touchdown of the legs (lift and touch are called events of the legs). Based on the sequence of events and the support polygon analysis, a trajectory is defined for the robot which can produce smooth, steady, and stable motion. Though the robot gains static stability through trajectory planning, its dynamic stability must also be verified; this is done using the zero moment point (ZMP) method. The analysis in this paper is for an unswaying robot walking on flat terrain.

Mayuresh S. Maradkar, P. V. Manivannan
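The stability criterion above, CoP inside the support polygon, reduces to a point-in-convex-polygon test over the foot contact points. A minimal sketch with a hypothetical quadrilateral stance (coordinates in meters, chosen only for illustration):

```python
def point_in_convex_polygon(p, poly):
    """True if point p lies inside (or on) a convex support polygon.

    poly: foot contact points listed in counter-clockwise order."""
    n = len(poly)
    for i in range(n):
        ax, ay = poly[i]
        bx, by = poly[(i + 1) % n]
        # Cross product of edge AB with AP: negative means p is right of the edge,
        # i.e., outside a counter-clockwise polygon.
        cross = (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax)
        if cross < 0:
            return False
    return True

# Hypothetical stance: four feet on the ground (quadrilateral support phase).
feet = [(0.0, 0.0), (0.4, 0.0), (0.4, 0.6), (0.0, 0.6)]
print(point_in_convex_polygon((0.2, 0.3), feet))   # CoP near the center -> stable
print(point_in_convex_polygon((0.8, 0.3), feet))   # CoP outside -> unstable
```

During a swing phase, the support polygon shrinks to the triangle of the three grounded feet and the same test applies.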
Application of Evolutionary Reinforcement Learning (ERL) Approach in Control Domain: A Review

Evolutionary algorithms have come to take a centre stage in diverse areas spanning multiple applications. Reinforcement learning is a novel paradigm that has recently evolved as a major control technique. This paper presents a concise review on implementing reinforcement learning with evolutionary algorithms, e.g. genetic algorithm (GA), particle swarm optimization (PSO), ant colony optimization (ACO), to several benchmark control problems, e.g. inverted pendulum, cart–pole problem, mobile robots. Some techniques have combined Q-Learning with evolutionary approaches to improve their performance. Others have used knowledge acquisition to obtain optimal fuzzy rule set and genetic reinforcement learning (GRL) for designing consequent parts of fuzzy systems. We also propose a Q-value-based GRL for fuzzy controller (QGRF) where evolution is performed after each trial in contrast to GA where many trials are required to be performed before evolution.

Parul Goyal, Hasmat Malik, Rajneesh Sharma
Fingerprint-Based Support Vector Machine for Indoor Positioning System

The position of a movable object is required in an indoor environment for providing various business services and emergency services. Techniques implemented on WLAN (802.11b wireless LANs) are more ubiquitous within the environment (Feng et al. in IEEE Trans Mob Comput 12(12), 2012, [1]), and no additional hardware is required, thereby reducing infrastructure cost and enhancing the value of the wireless data network. The received signal strength (RSS) from various reference points (RPs) is recorded by a tool, and a fingerprint radio map is constructed; the signal properties of the fingerprint differ at each point. The location can be found by comparing the current signal strength with the already collected radio maps. Almost all indoor environments are equipped with Wi-Fi devices, so no additional hardware is needed for the setup. In this paper, we introduce an SVM classifier (Roos et al. in IEEE Trans Mob Comput 1(1), 59–69, 2002 [2]) as a methodology with minimum cost and without sacrificing accuracy. The obtained results show minimal location error and accurate localization of the object.

A. Christy Jeba Malar, Govardhan Kousalya
GPU Approach for Handwritten Devanagari Document Binarization

Optical character recognition (OCR) is the process of converting scanned images of machine-printed or handwritten text, numerals, letters, and symbols into a computer-processable format such as ASCII. For paperless applications of OCR, a system with high speed and good accuracy is required. Parallelizing the algorithm on a graphics processing unit (GPU) along with the CPU can speed up the processing: in GPU computing, the compute-intensive operations are performed on the GPU while serial code still runs on the CPU. Binarization is one of the most fundamental preprocessing techniques in image processing and pattern recognition. This paper proposes an adaptive threshold binarization algorithm for the GPU. The aim of this research work is to speed up the binarization process, which will eventually help accelerate document recognition. The algorithm is implemented using the Compute Unified Device Architecture (CUDA) software interface by NVIDIA. An average speedup of 2× over the serial implementation is achieved on a GeForce 210 GPU with 16 CUDA cores and compute capability 1.2.

Sandhya Arora, Sunita Jahirabadkar, Anagha Kulkarni
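Adaptive threshold binarization of the kind described compares each pixel to a statistic of its local window, and because every output pixel is independent, the computation parallelizes naturally onto one GPU thread per pixel. A serial NumPy sketch using a local-mean threshold (the window size and offset are assumed values, not those of the paper):

```python
import numpy as np

def adaptive_binarize(img, window=15, offset=10.0):
    """Per-pixel threshold = local window mean minus an offset; 1 marks ink.

    Every output pixel depends only on its own neighborhood, which is why
    this kernel maps naturally onto one CUDA thread per pixel."""
    img = img.astype(float)
    pad = window // 2
    padded = np.pad(img, pad, mode="edge")
    # Integral image makes every local window sum an O(1) lookup.
    ii = np.pad(padded.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))
    sums = (ii[window:, window:] - ii[:-window, window:]
            - ii[window:, :-window] + ii[:-window, :-window])
    local_mean = sums / (window * window)
    return (img < local_mean - offset).astype(np.uint8)   # dark pixels -> ink

img = np.full((32, 32), 200.0)      # light page background
img[:, 10:12] = 60.0                # a dark handwritten stroke
binary = adaptive_binarize(img)
```

Unlike a single global threshold, the local mean adapts to uneven illumination across a scanned page, which matters for handwritten documents.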
On an Improved K-Best Algorithm with High Performance and Low Complexity for MIMO Systems

Multiple-input multiple-output (MIMO) techniques are a significant advance in contemporary high-rate wireless communications, where computational complexity and bit-error-rate (BER) performance are the main issues. An algorithm is proposed based on the traditional K-Best algorithm, coupled with a fast QR decomposition algorithm with an optimal detection order for the channel decomposition, the Schnorr–Euchner strategy for handling the zero floating-point issue and sorting all the branches' partial Euclidean distances, and the sphere decoding algorithm for reducing the search space. The improved K-Best algorithm proposed in this paper has the following characteristics: (i) the search space for the closest point in a region is smaller than that of the traditional K-Best algorithm in each dimension; (ii) it can eliminate surviving candidates at early stages; and (iii) it obtains better performance in BER and computational complexity.

Jia-lin Yang
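The traditional K-Best detector that this work builds on can be sketched as a breadth-first tree search after QR decomposition: at each antenna layer, every surviving candidate is expanded over the constellation, and only the K branches with the smallest partial Euclidean distance are kept. A real-valued BPSK sketch (the paper's detection ordering, Schnorr–Euchner enumeration, and sphere-radius pruning refinements are omitted):

```python
import numpy as np

def k_best_detect(y, H, constellation, K=4):
    """Breadth-first K-Best tree search for y = Hx + n (real-valued model)."""
    Q, R = np.linalg.qr(H)                      # H = QR, R upper triangular
    z = Q.T @ y                                 # so z = Rx + Q^T n
    n = H.shape[1]
    candidates = [(0.0, [])]                    # (partial distance, partial symbols)
    for layer in range(n - 1, -1, -1):          # detect from the last layer upward
        expanded = []
        for ped, partial in candidates:
            for s in constellation:
                x_tail = [s] + partial          # symbols for layers layer..n-1
                interference = sum(R[layer, layer + 1 + i] * x_tail[1 + i]
                                   for i in range(len(partial)))
                branch = (z[layer] - R[layer, layer] * s - interference) ** 2
                expanded.append((ped + branch, x_tail))
        expanded.sort(key=lambda t: t[0])
        candidates = expanded[:K]               # keep only the K best branches
    return np.array(candidates[0][1])

rng = np.random.default_rng(1)
H = rng.normal(size=(4, 4))                     # toy 4x4 channel matrix
x_true = np.array([1.0, -1.0, 1.0, 1.0])        # BPSK symbols
x_hat = k_best_detect(H @ x_true, H, [-1.0, 1.0], K=4)
```

In the noiseless case the true path has zero partial distance at every layer, so it is never pruned and the detector recovers the transmitted vector exactly.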
Dynamic Testing for RFID Based on Photoelectric Sensing in Internet of Vehicles

With the growing scale of application of radio frequency identification (RFID) technology in the Internet of Vehicles (IOV), multi-reader and multi-tag scenarios have become increasingly common. This paper first presents the relationship between IOV and RFID technology, then presents the key technology of electronic vehicle identification (EVI), the main application of RFID in IOV. Finally, a novel semi-physical verification system based on photoelectric sensing is introduced for dynamic testing of RFID in EVI. The experimental results show that the testing method and system are applicable not only to reading range tests in the single-tag scenario, but also to anti-collision tests in the multi-tag scenario. The proposed method has good characteristics in terms of speed and accuracy; thus, this research is very useful for testing EVI RFID system performance in a variety of harsh environments.

Xiaolei Yu, Dongsheng Lu, Donghua Wang, Zhenlu Liu, Zhimin Zhao
Forest Fire Visual Tracking with Mean Shift Method and Gaussian Mixture Model

A forest fire region in surveillance video is a non-rigid object of varying size and shape; it is complex and therefore difficult to track automatically. In this paper, a new tracking algorithm is proposed that combines the mean shift method with a Gaussian mixture model. Based on the moment features of the mean shift algorithm, a size-adaptive tracking window is employed to reflect changes in the object's size and shape in real time. Meanwhile, the Gaussian mixture model is used to obtain, for each pixel, the probability of belonging to the fire region. This probability is then used to update the weighting parameter of each pixel in the mean shift algorithm, which reduces the weight of non-fire pixels and increases the weight of fire pixels. With these techniques, the mean shift algorithm converges to the forest fire region faster and more accurately. The presented algorithm has been tested on real monitoring video clips, and the experimental results prove the efficiency of the new method.

Bo Cai, Lu Xiong, Jianhui Zhao
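The moment-based, size-adaptive mean shift step described above can be sketched in pure Python. This is an illustrative sketch rather than the paper's implementation: it assumes `prob` is a 2D grid of per-pixel fire probabilities (which the paper obtains from a Gaussian mixture model), and the window-sizing rule from the zeroth moment is a common heuristic, not the authors' exact formula.

```python
def mean_shift_track(prob, cx, cy, half_w, iters=20):
    """One tracking step: shift a square window toward the weighted
    centroid of per-pixel fire probabilities `prob` (2D list of floats),
    then adapt the window size from the zeroth moment (total weight)."""
    h, w = len(prob), len(prob[0])
    m00 = 0.0
    for _ in range(iters):
        # image moments over the current window, weighted by fire probability
        m00 = m10 = m01 = 0.0
        for y in range(max(0, cy - half_w), min(h, cy + half_w + 1)):
            for x in range(max(0, cx - half_w), min(w, cx + half_w + 1)):
                p = prob[y][x]
                m00 += p
                m10 += p * x
                m01 += p * y
        if m00 == 0:
            break  # no fire evidence in the window
        nx, ny = round(m10 / m00), round(m01 / m00)
        if (nx, ny) == (cx, cy):
            break  # converged
        cx, cy = nx, ny
    # size-adaptive window: side length proportional to sqrt(total weight)
    half_w = max(1, round(m00 ** 0.5))
    return cx, cy, half_w
```

Because non-fire pixels carry near-zero probability, they contribute little to the moments, which is exactly the down-weighting effect the abstract describes.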
A Novel Construction of Correlation-Based Image CAPTCHA with Random Walk

CAPTCHAs have been widely adopted throughout the World Wide Web to achieve network security by preventing malicious interruption or abuse of server resources. Existing text-based and image-based CAPTCHA techniques are not robust enough to resist sophisticated attacks using pattern recognition and machine learning. To overcome this challenge, we designed a new approach that constructs an image-based CAPTCHA using a random walk over images with correlated content, capitalizing on human knowledge of the relevance of images. The usability and robustness of the proposed scheme have been evaluated through both numerical analysis and empirical evidence. Early testing has shown it to be a promising approach to enhancing or replacing existing Web CAPTCHA techniques in the fight against bots.

Qian-qian Wu, Jian-jun Lang, Song-jie Wei, Mi-lin Ren, Erik Seidel
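One plausible reading of the random-walk construction is a walk over a graph whose nodes are images and whose edges connect semantically correlated images; the visited set then forms the challenge's correct answers. The sketch below is a hypothetical illustration under that assumption, not the scheme from the paper; the `graph` structure and challenge semantics are invented for illustration.

```python
import random

def random_walk_challenge(graph, steps, seed=None):
    """Build a candidate CAPTCHA challenge via a random walk on an
    image-correlation graph. `graph` maps an image id to a list of
    correlated neighbour ids (assumed data). The walk's visited set
    becomes the set of 'correct' images a human should recognise as
    related."""
    rng = random.Random(seed)
    start = rng.choice(sorted(graph))  # sorted for determinism under a seed
    visited = [start]
    node = start
    for _ in range(steps):
        neighbours = graph[node]
        if not neighbours:
            break  # dead end: no further correlated images
        node = rng.choice(neighbours)
        if node not in visited:
            visited.append(node)
    return visited
```

The security intuition is that following the walk requires judging image relevance, which humans do easily but automated classifiers find harder than recognising isolated distorted text.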
Matching Algorithm and Parallax Extraction Based on Binocular Stereo Vision

Using binocular stereoscopic vision and planar images, this paper details the process of obtaining 3D information for objects of interest and derives the world and pixel coordinates of any point on an object. The main contents of this article focus on camera calibration, image correction, stereo matching, and parallax (disparity) extraction. Furthermore, various algorithms and implementation methods are studied and analyzed. Finally, by comparing correction and stereo matching algorithms, the more effective correction and matching algorithms are identified.

Gang Li, Hansheng Song, Chan Li
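The disparity-extraction step can be illustrated with a naive block-matching sketch. This is a generic baseline, not the matching algorithm the paper settles on: it assumes a rectified image pair (so corresponding points lie on the same row) and uses a sum-of-absolute-differences cost over a small window.

```python
def disparity_map(left, right, block=1, max_disp=4):
    """Naive block-matching stereo: for each pixel of the left image,
    search along the same row of the right image (rectified pair) for
    the horizontal shift d minimising the sum of absolute differences
    over a (2*block+1)^2 window. Depth is then inversely proportional
    to the returned disparity."""
    h, w = len(left), len(left[0])
    disp = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            best_d, best_cost = 0, float("inf")
            for d in range(min(max_disp, x) + 1):
                cost = 0
                for dy in range(-block, block + 1):
                    for dx in range(-block, block + 1):
                        yy, xl, xr = y + dy, x + dx, x + dx - d
                        if 0 <= yy < h and 0 <= xl < w and 0 <= xr < w:
                            cost += abs(left[yy][xl] - right[yy][xr])
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y][x] = best_d
    return disp
```

Real systems replace this brute-force search with faster or more robust costs (e.g. semi-global matching), but the sketch shows why calibration and rectification matter: they reduce the 2D correspondence problem to a 1D search along rows.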
Wild Flame Detection Using Weight Adaptive Particle Filter from Monocular Video

Wild flame detection from monocular video is an important step in the monitoring of fire disasters. The flame region is complex and keeps varying, and is thus difficult to track automatically. A weight-adaptive particle filter algorithm is proposed in this paper to achieve flame detection with higher accuracy. The particle filter considers a color feature model, an edge feature model, and a texture feature model, and fuses them into a multi-feature model in which adaptive weighting parameters are defined and used for the features. For each particle corresponding to a tracked target region, the proportion of fire pixels in the area is computed with a Gaussian mixture model and then used as an additional adaptive parameter for that particle. The presented algorithm has been tested on real video clips, and the experimental results prove the efficiency of the novel detection method.

Bo Cai, Lu Xiong, Jianhui Zhao
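The weight-fusion and resampling steps of such a particle filter can be sketched as follows. This is a simplified illustration, not the paper's method: the per-feature likelihoods, the adaptive weights `alphas`, and the fire-pixel ratio are taken as given inputs here, whereas the paper estimates the fire ratio with a Gaussian mixture model and adapts the weights online.

```python
import random

def fuse_weights(color_l, edge_l, texture_l, alphas, fire_ratio):
    """Fuse per-feature likelihoods for one particle into a single
    weight. `alphas` are the adaptive feature weights (summing to 1);
    `fire_ratio` is the proportion of fire pixels in the particle's
    region and acts as an additional multiplicative adaptive factor."""
    a_c, a_e, a_t = alphas
    fused = a_c * color_l + a_e * edge_l + a_t * texture_l
    return fused * fire_ratio

def resample(particles, weights, seed=0):
    """Multinomial resampling: redraw particles with probability
    proportional to their fused weights, so particles on fire-like
    regions multiply and particles elsewhere die out."""
    rng = random.Random(seed)
    total = sum(weights)
    cum, acc = [], 0.0
    for w in weights:
        acc += w / total
        cum.append(acc)
    out = []
    for _ in particles:
        u = rng.random()
        for i, c in enumerate(cum):
            if u <= c:
                out.append(particles[i])
                break
    return out
```

The multiplicative `fire_ratio` term is what makes the filter "weight adaptive": a particle whose region contains no fire pixels gets weight zero regardless of how well its color, edge, or texture histograms match.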
Backmatter
Metadata
Title
Smart Innovations in Communication and Computational Sciences
Edited by
Dr. Bijaya Ketan Panigrahi
Dr. Munesh C. Trivedi
Dr. Krishn K. Mishra
Prof. Shailesh Tiwari
Dr. Pradeep Kumar Singh
Copyright year
2019
Publisher
Springer Singapore
Electronic ISBN
978-981-10-8971-8
Print ISBN
978-981-10-8970-1
DOI
https://doi.org/10.1007/978-981-10-8971-8