
2021 | Book

Advances in Machine Learning and Computational Intelligence

Proceedings of ICMLCI 2019

Edited by: Prof. Srikanta Patnaik, Prof. Xin-She Yang, Prof. Ishwar K. Sethi

Publisher: Springer Singapore

Book series: Algorithms for Intelligent Systems


About this book

This book gathers selected high-quality papers presented at the International Conference on Machine Learning and Computational Intelligence (ICMLCI-2019), jointly organized by Kunming University of Science and Technology and the Interscience Research Network, Bhubaneswar, India, from April 6 to 7, 2019. Addressing virtually all aspects of intelligent systems, soft computing and machine learning, the topics covered include: prediction; data mining; information retrieval; game playing; robotics; learning methods; pattern visualization; automated knowledge acquisition; fuzzy, stochastic and probabilistic computing; neural computing; big data; social networks and applications of soft computing in various areas.

Table of Contents

Frontmatter

Modeling and Optimization

Frontmatter
Intrusion Detection Using a Hybrid Sequential Model

A large amount of work has been done on the KDD 99 dataset, most of which involves hybrid anomaly and misuse detection models run in parallel with each other. To further classify intrusions, our approach to network intrusion detection uses two different anomaly detection models followed by misuse detection applied to the combined output of the previous step. The goal is to verify the anomalies flagged by the anomaly detection algorithms and determine whether they are actual intrusions or merely outliers relative to the trained normal profile, thereby reducing the number of false positives. We aim to detect a pattern in the intrusion technique itself, not to handle such intrusions. The intrusions were detected with a very high degree of accuracy.

Abhishek Sinha, Aditya Pandey, P. S. Aishwarya
Simulation and Analysis of the PV Arrays Connected to Buck–Boost Converters Using MPPT Technique by Implementing Incremental Conductance Algorithm and Integral Controller

Photovoltaic solar panels generate direct-current electricity from solar energy. The current–voltage rating and the performance of photovoltaic (PV) array modules change considerably with lighting, temperature, load conditions, time of day, amount of solar insolation, direction and orientation of the modules, shading, and geographic location. The modules operate efficiently only at one specific voltage, current, or wattage, so this variation in output power leads to loss of power efficiency and poor system performance. This paper implements a maximum power point tracking (MPPT) technique using buck–boost converters for PV arrays to increase output power efficiency in real time. An incremental conductance algorithm is employed to implement the MPPT technique, which makes the system more accurate and simpler to implement. Integral controllers are also used to make the maximum power tracking less error-prone. The system is simulated using MATLAB and Simulink.

D. S. Sumedha, R. Shreyas, Juthik B. V., Melisa Miranda
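The incremental conductance rule described in the abstract above can be sketched in a few lines. This is an illustrative, generic version of the algorithm (variable names and the reference-voltage step `dv_ref` are assumptions, not the paper's MATLAB/Simulink implementation):

```python
def inc_cond_step(v, i, v_prev, i_prev, v_ref, dv_ref=0.1):
    """One step of the incremental conductance MPPT rule.

    At the maximum power point dP/dV = 0, i.e. dI/dV = -I/V.
    Left of the MPP, dI/dV > -I/V (raise the reference voltage);
    right of it, dI/dV < -I/V (lower it)."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        # Voltage unchanged: react to irradiance changes via current alone.
        if di > 0:
            v_ref += dv_ref
        elif di < 0:
            v_ref -= dv_ref
    else:
        g_inc = di / dv          # incremental conductance dI/dV
        if g_inc > -i / v:       # operating left of the MPP
            v_ref += dv_ref
        elif g_inc < -i / v:     # operating right of the MPP
            v_ref -= dv_ref
    return v_ref
```

In a full system this updated reference voltage would drive the buck–boost converter's duty cycle, with the integral controller smoothing the tracking error.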
A New Congestion Control Algorithm for SCTP

Stream control transmission protocol (SCTP) is a transport layer protocol which inherits features of TCP and UDP. SCTP congestion control works in the same way as the other congestion control algorithms. In a wired environment, the primary cause of packet drop is congestion, but in the wireless environment, transmission impairments of the channel can also lead to packet drops. The current congestion control module of SCTP cannot discriminate between congestion loss and transmission loss. Hence, in both cases, the sender will reduce the traffic rate. This paper suggests a new congestion control algorithm for SCTP, which eliminates the unnecessary speed throttling due to transmission loss. Our idea is to modify the bandwidth estimation using SCTP SACK. We perform extensive simulations to show that the proposed SCTP congestion control algorithm outperforms the basic SCTP congestion control algorithm.

S. Syam Kumar, T. A. Sumesh
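The abstract above proposes modifying bandwidth estimation using SCTP SACK chunks. A hypothetical sketch of such an estimator (this is a generic Westwood-style EWMA over SACK arrivals, assumed for illustration; it is not the authors' algorithm):

```python
def update_bandwidth(bw_est, acked_bytes, interval, alpha=0.9):
    """EWMA bandwidth estimate, updated on each SACK arrival.

    bw_est:      previous estimate in bytes/second
    acked_bytes: bytes newly acknowledged by this SACK
    interval:    seconds elapsed since the previous SACK"""
    sample = acked_bytes / interval            # instantaneous ack rate
    return alpha * bw_est + (1 - alpha) * sample
```

A sender could compare such an estimate against its congestion window at loss time: if the estimated bandwidth still supports the current rate, the loss is more plausibly a transmission (wireless) loss than congestion, and the usual rate reduction can be skipped.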
RGNet: The Novel Framework to Model Linked ResearchGate Information into Network Using Hierarchical Data Rendering

In recent years, the use of academic social network sites (ASNS) for research-oriented activities has increased extensively. The accessibility, flexibility and availability of digitized information on diverse ASNS platforms offer researchers the possibility to boost their visibility and make an impact in the global research community. Among ASNS, ResearchGate (RG) is the most widely used. Information shared on RG is categorized into user demographics and user associations. This structured organization of information makes RG comparable to social media platforms such as Twitter, Facebook and LinkedIn, which are widely examined as connected networks in various applications. To model RG information as a network, this research proposes a novel hierarchical data rendering process to collect linked RG information. Correspondingly, it proposes a novel framework, RGNet, to model RG information, including user demographics and user associations, as a network, implementing the proposed hierarchical data rendering process. The outcomes reveal that linked RG information can be precisely represented, explored and analysed by leveraging network modelling and network theory, analogous to various impactful applications of social network analysis. Further, technological advancements provide competent mechanisms for network storage and network information retrieval for the constructed RG network. A graphical visualization of the constructed RG network is also presented.

Mitali Desai, Rupa G. Mehta, Dipti P. Rana
A New Approach for Momentum Particle Swarm Optimization

In this paper, a new approach to momentum particle swarm optimization is proposed. We design a particle swarm optimizer that converges faster than currently available momentum particle swarm optimizers. It simulates gradient descent with momentum and is inspired by the back-propagation algorithm with momentum in neural networks. The proposed optimizer consistently gives faster convergence on all the available test optimization functions (constrained or unconstrained). This paper contains graphs and results that summarize the algorithm's performance in contrast with the weighted particle swarm optimizer and the momentum particle swarm optimizer.

Rohan Mohapatra, Rohan R. Talesara, Saloni Govil, Snehanshu Saha, Soma S. Dhavala, TSB Sudarshan
Neural Networks Modeling Based on Recent Global Optimization Techniques

This paper presents an integrated artificial neural network (ANN) model with different global optimization techniques. The techniques studied are the genetic algorithm (GA), grey wolf optimization (GWO), whale optimization (WOA), dragonfly optimization (DA) and grasshopper optimization (GOA). These five techniques are compared on performance and computation time. All techniques are tested on the same initial population, and the same experiment design (number of trials, iterations and individuals) is used throughout. Among them, ANN-GA and ANN-GWO show comparable performance, better than ANN-WOA, ANN-DA and ANN-GOA. The performance of ANN-GA can be improved by increasing the number of solutions and iterations. ANN-GWO showed competitive performance with a moderate number of solutions and iterations, and converges faster to solutions, which makes it more practical for fast, automatic processing. ANN-GA shows stable and robust performance relative to ANN-GWO.

Anwar Jarndal, Sadeque Hamdan, Sanaa Muhaureq, Maamar Bettayeb

Machine Learning Techniques

Frontmatter
Network Intrusion Detection Model Using One-Class Support Vector Machine

Network intrusion detection is the process of monitoring network traffic for abnormal behaviors and issuing alerts when suspicious activity is discovered. This paper presents a new network intrusion detection approach that trains on normal network traffic data and searches for anomalous behaviors that deviate from the normal model. The proposed approach applies the one-class support vector machine (OCSVM) algorithm to detect anomalous activities in network traffic. The experiment was done using a dataset of real network traffic collected with the modern honey network (MHN).

Ahmed M. Mahfouz, Abdullah Abuhussein, Deepak Venugopal, Sajjan G. Shiva
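The train-on-normal-only idea above can be sketched with scikit-learn's OCSVM. The synthetic two-feature "traffic" data below is an assumption for illustration; the paper's MHN dataset and feature set are not reproduced here:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Train only on normal traffic features (e.g. packets/s, mean packet size).
normal = rng.normal(loc=[100.0, 500.0], scale=[5.0, 20.0], size=(500, 2))
model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal)

# Score unseen traffic: +1 = consistent with the normal model, -1 = anomaly.
test = np.array([[101.0, 505.0],      # close to the training distribution
                 [400.0, 9000.0]])    # far from anything seen in training
pred = model.predict(test)
```

The `nu` parameter bounds the fraction of training points treated as outliers, which is how the detector's sensitivity is tuned without ever seeing attack traffic.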
Query Performance Analysis Tool for Distributed Systems

The increasing demand for fast interactive response times makes query performance one of the central problems of distributed systems, and a sophisticated design is necessary to ensure high performance. Software Performance Engineering (SPE) is an approach that builds and solves quantitative performance models to assess system performance early in the software development lifecycle. The most common approach for solving these performance models is simulation, a powerful tool for modeling and analyzing complex distributed systems for which a closed-form analytical solution is extremely difficult to obtain; simulation modeling is used extensively in industry for evaluating alternative system architectures. If the performance of a distributed system is found to be unacceptable at acceptance testing, the result can be very expensive redesign, delayed delivery or, in the worst case, complete non-use of the system. Accordingly, we have developed a simulation tool, the Distributed System Performance Analysis Tool (DSPAT), to support the Software Performance Engineering practice of estimating and analyzing performance in the early design phases of distributed systems development.

Madhu Bhan, K. Rajanikanth
A Robust Multiple Moving Vehicle Tracking for Intelligent Transportation System

The proposed framework accurately counts detected vehicles and classifies their color from a real-time video input. The methodology detects moving vehicles using background subtraction with KNN, a binary mask, and morphological operations such as erosion and dilation, which efficiently eliminate dynamic shadows and extract the vehicle region. A unique ID is assigned to each detected vehicle to avoid counting it repeatedly as it passes through the counting zone. The color of each vehicle is identified by estimating the similarity between trained and test samples with a k-nearest neighbors machine learning classifier; the RGB intensity with the maximum frequency of occurrence is taken as the color of the vehicle. Experimental results show that the proposed system outperforms state-of-the-art methods such as mixture of Gaussians 2 (MOG2) and Godbehere–Matsukawa–Goldberg (GMG), which produce false detections due to inefficient shadow removal; the proposed system achieves an overall efficiency of 94% while consuming less computational time.

N. Kavitha, D. N. Chandrappa
Bug Priority Assessment in Cross-Project Context Using Entropy-Based Measure

Software users report bugs on the bug tracking system in a distributed environment, with different levels of understanding and knowledge about the software. As a result, software bug repository data grow day by day with noise and uncertainty in them. The noise and uncertainty present in bug summaries need to be handled so that they do not affect the performance of learning strategies for predicting bug attributes and fix times. Bug prioritization is the process of deciding the sequence in which bugs are to be fixed; wrong prioritization leaves important bugs unresolved and delays the release of the software, affecting its quality and evolution. Bug priority prediction requires historical data for training the classifiers. However, such historical data are not always available in practice. In such circumstances, designing prediction models with training data from other projects is the solution; predicting bug priority using training and testing data from two different projects is called cross-project bug priority prediction. We use Shannon entropy to measure the uncertainty in the bug summary, in addition to bug severity and summary weight, for bug priority prediction. In this paper, we apply different machine learning classifiers to predict the priority of a reported bug in the cross-project context while handling this uncertainty. Results show that the proposed entropy-based cross-project bug priority prediction improves on existing summary-based cross-project bug priority prediction for newly reported bugs.

Meera Sharma, Madhu Kumari, V. B. Singh
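The Shannon entropy feature mentioned above can be computed directly from a bug summary's term distribution. A minimal sketch (whitespace tokenization is an assumption; the paper's exact preprocessing is not shown):

```python
import math
from collections import Counter

def summary_entropy(summary):
    """Shannon entropy (bits) of the term distribution of a bug summary,
    used as a measure of the uncertainty in the report's text."""
    terms = summary.lower().split()
    counts = Counter(terms)
    total = len(terms)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

A summary dominated by one repeated term has entropy near zero, while a summary of all-distinct terms has entropy log2 of its length; the value can then be fed to a classifier alongside severity and summary weight.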
Internet of Things Security Using Machine Learning

The Internet of Things (IoT) has grown rapidly over the last decade; the number of interconnected smart devices has already exceeded the world's population, and the data generated by these devices are huge. The IoT is used in applications such as smart monitoring, healthcare systems and smart homes, where sensitive information is shared among users. Security and privacy are among the major challenges in IoT applications. Traditional security and cryptographic techniques have been tried to address some of the issues in IoT systems, but due to resource constraints, IoT devices remain more vulnerable to the outside world, and processing and computing sensitive data is one of the challenges in IoT applications. In this work, the authors identify the security issues present in IoT systems and then show how machine learning can address some of them. From the analysis, the authors found that integrating machine learning with the IoT makes the system more secure and processing more efficient.

Bhabendu Kumar Mohanta, Debasish Jena
Churn Prediction and Retention in Banking, Telecom and IT Sectors Using Machine Learning Techniques

Customer churn prevention is one of the deciding factors in maximizing the revenues of any organization. Also known as customer attrition, churn occurs when customers stop using a company's products or services. In this paper, we predict customer churn beforehand so that, with the help of exploratory data analysis, proper retention steps can be taken and customized offers made for the targeted customers. For churn prediction, our implementation consists of a comparative analysis of four algorithmic models, namely logistic regression, random forest, SVM and XGBoost, on three different domains, namely banking, telecom and IT. The motivation for this comparative analysis is that few research works compare the performance of various algorithms across different domains. We also develop various retention strategies with the help of exploratory data analysis.

Himani Jain, Garima Yadav, R. Manoov
Reinforcement Learning-Based Resource Allocation for Adaptive Transmission and Retransmission Scheme for URLLC in 5G

Ultra-reliable low-latency communication (URLLC) is one of the service categories envisioned by fifth-generation (5G) wireless systems for supporting mission-critical applications. These applications require the transmission of short data packets with a high level of reliability and a hard latency bound. Achieving the targeted reliability in URLLC is challenging because of dynamic channel conditions and network load, so retransmission of lost data packets is essential for increasing the reliability of data. In turn, optimized radio resource allocation for these (re)transmission schemes is required to satisfy URLLC constraints. Hence, we propose reinforcement learning-based resource allocation for an adaptive transmission and retransmission scheme, and demonstrate the use of machine learning in optimizing resource usage under variable network load conditions.

Annapurna Pradhan, Susmita Das
Deep Learning-Based Ship Detection in Remote Sensing Imagery Using TensorFlow

Ship detection is a significant problem that has recently been researched extensively due to its wide applicability: it helps address several issues that endanger ocean security. Ship detection is an instance of the object segmentation and detection problem, which requires computer vision to preprocess and process satellite images. Humans and computers treat this process differently, which makes it complex, because the latter treats images as matrices of numbers. Hence, this paper proposes steps to solve the problem using deep learning and a convolutional neural network, working toward excellent accuracy compared to the state-of-the-art methods. This not only helps detect objects against a homogeneous background but also aids detection and processing against heterogeneous backgrounds. The paper achieves this using a Keras model, which accurately analyses satellite images and can precisely detect ships in the mid-seas. The dataset, collected from the Kaggle Ship Detection Challenge, consists of image chips extracted from Planet satellite imagery collected over the San Francisco Bay area. This analysis would help control the exploitation of the sea's resources: guaranteeing sustainable sea exploitation for activities such as fishing, coral extraction and artifact exploration is essential.

Atithee Apoorva, Gopal Krishna Mishra, Rashmi Ranjan Sahoo, Sourav Kumar Bhoi, Chittaranjan Mallick
Modern Approach for Loan Sanctioning in Banks Using Machine Learning

As people's needs increase, the demand for bank loans grows higher every day. Usually, banks process an applicant's loan after verifying and checking eligibility, which is a tough and time-consuming process, and in some cases applicants default on payment, resulting in loss of capital for banks. A machine learning approach is an ideal way to reduce human effort and support effective decision making in the loan approval process, by applying classification algorithms to predict which applicants deserve loan approval. In this paper, we build a system that constructs a model by training on the records and approval results of previous loan applicants. Model building is done with classification algorithms on the basis of predictive features that categorize an outcome as approve or disapprove. We found that the logistic regression model performs best among the models compared and can be used as a predictive model, reducing the risk in selecting applicants likely to repay and saving considerable bank effort and assets. Further, this model can be implemented in the banking sector to allow faster processing of loans.

Golak Bihari Rath, Debasish Das, BiswaRanjan Acharya
Machine Learning for Customer Segmentation Through Bibliometric Approach

In the age of information science and automation, the 'technological revolution' and the 'machine era' have gained significant attention from researchers in every quarter of life. Applying machine learning techniques with practical precision has become a perennial issue in every discipline. Optimizing marketing efficiency with the assistance of computational applications has improved accuracy and transparency, helping organizations achieve competitive advantage, enhance efficiency and gain market advancement. In this bibliometric study, the authors analyze the literature published during 2009–2019 containing relevant terms, and the major contributing sources, in the area of 'customer segmentation' as influenced by 'machine learning applications.' Data were obtained from the Scopus database; a total of 1440 research articles were analyzed using the VOSviewer tool. The study reveals the co-occurrence of keywords and the bibliometric coupling of major contributing sources. The findings suggest that the most frequently occurring key terms related to machine learning have more influence and link strength than terms related to customer segmentation, based on analysis of significant sources in terms of citations received and number of contributions made.

Lopamudra Behera, Pragyan Nanda, Bhagyashree Mohanta, Rojalin Behera, Srikanta Patnaik
Online Hostel Management System Using Hybridized Techniques of Random Forest Algorithm and Long Short-Term Memory

Hostel management is a Web application created for managing various hostel activities; it limits physical labor and makes jobs much easier for all users of the app. Many universities still use traditional procedures for storing data, which hurts the efficiency of the institution. This Web application provides users a friendly GUI that makes all hostel-related activities easier than before. Machine learning is used in every module, such as registration, token booking, room allotment and the leave form module, to make these processes quicker and support multiple users at the same time. Machine learning enables systems to improve their functionality automatically through learning; such systems use datasets to train themselves, without human assistance or intervention. Supervised machine learning techniques use a predefined training dataset, while unsupervised techniques explore datasets to infer structure from unstructured data. This paper focuses on identifying the drawbacks of the existing system, which leads to the design of a computerized GUI-oriented system; an LSTM is used to enhance its effectiveness. The random forest algorithm is a supervised learning method for tasks such as classification and prediction that constructs multiple decision trees at training time and outputs the class that is the mode of the classes (classification) or the mean prediction of the individual trees (regression). We use the random forest algorithm because it does not overfit for large datasets, which suits our case: it combines the results of multiple decision trees to avoid overfitting, is flexible, provides highly accurate results, and does not require scaling of the data.

S. Suriya, G. Meenakshi Sundaram, R. Abhishek, A. B. Ajay Vignesh
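The random forest combination rule stated above (mode of the trees' classes, mean of their predictions) is simple enough to sketch directly. The vote labels below are hypothetical illustrations, not the system's actual classes:

```python
from collections import Counter

def forest_classify(tree_votes):
    """Classification: the forest's output is the mode of the
    individual trees' predicted classes (majority vote)."""
    return Counter(tree_votes).most_common(1)[0][0]

def forest_regress(tree_preds):
    """Regression: the forest's output is the mean of the
    individual trees' predictions."""
    return sum(tree_preds) / len(tree_preds)
```

Averaging over many decorrelated trees is what gives the forest its resistance to overfitting relative to any single decision tree.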
Improving Accuracy of Software Estimation Using Stacking Ensemble Method

In software project management, effort estimation plays an important role in predicting time and budget in the initial stages, to help with software project planning. Software estimation is deemed accurate if the project estimator has a good overview of the project issues and can take the project to completion within budget, effort and time while satisfying the prescribed quality. Traditional and parametric estimation measures have been mostly inaccurate due to bias and subjective estimation. Data mining and machine learning methods have proved effective in tackling human bias and subjectivity when the data are preprocessed properly and relevant features are chosen. This paper deals with improving the accuracy of software estimation using various data preprocessing techniques, feature selection and an ensemble of diverse machine learning algorithms.

P. Sampath Kumar, R. Venkatesan
EEG-Based Automated Detection of Schizophrenia Using Long Short-Term Memory (LSTM) Network

Schizophrenia, a severe long-term mental disorder, can be found in every society and culture around the globe. This paper focuses on detecting schizophrenia by employing long short-term memory (LSTM), a deep learning technique, with features extracted from the EEG signal. The nonlinear features of Katz fractal dimension (KFD) and approximate entropy (ApEn), along with the time-domain measure of variance, are calculated from the EEG signal. An LSTM architecture with four hidden LSTM layers of 32 neurons each is used for the detection. A sample of 6790 feature vectors was calculated from the EEG signals of 14 schizophrenia patients and 14 healthy controls, recorded under a resting, eyes-closed condition. The model was trained with 6000 feature vectors and tested with the remaining 790. The LSTM model could distinguish people suffering from schizophrenia from the controls with an accuracy of 99.0%, validating the detection efficiency of the LSTM network over machine learning methods such as the feedforward neural network (FFNN) and support vector machine (SVM).

A. Nikhil Chandran, Karthik Sreekumar, D. P. Subha
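One of the features named above, the Katz fractal dimension, has a short closed form. A minimal sketch for a 1-D signal (unit sampling is assumed; the paper's exact windowing is not shown):

```python
import math

def katz_fd(x):
    """Katz fractal dimension of a 1-D signal.

    L = total curve length (sum of successive absolute differences),
    d = maximum distance from the first sample, n = number of steps;
    KFD = log10(n) / (log10(d/L) + log10(n))."""
    n = len(x) - 1
    L = sum(abs(x[i + 1] - x[i]) for i in range(n))
    d = max(abs(xi - x[0]) for xi in x[1:])
    return math.log10(n) / (math.log10(d / L) + math.log10(n))
```

A straight-line signal gives KFD = 1, and increasingly irregular waveforms push the value above 1, which is why the measure helps separate EEG signal classes.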
Prediction of University Examination Results with Machine Learning Approach

Bloom’s taxonomy is a set of three hierarchical models, covering the cognitive, affective and sensory domains, used to classify educational learning into levels of complexity and specificity; it contains many different knowledge levels. Predicting the average marks before publishing a question paper indicates the complexity of the paper: if the predicted average marks are low, the complexity of the question set is high. In this paper, the average marks that can be obtained are predicted based on the difficulty level of the questions. This is done with a machine learning algorithm using the gradient descent and normal equation methods. The actual and predicted average marks are compared and tested with an RMS metric to determine the accuracy of the prediction.

S. Karthik Viswanath, P. B. Mohanram, Krijeshan Gowthaman, Chamundeswari Arumugam
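The normal equation method named above solves the least-squares fit in closed form. A minimal sketch specialised to one feature (hypothetical difficulty scores; the paper's dataset and feature set are not reproduced):

```python
def fit_normal_equation(difficulty, avg_marks):
    """Least-squares fit of avg_marks = theta0 + theta1 * difficulty
    via the normal equation, specialised to a single feature."""
    n = len(difficulty)
    mx = sum(difficulty) / n
    my = sum(avg_marks) / n
    sxx = sum((x - mx) ** 2 for x in difficulty)
    sxy = sum((x - mx) * (y - my) for x, y in zip(difficulty, avg_marks))
    theta1 = sxy / sxx               # slope: marks change per difficulty unit
    theta0 = my - theta1 * mx        # intercept
    return theta0, theta1

def rmse(actual, predicted):
    """Root-mean-square error between actual and predicted marks."""
    return (sum((a - p) ** 2 for a, p in zip(actual, predicted))
            / len(actual)) ** 0.5
```

Unlike gradient descent, the normal equation needs no learning rate or iteration count, which is why the paper can compare the two methods on the same prediction task.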
Surveillance System to Provide Secured Gait Signatures in Multi-view Variations Using Deep Learning

These days, security plays the greatest role in providing safe platforms, and surveillance becomes essential for providing accurate results in case of a security breach. Gait recognition is a biometric technique that needs no human intervention; through it, a human can be uniquely identified. Gait is defined as the way a human walks (human locomotion), and it can be used as a biometric identity because the manner in which each person walks uniquely characterizes that person. There are, however, many challenges, such as variation in viewpoint, clothing variations and carrying conditions. A novel approach using deep learning is proposed to address these challenges: a multi-view gait-based recognition system that is robust with a single camera and subjects walking at angles of view from 0° to 180°. A 3D CNN-based model is used to obtain spatio-temporal features, and the computational capability of the algorithm is enhanced via a transfer learning mechanism. The experiments use the OU-ISIR and CASIA-B datasets. The obtained features are passed into an LSTM network to capture the long-term dependencies in the gait sequence. The network is trained and tested, and the experimental results show that the proposed technique outperforms state-of-the-art techniques.

Anubha Parashar, Apoorva Parashar, Vidyadhar Aski, Rajveer Singh Shekhawat
Usage of Multilingual Indexing for Retrieving the Information in Multiple Language

There is a solid requirement for real-time indexing of enormous volumes of data streaming at 10 GB/s or more. This data must be searched for patterns, and the query results are time-critical in fields as diverse as security surveillance, financial services including stock trading, monitoring the critical health status of patients, and climate warning systems. Here, the index will be required to age off in a short time and will thus be of limited size. Nevertheless, such scenarios cannot tolerate any violation of indexing latency or strict search response times. Likewise, future massively parallel (multicore) architectures with storage-class memories will enable high-speed in-memory real-time indexing, where the index can be stored entirely in high-capacity storage-class memory. As the Web grows, the number of documents on the Web also grows, so index size will grow as well. Our approach is to partition a single large search index into smaller partitions, each assigned to a node in the cluster. When a search request arrives, each node in the cluster performs a search on its local index for each query term and sends the result back to a central machine, which combines the search results from all nodes in the cluster and then sends them to the user.

A. R. Chayapathi, G. Sunil Kumar, J. Thriveni, K. R. Venugopal
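The partition-then-merge scheme described above can be sketched with a toy inverted index. The shard API and AND-query semantics below are illustrative assumptions, not the authors' system:

```python
from collections import defaultdict

class IndexShard:
    """One cluster node's partition of the inverted index."""
    def __init__(self):
        self.postings = defaultdict(set)        # term -> set of doc ids

    def add(self, doc_id, text):
        for term in text.lower().split():
            self.postings[term].add(doc_id)

    def search(self, terms):
        """Docs on this shard containing every query term (AND semantics)."""
        results = [self.postings.get(t, set()) for t in terms]
        return set.intersection(*results) if results else set()

def distributed_search(shards, query):
    """Central machine: fan the query out to every shard, then merge
    (union) the partial results before returning them to the user."""
    terms = query.lower().split()
    merged = set()
    for shard in shards:
        merged |= shard.search(terms)
    return sorted(merged)
```

Because documents are partitioned across shards rather than terms, each node answers the full query over its own subset, and the central merge is a cheap union.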
Using Bootstrap Aggregating Equipped with SVM and T-SNE Classifiers for Cryptography

With the application of machine learning in almost every field comes the need for systems secure enough to protect enormous amounts of data from leaking. There has also been a rise in new attack methods that use machine learning against the system. Security sometimes comes at the cost of accuracy, and historically there has always been a trade-off between artificial intelligence and data security. Thus, a system must be created that can not only use artificial intelligence and machine learning in new and improved ways but also protect the data it works on. This paper attempts to reverse engineer one such attack in order to create software that can double as an intrusion detection system. The approach taken combines two separate methods with an ensemble classifier, rather than using each approach separately, thereby giving the system enough time to recognize the attack.

Neeraja Narayanswamy, Siddhaling Urolagin
Application of Apriori Algorithm on Examination Scores

As an interdisciplinary discipline, data mining is popular in the education area, especially for evaluating students’ learning performance. It focuses on analyzing educational assessment data to develop models for improving students’ learning experience and enhancing institutional effectiveness; data mining is therefore helpful to academic institutions imparting high-quality education to their learners. Applying data mining in education enables a better understanding of how a student is progressing and can identify ways to improve educational outcomes. This paper discusses a specific application of the Apriori algorithm (popular in data mining) to generating association rules among the scores of two exam sessions taken in succession. It revealed strong associations among the questions of the examination that were not obvious at first.

M. A. Anwar, Sayed Sayeed Ahmed, Mohammad A. U. Khan
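A small slice of Apriori, limited to 1- and 2-itemsets, is enough to illustrate how association rules between two exam sessions arise. The score-band labels and thresholds below are hypothetical, not the paper's data:

```python
from itertools import combinations

def apriori_pairs(transactions, min_support, min_confidence):
    """Frequent 1- and 2-itemsets plus pairwise association rules.

    Returns (antecedent, consequent, confidence) triples whose itemset
    support and rule confidence meet the given thresholds."""
    n = len(transactions)
    # Pass 1: count single items and keep the frequent ones.
    counts = {}
    for t in transactions:
        for item in set(t):
            counts[item] = counts.get(item, 0) + 1
    frequent = {i for i, c in counts.items() if c / n >= min_support}
    # Pass 2: candidate pairs built only from frequent items (Apriori pruning).
    pair_counts = {}
    for t in transactions:
        for a, b in combinations(sorted(set(t) & frequent), 2):
            pair_counts[(a, b)] = pair_counts.get((a, b), 0) + 1
    rules = []
    for (a, b), c in pair_counts.items():
        if c / n >= min_support:
            if c / counts[a] >= min_confidence:
                rules.append((a, b, c / counts[a]))
            if c / counts[b] >= min_confidence:
                rules.append((b, a, c / counts[b]))
    return rules

# Hypothetical score bands for two exam sessions taken in succession.
exams = [("s1_high", "s2_high"), ("s1_high", "s2_high"),
         ("s1_low", "s2_low"), ("s1_high", "s2_high"), ("s1_low", "s2_low")]
```

Running `apriori_pairs(exams, 0.3, 0.8)` on this toy data yields rules such as "high score in session 1 implies high score in session 2", which is exactly the kind of cross-session association the paper mines.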
Small Embed Cross-validated JPEG Steganalysis in Spatial and Transform Domain Using SVM

Steganalysis recognizes the presence of a hidden message in an artefact. In this paper, the analysis is done statistically, by extracting features that change during embedding. A machine learning approach is employed, using a classifier to distinguish stego images from cover images; SVM is used as the classifier. A comparative study is done using steganographic schemes from the spatial and transform domains, namely LSB matching and F5. Six different kernel functions and four different samplings are used for classification. The percentage embedding is as low as 10%.

Deepa D. Shankar, Adresya Suresh Azhakath
Performance Analysis of Fruits Classification System Using Deep Learning Techniques

In the agriculture sector, identifying and counting fruits on trees plays an important role in crop estimation. At present, fruits and vegetables are counted manually, which is time consuming and labor intensive. An automated fruit counting approach can help a crop management system by providing valuable information for forecasting yields or planning harvesting schedules to attain higher productivity. This work presents an automated fruit maturity detection and fruit counting system that uses image processing and computer vision to identify and recognize fruits in real time. The research introduces an integrated and effective technique for fruit detection and recognition running on an embedded system with a Movidius neural compute stick (NCS) at a high frame rate (FPS). Using deep learning techniques and OpenCV libraries, the proposed system detects fruit in live video streams, applying the MobileNet single-shot detector (SSD) algorithm trained with the Caffe framework. A Raspberry Pi 3 is used to implement the program, tracking and recording images and identifying and recognizing the fruits, while the Movidius NCS enables a high FPS on the Raspberry Pi 3. The proposed method introduces several improvements, such as multi-scale features, default boxes and depthwise separable convolutions. These enrichments allow excellent accuracy in fruit recognition.

L. Rajasekar, D. Sharmila, Madhumithaa Rameshkumar, Balram Singh Yuwaraj
Early Detection of Locust Swarms Using Deep Learning

Locust outbreaks have driven a surge in pre-harvest crop losses around the globe, especially in Sub-Saharan Africa, the Middle East and parts of South-Eastern Asia. They form due to intermittent geographical, biological and meteorological factors. Current methods of forecasting locust swarms involve periodic inspections that are manually conducted, which can be a time-consuming and laborious task. Satellite forecasting techniques have recently been implemented that improve the forecasting of swarm formation based on collected weather and soil data. However, locust plagues continue to form in certain regions, causing large-scale damage to crop produce and creating discomfort among the inhabitants of these vulnerable areas. Recent improvements in the effectiveness of deep learning and computer vision algorithms can be used to spot the formation of such swarms more efficiently, so that appropriate measures can be taken sooner. A model that makes use of convolutional neural networks to detect the presence of locusts in a given setting and output a count of the insects present in the area is proposed. The results show that the proposed deep learning implementation is up to 83% accurate and can be improved further.

Karthika Suresh Kumar, Aamer Abdul Rahman
Prediction of lncRNA-Cancer Association Using Topic Model on Graphs

Long non-coding RNAs (lncRNA) are non-coding transcripts with length greater than 200 nucleotides, which play a crucial role in genetic and epigenetic regulation of molecular functions. Many studies have confirmed the aberrant expression of some lncRNAs in various tumors. An integrated analysis of lncRNA interactions with proteins, micro-RNAs (miRNA) and diseases is central to determining the cancer associations of lncRNAs. Representing these interactions as a graph is an effective modeling technique for understanding their functional characteristics. In this work, a network for each lncRNA is prepared by considering its interactions with proteins, miRNAs and different cancers, to understand the regulatory role of lncRNAs in cancer. A latent Dirichlet allocation (LDA)-based topic model was applied to these interaction graphs to annotate lncRNAs as oncogenic or tumor suppressive. This paper links LDA and graph-structured biological data by proposing an analogy of bag of words (BoW) to graphs of words (GoW), where node weights replace the term frequencies of documents in BoW. The proposed method is compared with other classic clustering algorithms and gives an 11% improvement in F-score.

Manu Madhavan, Reshma Stephen, G Gopakumar
Sales Prediction Using Linear and KNN Regression

An important part of present-day business intelligence is sales prediction. Sales prediction can be termed a complex problem, and it gets harder with missing data values and the presence of outliers. Sales prediction is more of a regression problem than a time series one. Using supervised machine learning algorithms, we can find complicated patterns in the sales dynamics, including various risk variables. Sales forecasting plays a huge role in a company's success: an accurate sales prediction model can help businesses find potential risks and make better-informed decisions. This paper analyzes the Rossmann sales data using predictive models such as linear regression and KNN regression. Accurate sales prediction can benefit a business by saving money on excess inventory, planning properly for the future and increasing profit. Thus, it is also important to evaluate the models using statistical measures like RMSE and MAPE. The results are used to understand which model is more suitable for sales prediction.
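KNN regression and the RMSE evaluation mentioned above can be sketched in a few lines of plain Python. The toy features, target values and k = 3 below are assumptions for illustration; the paper's Rossmann features and tuning are not reproduced:

```python
import math

def knn_regress(train_X, train_y, x, k=3):
    """Predict y for x as the mean target of the k nearest training points."""
    order = sorted(range(len(train_X)), key=lambda i: math.dist(train_X[i], x))
    nearest = order[:k]
    return sum(train_y[i] for i in nearest) / k

def rmse(y_true, y_pred):
    """Root-mean-square error between true and predicted targets."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
                     / len(y_true))

# Hypothetical features: (promo_flag, day_of_week); target: daily sales
X = [(0, 1), (0, 2), (1, 1), (1, 2), (0, 3), (1, 3)]
y = [100, 110, 150, 160, 120, 170]
pred = knn_regress(X, y, (1, 2), k=3)
```

A real pipeline would scale the features first, since KNN's Euclidean distance is sensitive to feature magnitudes.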

Shreya Kohli, Gracia Tabitha Godwin, Siddhaling Urolagin
Image Captioning Using Gated Recurrent Units

With the introduction of deep neural networks, there has been an increase in their usage due to their high performance and their ability to deliver astounding results when classifying and predicting images, audio and text. Image captioning allows machines to view images the way humans do and provides them with the knowledge and power to understand the visual objects presented to them. Captioning images requires both a convolutional neural network (CNN) and a recurrent neural network (RNN). The method followed here utilizes the gated recurrent unit (GRU) variant of the RNN instead of the commonly used long short-term memory (LSTM). The FLICKR8K dataset is used for accomplishing the task. The results are then evaluated using the bilingual evaluation understudy (BLEU) metric to compare the generated captions to human reference translations. It is observed that the GRU method surpasses the other methods on the BLEU-4 metric.

Jagadish Nayak, Yatharth Kher, Sarthak Sethi
A BERT-Based Question Representation for Improved Question Retrieval in Community Question Answering Systems

Community question answering (CQA) services have become a prominent place for seeking answers from experts and sharing knowledge in any domain. Retrieving semantically related historical questions for a new query is a critical task in CQA to eliminate duplicate questions and to avoid indefinite waiting time for responses. One major challenge in question retrieval is the lexical gap between the new question and the already-answered question in the archive. This problem can be eliminated to a great extent by giving proper embeddings to questions. The traditional bag-of-words (BOW) representation for text is being replaced by semantic embedding models like word2vec, GloVe and FastText, and by the newly released Bidirectional Encoder Representations from Transformers (BERT) model. In this paper, we propose a modified BERT embedding using the topic information obtained by LDA on top of questions preprocessed using the RAKE keyword extraction algorithm. We perform question retrieval based on cosine similarity of question vectors and evaluate the accuracy on the Quora question pair dataset. Results show that the proposed question embedding improves question retrieval performance compared with competing models.
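Once every question has an embedding, retrieval reduces to ranking archived vectors by cosine similarity to the query vector, which can be sketched directly. The 4-dimensional vectors below are hypothetical stand-ins for real BERT embeddings, and the archive texts are invented:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve(query_vec, archive):
    """Rank (text, vector) pairs by similarity to the query, best first."""
    return sorted(archive, key=lambda item: cosine(query_vec, item[1]),
                  reverse=True)

archive = [("How do I reset my password?", [0.9, 0.1, 0.0, 0.1]),
           ("Best pizza in town?",         [0.0, 0.8, 0.6, 0.0]),
           ("Forgot password, help!",      [0.8, 0.2, 0.1, 0.1])]
query = [0.85, 0.15, 0.05, 0.1]
ranked = retrieve(query, archive)
```

In practice an approximate nearest-neighbor index replaces the full sort once the archive grows beyond a few thousand questions.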

C. M. Suneera, Jay Prakash
Kinect-Based Outdoor Navigation for the Visually Challenged Using Deep Learning

In this paper, we propose an outdoor navigation system intended for people with visual impairments. Our system makes use of a Microsoft Kinect which is reconfigured for mobile use with a portable power supply. An object detection model was trained to detect commonly found obstacles on roads, namely cars, pedestrians, bicycles and motorcycles, based on the inputs from the Kinect. In the process, we select an optimal object detection model for an embedded environment by carrying out extensive training, benchmarking and experimentation on three single-shot detection (SSD) models with different feature extractors and a RetinaNet model, while also applying quantization techniques to obtain real-time performance with relatively minor losses in accuracy. The detections from the network are leveraged to calculate the distance between the person and the detected object, using the depth map from the Kinect, and the information is relayed to the user by a text-to-speech system through Bluetooth earphones paired to the system. The entire setup is constructed on a white cane, where a Raspberry Pi 3B is connected to the Kinect for reading the input frames and performing onboard processing. The results of testing the model on outdoor footage indicate its viability as a tool for outdoor navigation.

Anand Subramanian, N. Venkateswaran, W. Jino Hans
Prediction of Stock Market Prices Using Recurrent Neural Network—Long Short-Term Memory

The prediction of stock market prices is a complex and tedious task that usually requires human assistance. Because of the highly interdependent nature of stock prices, traditional batch processing methods cannot be utilized effectively for stock market analysis. Hence, this research work focuses on predicting the closing price of a stock using long short-term memory (LSTM), taking the open, high and low prices on a specific day as the features for prediction. Stocks of five top technology companies over two years have been analyzed. The work also discusses how the performance of the LSTM model varies with different optimizers and suggests the best optimizer for stock price prediction.

Haritha Harikrishnan, Siddhaling Urolagin
Identifying Exoplanets Using Deep Learning and Predicting Their Likelihood of Habitability

In the last decade, exoplanet study has taken a huge leap, not only in discovering exoplanets but also in studying the potential habitability of these planets. Much of this credit is due to the Kepler Space Telescope, launched on March 7, 2009, by NASA. Humanity's century-old quest to find extra-terrestrial life was reinvigorated by the launch of the Kepler Space Mission, whose primary scientific objective was to explore the structure and diversity of planetary systems. However, this study soon proved more challenging than expected: many of these planets were at the limit of the mission's detection sensitivity, and accurately determining their occurrence rate required human intervention, making classification a slow process. This paper presents a method to automatically classify transit signals as exoplanets or non-exoplanets based on different methods of signal preprocessing and a trained deep learning model. For signal preprocessing, the paper compares the performance of filtering techniques such as Savitzky–Golay filters and Gaussian filters; the Gaussian filter is coupled with normalization and standardization steps. We then train a convolutional neural network model to predict whether a given signal is an exoplanet or caused by astrophysical or instrumental phenomena. Finally, to comment on the habitability of the identified planets, we study their planetary characteristics using the Naïve Bayes algorithm.
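The Gaussian filtering step applied to a light curve can be sketched minimally as a 1-D convolution with a normalized Gaussian kernel. The kernel width, sigma and the toy flux series are assumptions; the paper's actual preprocessing pipeline is not reproduced:

```python
import math

def gaussian_kernel(sigma, radius):
    """Discrete, normalized 1-D Gaussian kernel of width 2*radius + 1."""
    w = [math.exp(-(i * i) / (2 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(w)
    return [x / s for x in w]

def smooth(signal, sigma=1.0, radius=2):
    """Convolve a light curve with a Gaussian kernel (edges clamped)."""
    k = gaussian_kernel(sigma, radius)
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, kv in enumerate(k):
            idx = min(max(i + j - radius, 0), n - 1)  # clamp at the borders
            acc += kv * signal[idx]
        out.append(acc)
    return out

flux = [1.0, 1.0, 0.4, 1.0, 1.0]   # toy light curve with a transit-like dip
smoothed = smooth(flux)
```

Smoothing suppresses high-frequency noise but also shallows genuine transit dips, which is why the choice between Gaussian and Savitzky–Golay filtering matters for detection sensitivity.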

Somil Mathur, Sujith Sizon, Nilesh Goel
Use of Artificial Intelligence for Health Insurance Claims Automation

The research focuses on the automation of health insurance claims using artificial intelligence, more specifically machine learning. In today's world, where every record collected is considered information and every bit of this information plays a vital role in future decisions, traditional manual methods of determining whether a claim is authentic or fake, and of deciding whether to accept or reject it, are no longer viable in the health insurance business. Meanwhile, artificial intelligence, the ability of machines to perform at or beyond the level of the human mind, is taking over the world. Applying those abilities in place of traditional claim assessment yields a system that is not only efficient and fast but also finds trends and patterns in the data that were previously not known to exist. The motivation behind this project is to save time, be efficient, avoid tedious manual work and, most importantly, avoid human error. The implemented system will help distinguish between deserved and undeserved health insurance claims.

Jaskanwar Singh, Siddhaling Urolagin
Sentiment Analysis and Prediction of Point of Interest-Based Visitors’ Review

Sentiment analysis is gaining popularity due to the need to understand the opinions expressed by users. Even in the tourism industry, sentiment analysis has vital applicability. Many tourists express their opinions as reviews on social media. The main motivation for analyzing tourist review data is to improve services and tourists' stays; moreover, many prospective tourists are influenced by the opinions expressed by earlier tourists. In this research, tourist review data collected from various social media is analyzed. Sentiment classification is carried out using term frequency-inverse document frequency (TF-IDF) features and classifiers. Under the hold-out evaluation method, classification rates of 80.0% and 74.0% were observed for the Support Vector Machine (SVM) and Naïve Bayes classifiers, respectively.
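The TF-IDF feature step can be sketched in plain Python; the tokenized toy reviews below are illustrative, and a real pipeline would feed these vectors into the SVM or Naïve Bayes classifier:

```python
import math

def tf_idf(docs):
    """Compute TF-IDF vectors (as dicts) for a list of tokenized documents."""
    n = len(docs)
    df = {}                       # document frequency per term
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    vectors = []
    for doc in docs:
        vec = {}
        for term in doc:
            tf = doc.count(term) / len(doc)
            # idf = 0 for terms appearing in every document
            idf = math.log(n / df[term])
            vec[term] = tf * idf
        vectors.append(vec)
    return vectors

reviews = [["great", "hotel", "great", "view"],
           ["bad", "hotel", "bad", "service"]]
vecs = tf_idf(reviews)
```

Note that "hotel", appearing in every review, gets zero weight: TF-IDF deliberately discounts terms that carry no discriminative sentiment signal.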

Jeel Patel, Siddhaling Urolagin
Soil Analysis Using Clustering Algorithm in Davao Region

With the advent of artificial intelligence, the use of data mining and machine learning has become common in modern agriculture. Clustering algorithms classify each data point into a specific group. The soil test report dataset was preprocessed and fed into RapidMiner, and the X-means and K-means clustering algorithms were used to discover knowledge. The resulting clusters of locations and of the chemical characteristics pH, P, K and N showed very good frequency distributions of samples. The analysis and performance of each centroid are acceptable, since the p-values are less than 0.05, and the average sample distance from each centroid is also within acceptable values. Future work includes more experiments on clustering the chemical characteristics by crop type, soil type and soil test request submission date.
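A plain k-means sketch of the clustering step is shown below. RapidMiner's X-means and the actual Davao soil dataset are not reproduced; the toy (pH, P, K, N) samples, k = 2 and the seed are assumptions chosen so the two groups separate cleanly:

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's k-means: returns final centroids and cluster labels."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by Euclidean distance
        labels = [min(range(k), key=lambda c: math.dist(p, centroids[c]))
                  for p in points]
        # Update step: move each centroid to the mean of its cluster
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = tuple(sum(x) / len(members)
                                     for x in zip(*members))
    return centroids, labels

# Toy (pH, P, K, N) soil samples forming two obvious groups
samples = [(5.5, 10, 80, 20), (5.6, 12, 82, 22), (5.4, 11, 79, 21),
           (7.2, 40, 150, 60), (7.3, 42, 152, 58), (7.1, 41, 148, 61)]
centroids, labels = kmeans(samples, k=2)
```

X-means extends this by also searching over k, splitting centroids and scoring the splits with BIC, which is why it needs no preset cluster count.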

Oneil B. Victoriano
Gold Tree Sorting and Classification Using Support Vector Machine Classifier

Sorting and classification of child items from a gold tree has been a challenge over the years. The various subcomponents (casts) in a gold tree are known as child items. Recently, some algorithms have been proposed to sort child items in a gold tree, but the shortfall of the existing methods is a high misclassification rate. To overcome this challenge, an SVM- and GLCM-based approach is proposed. GLCM features are extracted from the test and training images and fed to a multiclass SVM classifier, which performs the image classification. For training and validation, around 1000 images are used, labeled into 15 different classes. The proposed algorithm provides better quality metrics than other existing classification algorithms. Its main application is in the gold and ornament industries, where it improves productivity.

R. S. Sabeenian, M. E. Paramasivam, Pon Selvan, Eldho Paul, P. M. Dinesh, T. Shanthi, K. Manju, R. Anand

Computational Intelligence

Frontmatter
Dynamic Economic Dispatch Using Harmony Search Algorithm

In this paper, harmony search is applied to the dynamic economic dispatch problem. Dynamic economic dispatch is a constrained optimization problem, and an optimization tool plays an important role in controlling cost and governing the optimal scheduling of power demands over a definite period of time. The method is used to minimize the cost function and find optimal settings of generator units for the projected load demand over a certain interval of time. The major objective is to maintain economical operation of the power system within its constraints. A random search technique known as the harmony search algorithm is used to optimize the generating cost of an IEEE 10-unit generating system with reduced convergence time. The system is studied over 24 hours on an hourly basis for dynamic load dispatch.
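The harmony search loop — memory consideration, pitch adjustment, random selection and worst-harmony replacement — can be sketched generically. The quadratic toy cost below stands in for the fuel-cost function, and all parameter values (harmony memory size, HMCR, PAR, bandwidth) are illustrative, not those used for the IEEE 10-unit system:

```python
import random

def harmony_search(cost, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=2000, seed=42):
    """Minimize `cost` over `dim` variables within `bounds` = (lo, hi)."""
    rng = random.Random(seed)
    lo, hi = bounds
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [cost(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:                  # memory consideration
                x = memory[rng.randrange(hms)][d]
                if rng.random() < par:               # pitch adjustment
                    x += rng.uniform(-bw, bw) * (hi - lo)
            else:                                    # random selection
                x = rng.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        s = cost(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if s < scores[worst]:                        # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Toy "fuel cost": sum of squared deviations from an optimum output P = 5 MW
sol, val = harmony_search(lambda p: sum((x - 5) ** 2 for x in p),
                          dim=3, bounds=(0.0, 10.0))
```

A real dispatch objective adds per-unit quadratic cost coefficients plus penalty terms for the power-balance and ramp-rate constraints.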

Arun Kumar Sahoo, Tapas Kumar Panigrahi, Jagannath Paramguru, Ambika Prasad Hota
Packing Density of a Tori-Connected Flattened Butterfly Network

In a parallel computer system, the interconnection network is the central factor for the success and credibility of massively parallel computer (MPC) systems. The overall success of an MPC system is determined by the number of nodes in its underlying interconnection network. A Tori-connected flattened butterfly network (TFBN) is a hierarchical interconnection network comprising many basic modules, where the basic modules are flattened butterfly networks and the higher-level modules are torus networks. In this paper, we describe the architecture of TFBN, determine its packing density, and compare it with other networks. It is shown that the packing density of the TFBN is higher than that of 2D_Mesh, 2D_Torus, TESH, TTN and MMN. Thus, TFBN is a suitable network for VLSI realization and will be a possible choice for future leading MPC systems.

Md. Abdur Rahim, M. M. Hafizur Rahman, M. A. H Akhand, Dhiren K. Behera
A Study on Digital Fundus Images of Retina for Analysis of Diabetic Retinopathy

Diabetes is a disorder that occurs due to a high level of blood sugar in the body and causes an eye disorder called diabetic retinopathy. Its symptoms in the retinal area are fluid drip, exudates, hemorrhages and microaneurysms. In modern medical science, images are an important tool for precise diagnosis. This paper presents a study on retina images for the analysis of diabetic retinopathy. Edges contain a great deal of information about the locations of objects, their shapes and their sizes. Edge detection is basically a method of segmenting an image into regions of discontinuity and plays an important role in digital image processing. The analysis begins with preprocessing: converting the RGB image into a grayscale image, converting the grayscale image into a binary image and, lastly, removing noise with a median filter. After that, various edge detectors, namely the Canny edge detector, Laplacian of Gaussian, and the Prewitt and Sobel operators, are applied to a digital fundus image. Statistical measurements of 95% accuracy, 94% sensitivity and 96% specificity were obtained for edge detection with the Sobel operator, which outperforms the other edge detection techniques.
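The Sobel step reported to perform best can be sketched directly as two 3×3 convolutions followed by a gradient magnitude. The 4×4 step-edge patch below is a toy stand-in for a grayscale fundus image:

```python
def sobel(img):
    """Gradient magnitude of a grayscale image (list of lists), zero border."""
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# Toy patch: a vertical step edge between dark and bright regions
img = [[0, 0, 255, 255]] * 4
edges = sobel(img)
```

Thresholding the magnitude map then yields the binary edge image from which lesion boundaries such as exudates can be delineated.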

Cheena Mohanty, Sakuntala Mahapatra, Madhusmita Mohanty
Design of Mathematical Model for Analysis of Smart City and GIS-Based Crime Mapping

In this paper, we review journal and Web sources related to smart cities in order to develop a mathematical model for the smart city. From several definitions of the smart city, we derive the properties needed to formulate an algorithm that evaluates how smart a city is. This paper also proposes smart city crime mapping using geographic information system (GIS) computing. GIS is well suited to investigating crime and providing evidence: in smart cities, crime rates rise due to several factors, and it is challenging for the police department to find and control criminals with minimum effort. The job of the police department is not only to identify crime but also to prevent it before it happens. Mathematical computing with GIS is a simple, useful and cost-effective solution for crime investigation, evidence gathering and crime prevention. We have taken Bhubaneswar as a case study for our model.

Sunil Kumar Panigrahi, Rabindra Barik, Priyabrata Sahu
Pattern Storage and Recalling Analysis of Hopfield Network for Handwritten Odia Characters Using HOG

The auto-associative neural network is a well-known memory model for pattern storage and recall in the field of pattern recognition. In this paper, we present an analysis of pattern storage capacity and recall efficiency for handwritten Odia characters with a recurrent auto-associative neural network, the Hopfield network, using histogram of oriented gradients (HOG) features. Correct and successful recall of original stored patterns, noisy versions of patterns and new patterns (not used in training) of each class is analysed in terms of recall efficiency. A detailed description of our experiment with the NIT Rourkela Odia handwritten dataset is given in this paper, and the state-of-the-art result is remarkable and comparable with other proposed methods.
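The Hopfield storage-and-recall mechanism can be sketched minimally with Hebbian learning and synchronous sign updates. A short bipolar toy pattern stands in here for the HOG feature vectors of Odia characters:

```python
def train_hopfield(patterns):
    """Hebbian learning: W[i][j] = sum over patterns of s_i*s_j, zero diagonal."""
    n = len(patterns[0])
    W = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, steps=5):
    """Synchronous update: s_i <- sign(sum_j W[i][j] * s_j)."""
    n = len(state)
    s = list(state)
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

# Store one bipolar pattern and recall it from a noisy copy (one bit flipped)
pattern = [1, -1, 1, 1, -1, -1]
W = train_hopfield([pattern])
noisy = [1, -1, -1, 1, -1, -1]   # third bit flipped
recalled = recall(W, noisy)
```

Capacity analysis then asks how many such patterns can be stored before recall from noisy inputs starts failing; the classical estimate is roughly 0.14 patterns per neuron.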

Ramesh Chandra Sahoo, Sateesh Kumar Pradhan
A Distributed Solution to the Multi-robot Task Allocation Problem Using Ant Colony Optimization and Bat Algorithm

The multi-robot task allocation problem is a very active research field in autonomous multi-robot systems. It consists of three elements: (i) a set of tasks requiring some support, (ii) a set of robots offering some support and (iii) an objective function. The solution of such a problem is to assign tasks to robots while optimizing the considered objective function. We consider search and rescue scenarios: tasks are survivors, robots are unmanned aerial vehicles and the objective function is to rescue the maximum number of survivors. We propose a fully distributed solution made up of two main steps: (i) the first step uses ant colony optimization to construct overlapping task bundles for each robot, and (ii) the second step uses the bat algorithm to construct disjoint task bundles for each robot. This solution has been implemented using the Java programming language and the JADE multi-agent platform. Simulation results show the efficiency and scalability of our solution in terms of makespan values, and its quality is very close to that of an optimal solution (the difference is only 3.17%) in terms of the objective function.

Farouq Zitouni, Saad Harous, Ramdane Maamri
Collective Intelligence of Gravitational Search Algorithm, Big Bang–Big Crunch and Flower Pollination Algorithm for Face Recognition

The performance of nature-inspired computational intelligence techniques in solving optimization and real-world problems has led to growing research in this area. Nature-inspired algorithms draw on various domains or elements of nature: the social information-sharing behavior of animals, the biological immune system, physical phenomena and evolution strategies. A hybrid algorithm based on big bang–big crunch (BB-BC), the gravitational search algorithm (GSA) with levy flights (taken from the flower pollination algorithm) and chaotic local search is proposed in this paper to solve the face recognition problem. The algorithm combines the strengths of the individual algorithms: BB-BC helps achieve good exploration, while GSA and the flower pollination algorithm (FPA) both improve exploitation. The proposed method gives precision in the range of 0.96 to 1.0 for face recognition when tested on the ORL dataset.

Arshveer Kaur, Lavika Goel
Fuzzy Logic Based MPPT Controller for PV Panel

To provide sufficient electrical energy in a clean world, distributed generation has become inevitable for utility grids. In the last few decades, photovoltaic (PV) power generation has continued to grow, proving itself a promising energy source. Typically, a PV array is configured with maximum power point tracking (MPPT) so that the maximum available energy can be supplied to the sink. This paper proposes a fuzzy logic-based controller for maximum power point (MPP) tracking of a standalone PV system under variable temperature and irradiance conditions. The fuzzy logic controller (FLC) generates the desired duty ratio for the boost converter switch from a fuzzy rule base. The output terminal of the DC-DC boost converter can be connected to a DC grid, and the PV panel can be used for various applications such as battery charging and discharging, hybrid electric vehicle (HEV) charging, DC/AC micro-grid integration, etc.

Mahesh Kumar, Krishna Kumar Pandey, Amita Kumari, Jagdish Kumar

IOT and Cloud Computing

Frontmatter
A Survey on Cloud Computing Security Issues, Attacks and Countermeasures

Cloud computing is an emerging technology which has laid the foundation of a new approach to computation for individuals as well as enterprises, and there has been an increasing trend of cloud adoption in recent years. Security has always been a major concern in the cloud: cloud platforms and computations are constantly under threat from attackers. In this paper, we survey the major security concerns and attacks, as well as solutions to limit the possibilities of attack, in a cloud computing environment.

Deepak Ranjan Panda, Susanta Kumar Behera, Debasish Jena
Mitigating Cloud Computing Cybersecurity Risks Using Machine Learning Techniques

Cloud computing is gaining increasing adoption owing to the elastic, on-demand and pay-per-use features available in its different service models: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). With the increased adoption of the Internet of things in cities, homes, power grids and personal health monitoring systems, a huge amount of data is generated. However, the IoT devices are unable to store it themselves; they rely on cloud storage for this purpose and use cloud computing for processing. But this computing model suffers from cyberattacks. Cloud service providers can use machine learning techniques to detect such attacks and take measures to prevent them. In this paper, various cyberattacks faced by the cloud are surveyed, and the machine learning techniques proposed by researchers to mitigate those attacks are discussed.

Bharati Mishra, Debasish Jena
A Modified Round Robin Method to Enhance the Performance in Cloud Computing

Cloud computing is a new-age technology. The quality of service in the cloud depends on the priority of a job, the time taken to transfer the specific task, the length of the task, the number of CPUs available for each virtual machine and the capacity of the virtual machine. Much previous research has addressed task, workflow and job scheduling in cloud computing. Scheduling in the cloud has to cope with many issues, such as allocation of small requests, workload characterization, different types of requests, real-time allocation and provision for queuing. The throughput of the system depends on allocating resources to processes, which is performed by CPU scheduling algorithms. The objectives of a scheduling algorithm are to minimize waiting time, turnaround time, response time and context switching, and to maximize CPU utilization. Round Robin is a pre-emptive algorithm that works on a fixed time quantum and is used in time-sharing systems; it is known for its fair task distribution and optimum CPU utilization. In this paper, we introduce an improved Round Robin algorithm that sets the time quantum to the average burst time of all tasks arriving at the same time; tasks whose burst time is within this quantum are executed to completion first, while tasks with larger burst times are executed as per the existing Round Robin method. Simulation of the proposed algorithm shows that the waiting time and turnaround time decrease considerably.

Amit Sharma, Amaresh Sahu
Channel Capacity Under CIFR Transmission Protocol for Asymmetric Relaying

Most basic relay communication systems can be modelled as a three-node system in which the transmitter communicates with the relay, and the relay then communicates with the destination. To forward the signal towards the destination, the relay station can use different approaches, such as decode-and-forward, demodulate-and-forward, compress-and-forward and amplify-and-forward. In this paper, we evaluate the channel capacity under the channel inversion with fixed rate (CIFR) transmission scheme for asymmetric relaying using the decode-and-forward approach. In this system, the fading channel between source and relay and the fading channel between relay and destination are not the same. To find the channel capacity under the CIFR transmission protocol (CCIFR), we use the expression for the moment generating function (MGF) of the overall SNR of the system. After evaluating the expression for CCIFR, we study the behaviour of the channel in terms of CCIFR with respect to SNR and the channel parameters k and µ.

Brijesh Kumar Singh, Mainak Mukhopadhyay
Maximizing the Lifetime of Heterogeneous Sensor Network Using Different Variant of Greedy Simulated Annealing

Replacing batteries is a very difficult task in a wireless sensor network (WSN), and energy and lifetime are its two most important parameters. In a heterogeneous sensor network, different types of sensors with different capabilities are used. Common (monitoring) sensors with low energy are used for sensing events and sending the data to super nodes; super nodes have higher energy and communication range and are used for connectivity and data transmission to the base station. In a point-coverage WSN, optimally selecting the group of monitoring sensors used for sensing and transmitting data is crucial. In this paper, three variants of greedy simulated annealing are used to select monitoring sensor sets for given targets. The sets are selected such that they cover all targets while containing a minimum number of sensors. Each selected set sends data to the nearest super node using a shortest-path algorithm. When a sensor set is in use, every sensor in it dissipates some energy, while the other sensors, kept in idle sleep mode, dissipate less energy. This greedy method decreases energy dissipation and increases sensor lifetime, as confirmed by simulation results.
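The target-coverage selection at the heart of the method can be illustrated with a plain greedy set-cover sketch; the paper's simulated annealing variants are not reproduced, and the sensor ids and coverage sets below are hypothetical:

```python
def greedy_cover(coverage, targets):
    """Pick a small set of sensors covering all targets.
    `coverage` maps sensor id -> set of targets it can sense."""
    uncovered = set(targets)
    chosen = []
    while uncovered:
        # Greedy step: the sensor covering the most still-uncovered targets
        best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        if not coverage[best] & uncovered:
            raise ValueError("targets %s cannot be covered" % uncovered)
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen

coverage = {"s1": {"t1", "t2"}, "s2": {"t2", "t3"},
            "s3": {"t3"},       "s4": {"t1", "t2", "t3"}}
sensors = greedy_cover(coverage, ["t1", "t2", "t3"])
```

Greedy cover gives a feasible starting solution; simulated annealing then perturbs such sets, occasionally accepting worse ones, to escape local minima and balance energy use across rounds.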

Aswini Ghosh, Sriyankar Acharyya
Fire Detection and Controlling Robot Using IoT

Fire accidents are a very common man-made or technological phenomenon and lead to mass casualties and damage to property and wealth. Fire not only poses a threat to living beings; the toxic gases and fumes emitted when materials burn can harm the respiratory systems of people nearby. Since fire accidents cannot be stopped altogether, the most that can be done is to prevent them or, when they occur, to put them out as early as possible. Early detection of fire using alarms and notifications can make people aware of any fire breakout in their surroundings and help prevent a potentially major catastrophe. This project aims to create a small but powerful and versatile robot that can extinguish fire with ease and detect it in disaster-prone areas. The robot is designed around a Raspberry Pi; using IoT and a Wi-Fi module, it can be controlled manually from a mobile phone. A camera mounted on the body of the robot gives the user a live feed of the robot's surroundings, and the Raspberry Pi reports the robot's current status. Once the robot detects a fire, the owner is notified on their device via short message service (SMS); depending on the severity of the fire, the fire department is also notified, so that appropriate actions may be taken to prevent any disaster. Fire and smoke are detected using thermal sensors and an ionization smoke detector installed in the enclosed environment where the robot is deployed. The robot avoids colliding with obstacles by sensing its path with an ultrasonic sensor that measures the distance between the robot and the object. On reaching its destination, the robot sprays water on the fire to put it out; insulation with acrylic sheets helps prevent damage to the robot from the fire.

Mohit Sawant, Riddhi Pagar, Deepali Zutshi, Sumitra Sadhukhan
DDoS Prevention: Review and Issues

Networks connected to the Internet are always susceptible to distributed denial-of-service (DDoS) attacks. In spite of many different DDoS defense mechanisms being in place, DDoS attacks still happen. These mechanisms fall under the categories of DDoS detection, DDoS mitigation, and DDoS prevention. Although DDoS detection and mitigation are well-defined and understood terms, DDoS prevention is used with different meanings in the literature. Concerning reflection-based DDoS amplification attacks, in this paper we define ideal prevention and true prevention. The former is an ideal situation in which the security of all Internet hosts is up to the mark and does not allow them to become participating members of DDoS attacks, whereas the latter is a practically feasible situation in which the network itself can prevent and mitigate a DDoS attack within some fixed time interval. We also provide a literature review of DDoS prevention techniques and argue that the ones which conform to the definition of ideal prevention or true prevention are either not dynamic, computationally expensive, or not scalable, and are thus not practically feasible.

Shail Saharan, Vishal Gupta
Early Detection of Foot Pressure Monitoring for Sports Person Using IoT

The foot pressure monitoring sensor detects pressure variation at an early stage for sports persons using IoT. The main aim behind developing this idea is to analyse the running and walking patterns of different sports; particularly in athletics, the pressure applied on the foot differs greatly between the two legs. The proposed wearable sensor device detects this variant pressure at an early stage, and using the pressure data, injuries in athletes can easily be identified. A supervised classification algorithm is used to classify the given data sets as either normal or abnormal, and the device also helps athletes recognize their own body movements. Piezoelectric sensors collect the pressure data over a period of time and pump it into the cloud. An Arduino board analyses the data and sends it to the cloud for clinical staff, who constantly monitor the athlete's movement and raise an indication whenever necessary. The transmitting device is an ESP8266 module. The proposed device will reduce the risk of injuries in athletes.
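The abstract does not name the specific supervised classifier, so purely as an illustration, a minimal nearest-centroid sketch shows how labelled pressure vectors could be split into normal/abnormal classes (all names below are hypothetical, not the authors' implementation):

```python
def centroid(samples):
    """Element-wise mean of equal-length pressure-sample vectors."""
    n = len(samples)
    return [sum(col) / n for col in zip(*samples)]

def train(normal, abnormal):
    """Return the two class centroids from labelled training vectors."""
    return centroid(normal), centroid(abnormal)

def classify(sample, centroids):
    """Label a pressure vector by its nearest (squared Euclidean) centroid."""
    c_norm, c_abn = centroids

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    return "normal" if dist2(sample, c_norm) <= dist2(sample, c_abn) else "abnormal"
```

Any supervised learner (SVM, k-NN, decision tree) could take the place of the nearest-centroid rule; the point is only the labelled train/classify split.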

A. Meena Kabilan, K. Agathiyan, Gandham Venkata Sai Lohit
Development and Simulation of a Novel Approach for Dynamic Workload Allocation Between Fog and Cloud Servers

Fog computing has been a prominent research domain in the computing field in recent years. It is a platform that brings the concept of cloud computing closer to the user at the edge of the network, hence the name "edge computing"; fog is a cloud closer to the ground. Fog computing offers reduced network latency compared to cloud computing. In this paper, we address the issue of workload distribution between fog servers and cloud servers. Effective workload distribution leads to reduced latencies and higher throughputs, which enable ISPs to retain customers and improve goodwill. Our approach introduces a controller between the users and the fog/cloud servers. Using a novel algorithm, the controller effectively distributes the dynamic traffic workload between the fog and cloud servers based on a number of factors, such as priority, required resources and tolerable delay, to achieve the desired quality of service.
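The controller's algorithm itself is not given in the abstract; the sketch below is a hypothetical greedy allocator showing how priority, required resources and tolerable delay could jointly drive the fog/cloud split:

```python
def allocate(requests, fog_capacity, delay_cutoff_ms=50):
    """Greedy split of requests between fog and cloud tiers.

    Requests are served in priority order (highest first); a request goes to
    the fog tier when its tolerable delay is below the cutoff and the fog
    still has the resources it needs, otherwise it falls back to the cloud.
    Each request is a dict with 'priority', 'resources' and 'delay_ms' keys.
    """
    fog, cloud = [], []
    remaining = fog_capacity
    for req in sorted(requests, key=lambda r: -r["priority"]):
        if req["delay_ms"] < delay_cutoff_ms and req["resources"] <= remaining:
            fog.append(req)
            remaining -= req["resources"]
        else:
            cloud.append(req)
    return fog, cloud
```

The field names and the greedy policy are assumptions for illustration; the paper's controller may weigh the three factors quite differently.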

Animesh Kumar Tiwari, Anurag Dutta, Ashutosh Tewari, Rajasekar Mohan
A Novel Strategy for Energy Optimal Designs of IoT and WSNs

Many IoT applications demand devices operating on battery, which restricts the useful life of the deployed solution, as most scenarios do not allow replacement of batteries. Given a target period of survivability of IoT devices, devices and sub-systems must be designed to meet the estimated energy consumption rate so as to meet the target lifetime. Thus, for a given scenario, to deliver solutions offering optimal energy utilization, we propose an approach addressing energy consumption minimization right from the design/development stage, through deployment, and finally at the operational level. The key parameters of design and operation relate to the data acquisition, communication and security sub-systems, where security may not always be required. The approach is encouraged by the fact that about 50–60% of the energy used by networked devices is spent on the communication system and about 30% on security/encryption mechanisms. Thus, any attempt at optimal use of energy can lead to a significant increase in battery life, which is synonymous with node or system life. The framework based on this novel approach is being considered for smart buildings, smart metering and smart health systems, where deployment of nodes is challenging for reliable communication.

Rajveer Singh Shekhawat, Mohamed Amin Benatia, David Baudry
Multichannel Biosensor for Skin Type Analysis

Skin cancer is the uncontrolled growth of abnormal skin cells and may occur due to unrepaired DNA damage to skin cells; some of the common types are squamous cell carcinoma, basal cell carcinoma, and melanoma. Squamous cell carcinoma is caused by an uncontrolled growth of abnormal squamous cells. Basal cells produce new skin cells as old ones die, and limiting sun exposure can help prevent these cells from becoming cancerous. The first step toward skin cancer detection is to assess the skin type. A photonic crystal (PhC) is a periodic optical nanostructure, used in the current work as a biosensor. PhCs exhibit optical band gaps which are used for super-prisms, negative refraction, and dispersion compensation. We have created a biosensor with three sensing holes, or point defects, in the PhC, so the designed biosensor can sense three refractive indices simultaneously. By changing the properties of a sensing hole, the wavelength shifts and a new resonant wavelength can be observed. Using a square lattice structure, we can distinguish between the sensing levels in amplitude and wavelength for all three skin types, namely Asian, Dark, and Caucasian. When the nano-sensing hole size is 0.22 µm, the result shows a distinctly separable wavelength shift. When the sensing hole size decreases, Dark and Asian skin cannot be distinguished, as the content of melanin pigment present is higher. We conclude from the simulation that the designed biosensor can be used as a multichannel biosensor for skin type analysis.

V. L. Nandhini, K. Suresh Babu, Sandip Kumar Roy, Preeta Sharan
LoBAC: A Secure Location-Based Access Control Model for E-Healthcare System

In order to improve healthcare accessibility and provide efficient and faster services, modern communication systems offer different technologies. These technologies offer new solutions such as simple and ubiquitous access to health data and records, faster and low-cost treatment, quality of care, and remote patient monitoring. However, they also bring new threats and challenges. In this paper, we discuss some gaps within the existing access control strategies for health care. Most existing access control solutions do not properly address dynamic access control, which is a crucial aspect of healthcare system security. To fill this gap, we introduce a secure access control model (LoBAC) to control access in the healthcare system. Our proposed solution uses the location of the user to provide secure access control views to users in the healthcare system. It not only dynamically controls the access of the user but also provides secure location assurance.
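As an illustration of the general idea (not the LoBAC model itself), a location check can gate an ordinary role-based decision; the rectangular-zone representation and function names below are assumptions made for the sketch:

```python
def within_zone(location, zone):
    """True if point (x, y) lies inside an axis-aligned rectangle (x0, y0, x1, y1)."""
    (x, y), (x0, y0, x1, y1) = location, zone
    return x0 <= x <= x1 and y0 <= y <= y1

def grant_access(role, location, policy):
    """Grant access only when the role is known AND the requester is
    physically inside one of the zones the policy allows for that role."""
    zones = policy.get(role)
    if zones is None:
        return False
    return any(within_zone(location, z) for z in zones)
```

A real deployment would also need trustworthy location evidence (the paper's "secure location assurance"), which this sketch deliberately leaves out.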

Ashish Singh, Kakali Chatterjee
Autonomous Electronic Guiding Stick Using IoT for the Visually Challenged

Visual impairment is the inability of visual perception due to various biological, genetic and neurological factors. Navigation from one place to another is an inevitable part of daily life, and visually impaired people face this challenge every day. They need the support of others for their daily activities, especially for navigation, and rely on the traditional walking stick, guide dogs and human guides for help. With the evolution of technology, various new inventions help the visually challenged navigate with ease in indoor and outdoor environments. The paper discusses the existing approaches and also shows the novelty of the project developed. This project aims at a low-cost, user-friendly, smart navigation guidance device that helps the visually challenged navigate from one place to another with the blend of IoT.

R. Krithiga, S. C. Prasanna
Handling Heterogeneity in an IoT Infrastructure

The Internet of things is a captivating technology capable of connecting billions of smart devices via various communication technologies such as Ethernet, Wi-Fi, and Bluetooth. IoT has changed the way we operate the appliances we use in our daily life to perform routine activities. IoT can become an even more promising technology if it can handle heterogeneous devices and networks more efficiently. Leveraging heterogeneity has become challenging because of the diverse IoT devices emerging in the market. This has led to disjoint, fragmented smart systems in which each appliance purveyor provides a proprietary solution for each device. Engineers and smart infrastructure designers face a major challenge when it comes to managing the heterogeneity of various device portfolios in a single IoT infrastructure. This work handles heterogeneity by normalizing and de-normalizing data from various heterogeneous devices, at the network layer using the IPv6 protocol and at the transport layer using the UDP protocol. This approach can handle data generated by any device irrespective of vendor and hence facilitates a versatile automation environment. This work uses Contiki OS and the Cooja simulator for implementing the solution to manage heterogeneity.

B. M. Rashma, Suchitha Macherla, Achintya Jaiswal, G. Poornima
Recent Trends in Internet of Medical Things: A Review

The Internet of things has been taking the world by storm. Its massive rise in popularity and consistent developments have bred a plethora of novel technology solutions used across a variety of disparate industries. Healthcare is one such industry that has piqued the interest of many researchers and tech enthusiasts. Over the years, the Internet of medical things (IoMT) has gained a lot of prominence. IoMT is a combination of medical devices, applications and healthcare IT systems, the former two being connected to the latter via networking technologies. This paper explores how IoMT has redefined the space of healthcare solutions, its applications, some challenges its implementation faces, and the way ahead. It presents a comprehensive literature review on IoT in the healthcare domain as well as on specific applications and future trends.

Ananya Bajaj, Meghna Bhatnagar, Anamika Chauhan

Applications

Frontmatter
Cryptanalysis of Modification in Hill Cipher for Cryptographic Application

This paper focuses on the strengths and weaknesses of the algorithm proposed by Qazi et al. [1]. The cipher proposed by the authors is highly predictable and suffers from a lack of sufficiently strong and complex intermediate operations alongside the Hill cipher. The Hill cipher, on its own, cannot offer foolproof security; it should be strengthened with sufficiently complex and nonlinear operations which offset the known-plaintext vulnerability that the Hill cipher exhibits. All intermediate operations introduced in the scheme proposed by Qazi et al. are very simple, linear and predictable. Such operations do not strengthen an existing cipher which is proven to be vulnerable to the known-plaintext attack. The algorithm cannot withstand even moderate cryptanalysis and can be broken very easily; to be precise, with the help of one chosen ciphertext and four chosen plaintexts. This paper also proposes suitable modifications to offset the vulnerabilities exhibited by the original cipher.
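The known-plaintext weakness referred to above is easy to demonstrate on the classical Hill cipher: if an attacker collects enough plaintext/ciphertext digraph pairs to form an invertible plaintext matrix P, the key follows as K = C·P⁻¹ (mod 26). A minimal 2 × 2 sketch of this textbook attack (not the specific Qazi et al. scheme):

```python
def modinv(a, m=26):
    """Multiplicative inverse of a modulo m (raises if none exists)."""
    a %= m
    for x in range(1, m):
        if (a * x) % m == 1:
            return x
    raise ValueError("no modular inverse")

def mat_mul(A, B, m=26):
    """2x2 matrix product modulo m."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % m
             for j in range(2)] for i in range(2)]

def mat_inv(A, m=26):
    """Inverse of a 2x2 matrix modulo m, via the adjugate."""
    det = (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % m
    d = modinv(det, m)
    return [[( A[1][1] * d) % m, (-A[0][1] * d) % m],
            [(-A[1][0] * d) % m, ( A[0][0] * d) % m]]

def recover_key(P, C):
    """Known-plaintext attack: K = C · P⁻¹, columns being digraph vectors."""
    return mat_mul(C, mat_inv(P))
```

With linear intermediate operations layered on top, the same linear-algebra attack still applies after absorbing those operations into the matrices, which is the paper's central criticism.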

K. Vishwa Nageshwar, N. Ravi Shankar
Wearable Assistance Device for the Visually Impaired

Visually impaired individuals face tremendous issues when it comes to technical aid, due to the lack of convenience of usage and the complexity of devices. In particular, users located in remote areas without Internet facilities rely on walking canes or guide dogs. The proposed concept is a wearable assistance device for visually impaired individuals. The purpose is to detect and recognize static indoor objects in an image and inform the user via audible feedback. A comparison of YOLO and SSD model performance is carried out to determine the time taken by each. Object detection and recognition are performed by the YOLO model in online (via cloud platform) and offline (on the edge) modes, with latency evaluations for two versions of the Raspberry Pi.

Devashree Vaishnav, B. Rama Rao, Dattatray Bade
A Hybrid Approach Based on Lp1 Norm-Based Filters and Normalized Cut Segmentation for Salient Object Detection

The graph theory-based graph cut algorithm has been successfully applied to a wide range of problems in computer vision. The normalized graph cut algorithm is one of the efficient algorithms used for the detection of salient objects in an image. Its efficiency has been improved by applying an Lp1 norm-based Gaussian filter and a median filter, together with the normalized cut algorithm, on individual color channels rather than the whole image. Different experiments are conducted to prove the efficiency of these applications using images from standard image segmentation datasets such as the Berkeley and COREL image segmentation datasets. The results of the proposed methods are compared with Otsu thresholding and C-means clustering algorithms, and three validity indices are used to quantify the results.

Subhashree Abinash, Sabyasachi Pattnaik
Decentralizing AI Using Blockchain Technology for Secure Decision Making

Lately, Blockchain and artificial intelligence (AI) have become two of the most popular and revolutionary technologies. Blockchain, the key technology behind Bitcoin, is a distributed database that allows transactions to be recorded in an incorruptible and non-modifiable manner. With smart contracts, Blockchain can be used to provide a decentralized/distributed environment where multiple parties can interact with each other without any trusted third party. On the other hand, AI enables machines to have intelligence and decision-making capabilities just like humans. This paper presents a thorough survey on how Blockchain technology can be used for empowering AI. We review the literature and summarize how Blockchain can make AI secure and efficient.

Soumyashree S. Panda, Debasish Jena
Speech Recognition Using Spectrogram-Based Visual Features

Recent advancements in audio-visual speech recognition (AVSR) have established the importance of incorporating visual components in the process of speech recognition to improve robustness. Visual features possess strong potential to enhance the accuracy of existing speech recognition techniques and have become increasingly indispensable when modelling speech recognizers. Machine learning techniques paired with visual features can give promising solutions to the problem of comprehending speech in noisy environments. In this paper, we have employed two different techniques to generate hybrid audio-visual features. First, features have been extracted from video frames and combined with acoustic features. Second, spectrograms have been calculated from audio samples, and their features were extracted by treating them as images. These features were then combined with visual features to yield hybrid features. Ultimately, a comparison of the two methods has been done, and the superiority of AVSR over audio-only speech recognition has been demonstrated.

Vishal H. Shah, Mahesh Chandra
Deployment of RFID Technology in Steel Manufacturing Industry—An Inventory Management Prospective

With the current economic slowdown, it is high time to reduce production costs in order to survive in a competitive business environment. Automatic identification practices refer to a broad range of technologies normally used to identify products without the involvement of a human factor; the auto-ID used in such systems is linked with automatic recording of data. An automatic RFID system makes the present status of stock in the warehouse and in production units known, which supports the decision-making processes of the purchase and operations departments. In this paper, an attempt has been made to study the feasibility of deploying radio frequency identification as a tool for inventory management practice in the steel manufacturing industry, with a view to reducing the cost of production and improving operational efficiency. A well-designed and structured questionnaire was administered to collect data from 100 sample respondents selected by purposive sampling. The findings of the study reveal a direct relationship between RFID practices and a firm's operational performance. This study will contribute to the existing knowledge base as well as to industry practitioners in the area of inventory management in the steel manufacturing industry.

Rashmi Ranjan Panigrahi, Duryodhan Jena, Arpita Jena
Sparse Channel and Antenna Array Performance of Hybrid Precoding for Millimeter Wave Systems

Fast data rates and higher capacity are the major promises of next-generation communication networks. Due to technology advancement and market demands, wireless communication systems have undergone a revolution in recent years. To achieve a manifold increase in system capacity, precoding is used at the base station. Nowadays, hybrid precoding is prominent in wireless communication due to its high average achievable rate and low hardware complexity. However, finding the optimal solution to such hybrid precoding requires solving a challenging combinatorial problem. A known algorithm, orthogonal matching pursuit (OMP), can provide a reasonable solution at the cost of computational complexity. In this paper, an algorithm based on the principle of OMP is used for hybrid precoding in mmWave systems having large antenna arrays and channel sparsity. The paper gives solutions for different values of channel sparsity and numbers of transmitting/receiving antennas that approach the approximately optimal unconstrained precoder. These solutions are demonstrated by simulation results of the OMP-based hybrid precoder and show that they allow mmWave systems to approach their unconstrained performance limits. Therefore, to meet the requirements of the 5G network, the mmWave frequencies can be considered one of the key enabling technologies.

Divya Singh, Aasheesh Shukla
Electromyography-Based Detection of Human Hand Movement Gestures

The electromyography (EMG) signal is considered one of the most significant signals for classifying muscle movements and has very good applications in artificial limb control. It is very important to find a minimal set of features from the signal that quickly classifies the movements. This paper proposes EMG-based hand movement detection using entropy features. The experimental results show that entropy has the discriminant ability to classify the movements. Combining entropy features with other features gave considerably more promising results than those of Christos Sapsanis et al.
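As a concrete illustration of an entropy feature (the paper's exact estimator is not specified in the abstract), the Shannon entropy of a signal's amplitude histogram can be computed as follows:

```python
import math

def shannon_entropy(signal, bins=8):
    """Shannon entropy (in bits) of a signal's amplitude histogram."""
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / bins or 1.0  # guard against a constant signal
    counts = [0] * bins
    for x in signal:
        idx = min(int((x - lo) / width), bins - 1)
        counts[idx] += 1
    n = len(signal)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)
```

A constant (inactive) channel yields zero entropy, while a channel whose amplitudes spread over all bins approaches log2(bins); this spread is what gives entropy its discriminant power across movement classes.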

C. H. Shameem Sharmina, Rajesh Reghunadhan
Bluetooth-Based Traffic Tracking System Using ESP32 Microcontroller

The limitations of satellite navigation systems such as the global positioning system (GPS) in indoor settings (tunnels, urban areas, etc.) are due to the deflection and distortion of satellite signals by neighboring buildings, which degrade GPS performance. Alternative methods for vehicle tracking need to be developed for such settings. The main objective of this paper is to propose a system that utilizes Bluetooth for traffic tracking. In our system, Bluetooth low-energy (BLE) modules placed alongside the road serve as beacons that constantly advertise their position to passing BLE-enabled vehicles. When a BLE-enabled vehicle receives the position data, it transfers the data directly to the microcontroller, which displays the position on an LCD module. Additionally, it saves the position data on an SD card for comparison with data gathered through a GPS receiver.

Alwaleed Khalid, Irfan Memon
A Novel Solution for Stable and High-Quality Power for Power System with High Penetration of Renewable Energy Transmission by HVDC

In recent years, the use of renewable energy resources has been steadily increasing, yet the developed renewable resources are located far from the load stations in the southeast and east. This raises power system dynamic issues such as lack of inertia and damping, and subsynchronous oscillations caused by interharmonics from AC-AC power conversion-based renewable energy generators; 18 HVDC transmission lines with a combined capacity exceeding 130 GW are currently under development. A new technology is proposed for the stable operation of a power system with a high capacity of renewable energy sources: a synchronous motor-generator pair (MGP) is proposed to solve the above-mentioned issues. The generation expansion and transmission capacities of the proposed system are modeled to capture the characteristics of the power grid. The small-signal stability of the structure is analyzed, and the MGP can serve as a suitable damping control model. The harmonic response of the MGP is analyzed on electromagnetic principles, and the harmonic content present on the load side is heavily attenuated. Power electronics characterize renewable energy systems and are used in HVDC and renewable power generation. With the proposed model, this clean energy greatly improves system stability and power quality.

C. John De Britto, S. Nagarajan
Blockchain Technology: A Concise Survey on Its Use Cases and Applications

The concept of blockchain was introduced by Satoshi Nakamoto as a simple digital platform for recording and verifying transactions as per requirements. The reason blockchain has gained so much admiration is that no single entity owns the data stored inside it. Blockchain data is hashed cryptographically and is hence immutable, so no one can alter the data inside the blockchain. Moreover, the blockchain is transparent, so the data can be tracked easily. This concise survey describes blockchain, its use cases and its applications in the world today. As blockchain is an emerging technology, the concepts behind it should be easily understandable for those who work on it. This paper reviews blockchain and its applications to provide a comprehensive analysis for the academic research community.
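The immutability claimed above comes from hashing each block over its payload and its predecessor's hash, so altering any recorded entry breaks every subsequent link. A toy sketch (an illustration of the mechanism, not any production blockchain):

```python
import hashlib
import json

def make_block(data, prev_hash):
    """Create a block whose hash covers its payload and its predecessor."""
    body = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return {"data": data, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def chain_is_valid(chain):
    """Recompute every hash and link; any tampering breaks the chain."""
    for i, block in enumerate(chain):
        body = json.dumps({"data": block["data"], "prev": block["prev"]},
                          sort_keys=True)
        if block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True
```

Real chains add consensus, signatures and Merkle trees on top, but the tamper-evidence property is exactly this hash linkage.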

B. Suganya, I. S. Akila
Design and Implementation of Night Time Data Acquisition and Control System for Day and Night Glow Photometer

Airglows are weak emissions originating at different heights of the earth's atmosphere. The various processes causing airglow are the recombination of ions ionized by sunlight during the day, luminescence caused by cosmic rays striking the upper atmosphere, and chemiluminescence initiated mainly by oxygen and nitrogen reacting with hydroxyl ions at heights of a few hundred kilometers. The Day and Night glow Photometer (DNPM), being developed in the optical aeronomy laboratory of the Space Physics Laboratory, VSSC, is designed to produce real-time measurements of airglow emissions originating at various altitude regions of the earth's near space during both daytime and night time. The DNPM has a complex daytime optical system for extracting the faint airglow mixed with numerous orders of intense solar radiation background, and a night time optical system that directly measures the night airglow emission, as there is no solar background. This paper reports the design and implementation of the data acquisition and control system for the DNPM at night. The functioning of the DNPM is automated at both the software and hardware levels by means of Laboratory Virtual Instrumentation Engineering Workbench (LabVIEW). The National Instruments PCI 6602 DAQ module is used for data acquisition. The functioning of the device has been tested and verified successfully.

K. Miziya, P. Pradeep Kumar, C. Vineeth, T. K. Pant, T. G. Anumod
Supplier’s Strategic Bidding for Profit Maximization with Solar Power in a Day-Ahead Market

The rapid globalization of solar-based technologies has enabled widespread utilization of solar energy. This has created new operational prospects in modern power systems, because dependency on environmental factors causes an uncertain landscape in the day-ahead power market. In real-time operations, compromises are made in cost and power to balance the system, which reduces suppliers' benefits. Therefore, the beta probability distribution function is used to handle the adverse impact of solar irradiation uncertainty, and scenarios are reduced using a forward-reduction algorithm. Moreover, to measure the deviation of solar power, underestimation and overestimation cost functions are used. The paper proposes a suitable bidding strategy to maximize the suppliers' benefit while handling uncertain rivals' behavior and the uncertainty of solar power. The formulated problem is solved by the gravitational search algorithm, and simulation results are obtained in the absence and presence of solar power on the IEEE standard 30-bus test system. The obtained results prove the suitability of the proposed bidding strategy in the presence of solar power uncertainty.
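As an illustration of the beta-distributed irradiation model, the sketch below draws solar-power scenarios and thins them out; the rated-power scaling and the quantile-style reduction are simplifying assumptions, not the paper's exact forward-reduction algorithm:

```python
import random

def solar_scenarios(alpha, beta, p_rated, n, seed=42):
    """Draw n solar-power scenarios from a Beta-distributed irradiation model.

    Normalized irradiation is sampled on [0, 1] via Beta(alpha, beta) and
    scaled by the plant's rated power.
    """
    rng = random.Random(seed)
    return [p_rated * rng.betavariate(alpha, beta) for _ in range(n)]

def reduce_scenarios(scenarios, k):
    """Crude stand-in for scenario reduction: keep k evenly spaced quantiles."""
    s = sorted(scenarios)
    step = len(s) / k
    return [s[int(i * step)] for i in range(k)]
```

A true forward-reduction algorithm iteratively keeps the scenario set minimizing a probability-weighted distance to the full set; the quantile thinning above merely conveys the idea of trading scenario count for tractability.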

Satyendra Singh, Manoj Fozdar
Police FIR Registration and Tracking Using Consortium Blockchain

India is a developing country, and technology has played a major role in its development. We can see technological advancements in many areas, such as education, business, medicine, banking, and agriculture. Unfortunately, the Indian Police Department remains devoid of technology in its systems to a certain extent. Also, with the rapid increase in population, crime rates have increased, and it has become a gruelling task to manage these records manually. The conventional system of visiting a police station for registering a first information report (FIR) or police complaint and getting updates needs to be replaced with a more convenient and transparent system. Hence, we propose a consortium blockchain architecture for the Indian Police Department, where all police stations are part of the network. A consortium blockchain benefits from the privacy of a private blockchain while leveraging the decentralized governance of a public blockchain. In our proposed system, the client can register their FIR using a decentralized application. Our system ensures its acceptance and that timely updates are delivered to the victims. The FIR scenario requires a highly secure and trustworthy workflow that is simultaneously fast enough.

Vikas Hassija, Aarya Patel, Vinay Chamola
KYC as a Service (KASE)—A Blockchain Approach

KYC or know-your-customer is an integral part of the on-boarding process of a customer for a company. This process requires independent and tedious verification of a customer’s identity documents by the businesses leading to wastage of resources. In this paper, we propose a solution where the submission and verification of a customer are done only once, and the results are shared with the businesses which require the information. The proposed system uses blockchain to record and manage the KYC requests and ensure transparency. The KYC data is verified using machine learning processes to ensure further efficiency in the process by reducing a significant amount of time spent on verifying the customers.

Dhiren Patel, Hrishikesh Suslade, Jayant Rane, Pratik Prabhu, Sanjeet Saluja, Yann Busnel
Enhancing Image Caption Quality with Pre-post Image Injections

This paper focuses on the effects of injecting images in image captioning architectures that use deep learning. We explore these effects using the merge architecture for image captioning. The neural network design involves encoders that encode the image [using a convolutional neural network (CNN)] and the caption associated with the image [using a long short-term memory (LSTM)/gated recurrent unit (GRU)] in two separate branches. Traditionally, in the merge architecture, the context of the image is introduced only after the LSTM encodes the caption; this is otherwise known as the "post-inject" architecture. In this paper, we propose that injecting the image encoding from the CNN both before and after encoding the caption improves the quality of captions without increasing the number of weights to be trained. Benchmark tests in support of this conclusion were conducted on the Flickr8k dataset. This "pre + post inject" architecture produces image captions of better quality without the overhead of training extra weights to attain the enhanced quality.

T. Adithya Praveen, J. Angel Arul Jothi
Integral Sliding Mode for Nonlinear System: A Control-Lyapunov Function Approach

This paper discusses a sophisticated control for nonlinear systems such that the state trajectories remain in a predefined manifold and converge asymptotically. One solution to the problem is the combined use of generalized super-twisting control (GSTC) and a control-Lyapunov function (CLF), where the former alleviates the disturbance term with a continuous action so that the state trajectories always remain in the predefined manifold, and the latter brings the states to the stable equilibrium point by use of an a priori Lyapunov function. These two approaches, designed separately, are made independent of each other via integral sliding mode control (ISMC). The simulation results for the proposed algorithm are illustrated by a numerical example.

Ankit Sachan, Herman Al Ayubi, Mohit Kumar Garg
FileShare: A Blockchain and IPFS Framework for Secure File Sharing and Data Provenance

In this paper, we introduce FileShare—a secure decentralized application framework for sharing files and data provenance. It overcomes the integrity and ownership issues in the existing solutions for file sharing and data provenance. In the proposed framework, a decentralized application (DApp) on top of Ethereum is responsible for user registration and for provenance purposes. Ethereum smart contract is used to govern, manage, and provide traceability and visibility into the history of the shared content from its origin to the latest version. It employs IPFS, a distributed file system, as its data storage layer, avoiding the pitfalls of centralized storage solutions. The proposed framework utilizes an inbuilt editor to view and modify files. The files will be stored in an encrypted form on IPFS and can only be accessed in the FileShare text editor. Modify and share operations performed on shared files are recorded separately to the blockchain, ensuring high integrity, resiliency, and transparency.

Shreya Khatal, Jayant Rane, Dhiren Patel, Pearl Patel, Yann Busnel
NFC-Based Smart Insulin Pump, Integrated with Stabilizing Technology for Hand Tremor and Parkinson’s Victims

Most diseases are interrelated, and the population with multiple diseases is growing rapidly. Our proposed methodology combines specific features of two technologies to help solve the problems faced by victims of diabetes and Parkinson's disease. According to surveys, approximately 350–400 million people suffer from Type 2 diabetes, and they have a 31% greater risk of developing Parkinson's than people without diabetes. There is a pressing need for a new product that is more efficient and overcomes the shortcomings of available products: medical devices that cater to multiple health issues, disabilities, and diseases. Our proposed device is an example, using miniaturized implants and on-body sensors with a near-field communication (NFC) tag. With this combination, parameter readings become available on the reader and can be transferred to a pump for further action. An NFC tag fitted into the insulin pump allows the glucometer to wirelessly transfer information about the glucose level in the body and how much insulin needs to be pumped. For people suffering from hand tremor and Parkinson's, attaching the device to the body is an arduous task. Lift Laboratories has developed a spoon using stabilizing technology for Parkinson's victims. Incorporating this stabilizing technology into the wireless insulin pump device makes it convenient for Parkinson's victims to attach the device.

Advait Brahme, Shaunak Choudhary, Manasi Agrawal, Atharva Kukade, Bharati Dixit
Digital Forensics: Essential Competencies of Cyber-Forensics Practitioners

Technical advancement expands the use of digital devices in today's society. As usage increases, the amount of cyber-crime related to data leakage also increases substantially. Dealing with these cyber-crime activities requires skilled cyber-forensics practitioners, who should be equipped with a variety of forensics frameworks to keep pace with technological change. This work discusses the essential competencies practitioners need to prepare themselves for forensic investigation. The experts are expected to be well versed in investigation approaches using various resources, such as computers, IoT, cloud, and mobile devices, as well as various commercial and open-source forensics tools. An investigation approach is taken up here to measure the effectiveness of a forensics practitioner in tackling the challenges of a forensics laboratory using a multi-agent approach.

Chamundeswari Arumugam, Saraswathi Shunmuganathan
Hybrid Pixel-Based Method for Multimodal Medical Image Fusion Based on Integration of Pulse-Coupled Neural Network (PCNN) and Genetic Algorithm (GA)

Image fusion is the process by which data from multiple images is incorporated into a single image to enhance image quality and reduce artifacts, randomness, and redundancy. It plays a vital role in medical diagnosis and treatment. In this paper, a new image fusion algorithm using a pulse-coupled neural network (PCNN) with genetic algorithm (GA) optimization is proposed. Sixteen different sets of CT and PET images are used to validate the performance of the proposed technique. First, the PCNN is applied to N layers of the image; then, the fusion coefficient of layer N is determined using the genetic algorithm. The resulting fused image contains both the functional and the anatomical information present in the individual CT and PET images. The proposed algorithm is compared with the plain PCNN via subjective and objective analyses, and experimental results illustrate that it is more effective than existing image fusion techniques.

R. Indhumathi, S. Nagarajan, K. P. Indira
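A drastically simplified sketch of the GA stage: here the genetic algorithm searches a single pixel-wise fusion weight and uses the variance of the fused image as a stand-in fitness, whereas the paper derives its fusion coefficients from PCNN layer outputs with its own objective. All names and parameters below are illustrative assumptions.

```python
import random

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def fuse(a, b, alpha):
    # Pixel-wise weighted fusion of two flattened images.
    return [alpha * x + (1 - alpha) * y for x, y in zip(a, b)]

def ga_fusion_weight(a, b, pop=20, gens=40, seed=0):
    """Evolve a fusion weight in [0, 1] maximizing fused-image variance."""
    rng = random.Random(seed)
    population = [rng.random() for _ in range(pop)]
    for _ in range(gens):
        # Rank by fitness (higher variance first), keep the elite half.
        population.sort(key=lambda w: -variance(fuse(a, b, w)))
        elite = population[: pop // 2]
        # Children: mutated copies of elite members, clipped to [0, 1].
        children = [min(1.0, max(0.0, rng.choice(elite) + rng.gauss(0, 0.05)))
                    for _ in range(pop - len(elite))]
        population = elite + children
    return population[0]          # best weight found (elitism preserves it)
```

With one image informative and the other flat, the search pushes the weight toward the informative image, which is the qualitative behavior one expects from a fitness-driven fusion coefficient.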
Advanced Colored Image Encryption Method in Using Evolution Function

Transmitting images secretly over public networks faces many challenges, such as content manipulation, forging, and illegal access. A number of image encryption methods have been suggested to address these challenges. This paper proposes a robust encryption method for color images using the discrete cosine transform (DCT). In this work, a diffusion operator is applied directly to the pixels, followed by a confusion operator applied in the frequency domain. The diffused image is obtained by XORing the original image with a keystream produced by a logistic map. The DCT then converts fixed-size image blocks to the frequency domain, where the frequency coefficients are rearranged using a chaotic map. Finally, the encrypted image is obtained through the inverse DCT (IDCT). The proposed method is applied to various images to check its robustness, and the results and analysis show that it is resilient against common attacks.

Shiba Charan Barik, Sharmilla Mohapatra, Bandita Das, Mausimi Acharaya, Bunil Kumar Balabantaray
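The diffusion and confusion stages can be sketched as follows. For brevity this toy version permutes the diffused pixels directly using a logistic-map ordering, whereas the paper applies the confusion step to DCT coefficients of fixed-size blocks; the keys and map parameter are illustrative assumptions.

```python
def logistic_stream(x0, r=3.99, n=16):
    # Chaotic logistic map x_{k+1} = r*x_k*(1 - x_k); bytes from the orbit.
    out, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return out

def chaotic_permutation(x0, n, r=3.99):
    # Sort positions by their orbit value to obtain a key-dependent shuffle.
    x, keyed = x0, []
    for i in range(n):
        x = r * x * (1 - x)
        keyed.append((x, i))
    return [i for _, i in sorted(keyed)]

def encrypt(pixels, key=(0.31, 0.62)):
    stream = logistic_stream(key[0], n=len(pixels))
    diffused = [p ^ s for p, s in zip(pixels, stream)]   # diffusion (XOR)
    perm = chaotic_permutation(key[1], len(pixels))
    return [diffused[j] for j in perm]                   # confusion (shuffle)

def decrypt(cipher, key=(0.31, 0.62)):
    perm = chaotic_permutation(key[1], len(cipher))
    diffused = [0] * len(cipher)
    for i, j in enumerate(perm):                         # undo the shuffle
        diffused[j] = cipher[i]
    stream = logistic_stream(key[0], n=len(cipher))
    return [c ^ s for c, s in zip(diffused, stream)]     # undo the XOR
```

Both stages are exactly invertible given the key, which is why decryption recovers the original pixels bit-for-bit; sensitivity to the initial values x0 is what the chaotic maps contribute.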
Metadata
Title
Advances in Machine Learning and Computational Intelligence
edited by
Prof. Srikanta Patnaik
Prof. Xin-She Yang
Prof. Ishwar K. Sethi
Copyright Year
2021
Publisher
Springer Singapore
Electronic ISBN
978-981-15-5243-4
Print ISBN
978-981-15-5242-7
DOI
https://doi.org/10.1007/978-981-15-5243-4
