
2018 | Book

Intelligent Systems Design and Applications

17th International Conference on Intelligent Systems Design and Applications (ISDA 2017) held in Delhi, India, December 14-16, 2017

Edited by: Prof. Dr. Ajith Abraham, Pranab Kr. Muhuri, Dr. Azah Kamilah Muda, Dr. Niketa Gandhi

Publisher: Springer International Publishing

Book series: Advances in Intelligent Systems and Computing


About this book

This book highlights recent research on intelligent systems design and applications. It presents 100 selected papers from the 17th International Conference on Intelligent Systems Design and Applications (ISDA 2017), which was held in Delhi, India from December 14 to 16, 2017. The ISDA is a premier conference in the field of Computational Intelligence and brings together researchers, engineers and practitioners whose work involves intelligent systems and their applications in industry and the real world. Including contributions by authors from over 30 countries, the book offers a valuable reference guide for all researchers, students and practitioners in the fields of Computer Science and Engineering.

Table of Contents

Frontmatter
Enhancing Job Opportunities in Rural India Through Constrained Cognitive Learning Process: Reforming Basic Education

Technological advancements in cognitive learning suggest significant changes in teaching and learning methods. A Constrained Cognitive Learning (CCL) model links various forms of cognitive learning methods within a restrictive domain. The main objective of this study is to propose a CCL scheme that integrates cognitive learning theories and instructional prescriptions to achieve an effective learning environment for the basic education system in rural India, improving both knowledge acquisition and employability in an optimized way. Furthermore, the proposed research aims to promote dialogue between professional learners, academic researchers and practitioners, bringing an empirical educational and research orientation into the contemporary educational environment across rural India. Our focus is to design a cognitive learning environment in which learners not only acquire knowledge, but also improve their cognitive abilities, apply their knowledge toward employment, and extend their depth of knowledge toward research-oriented innovative skills.

Shivangi Nigam, Abhishek Bajpai, Bineet Gupta
UML2ADA for Early Verification of Concurrency Inside the UML2.0 Atomic Components

In recent years, the Unified Modeling Language (UML) has emerged as a de facto industrial standard for modeling Component-Based Software (CBS). To ensure the safety and liveness of UML CBS, many approaches have been proposed to verify the concurrency between interconnected components, but few works tackle concurrency verification inside atomic components. In this paper, our purpose is to verify concurrency inside UML2.0 atomic components endowed with behavioral specifications described by protocol state machines (PSM). To achieve this, we propose to translate the UML2.0/PSM source component into an Ada concurrent program. Using an Ada formal analysis tool such as FLAVERS or INCA, we can then detect potential behavioral concurrency properties of the Ada program, such as deadlock.

Taoufik Sakka Rouis, Mohamed Tahar Bhiri, Mourad Kmimech, Layth Sliman
A New Approach for the Diagnosis of Parkinson’s Disease Using a Similarity Feature Extractor

Parkinson’s disease affects millions of people worldwide. Nowadays there are several ways to help diagnose this disease, among which handwriting exams stand out. One of the main contributions of the computational field to the diagnosis of this disease is feature extraction from handwriting exams. This paper proposes a similarity extraction approach applied to the exam template and the handwritten trace of the patient. The similarity metrics used in this work are structural similarity, mean squared error and peak signal-to-noise ratio. The proposed approach was evaluated with variations in obtaining the exam template and the handwritten trace generated by the patient. Each of these variations was used together with the Naïve Bayes, OPF, and SVM classifiers. In conclusion, the proposed approach outperformed the other approach found in the literature, and is therefore a potential aid in the detection and monitoring of Parkinson’s disease.
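The similarity metrics named here are standard image-comparison measures. As an illustration (not the authors' code), MSE and PSNR between a template and a trace, rendered as small grayscale arrays, can be computed as:

```python
import math

def mse(a, b):
    """Mean squared error between two equal-sized grayscale images (lists of rows)."""
    n = sum(len(row) for row in a)
    return sum((pa - pb) ** 2 for ra, rb in zip(a, b) for pa, pb in zip(ra, rb)) / n

def psnr(a, b, max_val=255):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    err = mse(a, b)
    return float("inf") if err == 0 else 10 * math.log10(max_val ** 2 / err)

# hypothetical 2x2 template vs. patient trace
template = [[0, 255], [255, 0]]
trace    = [[0, 250], [255, 10]]
print(round(mse(template, trace), 2))    # 31.25
print(round(psnr(template, trace), 2))   # 33.18
```

Structural similarity (SSIM) additionally compares local luminance, contrast and structure, and is usually taken from an image-processing library rather than written by hand.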

João W. M. de Souza, Jefferson S. Almeida, Pedro Pedrosa Rebouças Filho
A Novel Restart Strategy for Solving Complex Multi-modal Optimization Problems Using Real-Coded Genetic Algorithm

Genetic algorithm (GA) is one of the most popular and robust stochastic optimization tools used in various fields of research and industrial applications. It has been applied to many global optimization problems over the last few decades. However, it offers poor theoretical assurance of reaching the globally optimal solution when solving complex multi-modal problems. A restart strategy plays an important role in overcoming this limitation of a GA to a certain extent. Although a few restart methods are available in the literature, they are not adequate. In this paper, a novel restart strategy is proposed for solving complex multi-modal optimization problems using a real-coded genetic algorithm (RCGA). To show the superiority of the proposed scheme, ten complex multi-modal test functions were selected from the CEC 2005 benchmark set, and the results are compared with those of the other strategies.
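As a rough illustration of the general idea (not the strategy proposed in this paper), a real-coded GA can trigger a restart when the best fitness stagnates, reseeding the population while keeping the incumbent; the sphere objective and every parameter value below are arbitrary stand-ins:

```python
import random

def sphere(x):
    """Toy uni-modal objective; the paper targets CEC 2005 multi-modal functions."""
    return sum(xi * xi for xi in x)

def rcga_with_restart(f, dim=3, pop_size=20, gens=200, stall_limit=15, seed=1):
    """Minimal real-coded GA that reinitializes the population (keeping the
    best individual) whenever no improvement is seen for `stall_limit` generations."""
    rng = random.Random(seed)
    new_ind = lambda: [rng.uniform(-5, 5) for _ in range(dim)]
    pop = [new_ind() for _ in range(pop_size)]
    best, stall = min(pop, key=f), 0
    for _ in range(gens):
        # arithmetic crossover of two tournament parents + Gaussian mutation
        parents = [min(rng.sample(pop, 3), key=f) for _ in range(2)]
        a = rng.random()
        child = [a * p + (1 - a) * q + rng.gauss(0, 0.1)
                 for p, q in zip(*parents)]
        pop[max(range(pop_size), key=lambda i: f(pop[i]))] = child  # replace worst
        cand = min(pop, key=f)
        if f(cand) < f(best):
            best, stall = cand, 0
        else:
            stall += 1
        if stall >= stall_limit:   # restart: fresh population, elitist carry-over
            pop = [best] + [new_ind() for _ in range(pop_size - 1)]
            stall = 0
    return best

best = rcga_with_restart(sphere)
print(round(sphere(best), 6))
```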

Amit Kumar Das, Dilip Kumar Pratihar
Evaluating SPL Quality with Metrics

A Software Product Line (SPL) is a set of systems that share a group of manageable features and satisfy the specific needs of a particular domain. The features of an SPL can be used in variable combinations to derive product variants in the SPL domain. Because SPLs promote product development through reuse, it is vital to have a means to measure their quality in terms of attributes such as complexity and reusability. In this paper, we propose a set of metrics to evaluate the quality of an SPL at three levels: the feature model, design and code. We adapted a set of software quality metrics and defined new metrics to deal with the inherent characteristics of SPLs, specifically the feature model and the traceability between features, design and code. Furthermore, to assist in interpreting the quality of a given SPL, we conducted an empirical study over ten open source SPLs to identify thresholds for the proposed metrics.

Jihen Maazoun, Nadia Bouassida, Hanêne Ben-Abdallah
Using Sentence Similarity Measure for Plagiarism Detection of Arabic Documents

Plagiarism detection is a challenging task, particularly in natural language texts. Plagiarism detection tools have been developed for diverse natural languages, especially English. In this paper, we propose a new plagiarism detection system devoted to Arabic text documents. This system is based on an algorithm that uses a semantic sentence similarity measure. The sentence similarity measure aggregates three components in a linear function: a lexical-based component (LS) covering the common words, a semantic-based component (SS) using synonymy relationships, and a syntactico-semantic-based component (SSS) exploiting semantic argument properties, notably semantic arguments and thematic roles; the latter measures the semantic similarity between words that play the same syntactic role. For word-based semantic similarity, an information content-based measure is used to estimate the SS degree between words by exploiting the LMF-standardized Arabic dictionary ElMadar. The performance of the proposed system was confirmed through experiments with student thesis reports, which showed promising capabilities in identifying literal and some types of intelligent plagiarism. We also demonstrate its advantages over other plagiarism detection tools, including Aplag.
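The linear aggregation of the three components can be sketched as follows. The weights and the SS/SSS scores below are placeholders: the real semantic and syntactico-semantic components rely on the ElMadar dictionary, which is not reproduced here.

```python
def lexical_sim(s1, s2):
    """Jaccard overlap of the word sets -- a stand-in for the common-words component LS."""
    w1, w2 = set(s1.lower().split()), set(s2.lower().split())
    return len(w1 & w2) / len(w1 | w2) if w1 | w2 else 0.0

def sentence_sim(s1, s2, semantic, syntactic, alpha=0.4, beta=0.3, gamma=0.3):
    """Linear aggregation of LS, SS and SSS; `semantic` and `syntactic` stand in
    for the synonymy-based and role-based scores, and the weights are arbitrary."""
    return alpha * lexical_sim(s1, s2) + beta * semantic + gamma * syntactic

score = sentence_sim("the student wrote the report",
                     "the student copied the report",
                     semantic=0.8, syntactic=0.9)
print(round(score, 3))   # 0.75
```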

Wafa Wali, Bilel Gargouri, Abdelmajid Ben Hamadou
Computer Aided Recognition and Classification of Coats of Arms

This paper describes the design and development of a system for the detection and recognition of coats of arms and their heraldic parts (components). It introduces the methods by which the individual features can be implemented. Most of the heraldic parts are segmented using convolutional neural networks, and the rest are segmented using an active contour model. The histogram of oriented gradients method was chosen for detecting coats of arms in an image. For training and functionality verification we used our own data, created as part of our research. The resulting system can serve as an auxiliary tool in heraldry and other sciences related to history.

Frantisek Vidensky, Frantisek Zboril Jr.
Mining Gene Expression Data: Patterns Extraction for Gene Regulatory Networks

Gene interaction modeling is a fundamental step in the understanding of cellular functions. High-throughput technologies (microarrays, …) generate a large volume of gene expression data. However, mining gene expression data is a very complex process; it becomes necessary to analyze these data to discover new knowledge about genes and their interactions in order to model the Gene Regulatory Network (GRN). In this paper, we compare several pattern extraction approaches used in the literature to infer Gene Regulatory Networks, and we propose to use gradual patterns of the form "when A increases, B decreases" to extract knowledge about genes. Furthermore, we rely on the Gene Ontology (GO) as a knowledge source to semantically annotate genes and to add information that can be useful in the knowledge extraction process.
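A gradual pattern such as "when A increases, B decreases" can be scored by the fraction of sample pairs that vary in opposite directions; this is a minimal sketch with hypothetical expression profiles, not the paper's mining algorithm:

```python
from itertools import combinations

def gradual_support(expr_a, expr_b):
    """Fraction of sample pairs concordant with the pattern
    'when A increases, B decreases' over paired expression profiles."""
    pairs = list(combinations(range(len(expr_a)), 2))
    ok = sum(1 for i, j in pairs
             if (expr_a[i] - expr_a[j]) * (expr_b[i] - expr_b[j]) < 0)
    return ok / len(pairs)

gene_a = [1.0, 2.0, 3.0, 4.0]   # hypothetical expression levels across 4 samples
gene_b = [9.0, 7.0, 5.0, 2.0]
print(gradual_support(gene_a, gene_b))   # 1.0 -> perfectly anti-monotone
```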

Manel Gouider, Ines Hamdi, Henda Ben Ghezala
Exploring Location and Ranking for Academic Venue Recommendation

Publishing scientific results is extremely important for every researcher. The concrete challenge is how to select the right academic venue, one that corresponds to the researcher's current interests, without missing the deadline. Due to the huge number of academic venues, especially in computer science, it is difficult for researchers to choose a conference or journal to submit their work to. A lot of time is wasted asking about conference topics, host countries, rankings, submission deadlines, etc. To tackle this problem, this paper proposes a recommendation approach that suggests personalized upcoming academic venues to computer scientists, matching their current research area as well as their preferences in terms of venue location and ranking. The current preferences of the target researcher and his or her community are taken into consideration. Experiments demonstrate the effectiveness of the proposed rating and recommendation method and show that it outperforms the baseline venue recommendations in terms of accuracy and ranking quality.

Nour Mhirsi, Imen Boukhris
Designing Compound MAPE Patterns for Self-adaptive Systems

Self-adaptive systems are able to change their own behavior whenever the software or hardware is not accomplishing what it was intended to do. In this context, the MAPE (Monitoring, Analysis, Planning, Execution) control loop model has been identified as a crucial element for realizing self-adaptation in software systems. Complex self-adaptive systems often exhibit several architectural patterns in their design, which leads to the need for architectural pattern composition. In this paper, we focus on modeling and composing MAPE patterns for decentralized control in self-adaptive systems. We illustrate our approach using a case study of a fall-detection ambient assisted living system for elderly people.
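The MAPE loop can be sketched as four pluggable phases; making each phase a plain function is one simple way to compose or decentralize loops. The fall-detection rule, threshold, and field names below are invented for illustration:

```python
class MapeLoop:
    """Minimal MAPE control loop: monitor -> analyze -> plan -> execute."""
    def __init__(self, monitor, analyze, plan, execute):
        self.monitor, self.analyze = monitor, analyze
        self.plan, self.execute = plan, execute

    def tick(self, system):
        """One pass of the loop over the managed system."""
        symptoms = self.analyze(self.monitor(system))
        if symptoms:
            self.execute(system, self.plan(symptoms))

# toy managed system: a wearable fall-detection sensor reading
system = {"accel": 3.2, "alarm": False}
loop = MapeLoop(
    monitor=lambda s: s["accel"],                        # sense acceleration
    analyze=lambda a: ["fall"] if a > 2.5 else [],       # invented threshold
    plan=lambda symptoms: {"alarm": True},               # adaptation plan
    execute=lambda s, change: s.update(change),          # apply the change
)
loop.tick(system)
print(system["alarm"])   # True
```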

Marwa Hachicha, Riadh Ben Halima, Ahmed Hadj Kacem
CRF+LG: A Hybrid Approach for the Portuguese Named Entity Recognition

Named Entity Recognition is an important and challenging task in Information Extraction. Conditional Random Fields (CRF) is a probabilistic method for structured prediction, which can be used for the Named Entity Recognition task. This paper presents the use of Conditional Random Fields for Named Entity Recognition in Portuguese texts, considering an additional feature informed by a Local Grammar. Local grammars are handmade rules to identify named entities within the text. Moreover, we also present a study of the bounds on CRF performance when a result coming from another classifier is used as an additional feature. Two well-known collections in Portuguese were used as the training and test sets, respectively. The results obtained outperform those of state-of-the-art systems reported in the literature for Portuguese.

Juliana P. C. Pirovani, Elias de Oliveira
A Secure and Efficient Temporal Features Based Framework for Cloud Using MapReduce

A new data mining method, temporal pattern identification in the cloud, is developed using MapReduce, a powerful feature of Hadoop, together with temporal features. The paper introduces a temporal-features-based authentication approach in which a user is verified in two phases: in phase 1, the user's facial features are stored in an HIB (Hadoop Image Bundle), and in phase 2, the user's credentials are checked using a symmetric encryption technique. The Hadoop cluster can be exported to the cloud on demand as IaaS, SaaS or PaaS. This cloud model is useful for deploying applications that transmit massive data over a cloud environment, and it supports different kinds of optimization techniques and functionalities. The framework can also provide optimized security for the massive data users store in the cloud environment. The approach tracks the temporal patterns of users entering the cloud by recording them in a log file. Experiments were conducted by hiding text, images, and both in a video file, and the performance of the cluster was monitored using Ganglia, an efficient monitoring tool, to provide auto-scaling functionality for the cluster.

P. Srinivasa Rao, P. E. S. N. Krishna Prasad
A Comparison of Machine Learning Methods to Identify Broken Bar Failures in Induction Motors Using Statistical Moments

Induction motors are regarded as the workhorse of industry. Given their importance, researchers have studied how to predict their faults in order to improve reliability. Condition monitoring plays an important role in this field, since failures can be predicted by analyzing operational data. This paper proposes the use of vibration signals, combined with Higher-Order Statistics (HOS) and machine learning methods, to detect broken bars in a squirrel-cage three-phase induction motor. Support Vector Machines (SVM), Multi-Layer Perceptron (MLP), Optimum-Path Forest (OPF) and Naive Bayes classifiers were used and achieved promising results: a high classification rate with SVM, a high sensitivity rate with MLP and fast training convergence with OPF.
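The HOS features referred to are the standardized statistical moments of the signal. A minimal sketch computing them for a toy vibration signal (the actual feature set used in the paper may differ):

```python
def moments(signal):
    """Mean, variance, skewness and kurtosis of a signal --
    the higher-order moments typically used as HOS features."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / n
    std = var ** 0.5
    skew = sum(((x - mean) / std) ** 3 for x in signal) / n
    kurt = sum(((x - mean) / std) ** 4 for x in signal) / n
    return mean, var, skew, kurt

sig = [0.0, 1.0, 0.0, -1.0, 0.0, 2.0, 0.0, -2.0]   # toy vibration samples
m, v, s, k = moments(sig)
print(m, v, round(s, 3), round(k, 3))   # symmetric signal -> zero skewness
```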

Navar de Medeiros Mendonça e Nascimento, Cláudio Marques de Sá Medeiros, Pedro Pedrosa Rebouças Filho
Canonical Correlation-Based Feature Fusion Approach for Scene Classification

Vision-based scene recognition and analysis is an emerging field, actively pursued in the computer vision and robotics communities. Classifying complex scenes in a real-time environment is a challenging task. In this paper, an indoor and outdoor scene recognition approach is proposed that fuses the global GIST descriptor and the Local Energy based Shape Histogram (LESH) descriptor through Canonical Correlation Analysis (CCA). Experiments were carried out on the publicly available 15-scene dataset, and the fused features were modeled by Random Forest and K-Nearest Neighbor classifiers. In the experimental results, K-NN exhibits the better performance in our proposed approach, with an average accuracy of 81.62%, outperforming the Random Forest classifier.

J. Arunnehru, A. Yashwanth, Shaik Shammer
A Mixed-Integer Linear Programming Model and a Simulated Annealing Algorithm for the Long-Term Preventive Maintenance Scheduling Problem

This paper addresses a problem arising in the long-term maintenance programming of an iron ore processing plant of a company in Brazil. The problem is a complex maintenance program in which the preventive maintenance orders for the equipment must be assigned to the available work teams over a 52-week planning horizon. We first developed a general mixed-integer programming model, which was not able to solve real instances using the CPLEX optimizer. Therefore, we also propose a heuristic approach, based on the Simulated Annealing meta-heuristic, that was able to handle those instances.

Roberto D. Aquino, Jonatas B. C. Chagas, Marcone J. F. Souza
Interval Valued Feature Selection for Classification of Logo Images

A model for the classification of logo images through symbolic feature selection is proposed in this paper. The proposed model extracts three global features, viz. color, texture, and shape, from logo images. These features are then fused to emphasize the superiority of the feature-level fusion strategy. Due to the large variations across the samples in each class, the samples are clustered and represented as symbolic interval-valued data during training. Symbolic feature selection is then adopted to show the efficacy of feature sub-setting in classifying the logo images. During testing, a query logo image is classified into one of three classes using only a small, discriminative set of features and a suitable symbolic classifier. For experimentation, a large corpus of 5044 color logo images was used. The proposed model is validated using suitable validity measures, viz. f-measure, precision, recall, accuracy, and time. The results, with comparative analysis, show the superiority of the symbolic feature selection method over no feature selection in terms of time and average f-measure.

D. S. Guru, N. Vinay Kumar
An Hierarchical Framework for Classroom Events Classification

In this paper, a model for classroom event classification is proposed. The major classroom events considered in this work are student drowsiness, group discussion, steady and alert, and noisy classroom. These events are classified using a two-level classification model that relies on simple threshold-based classifiers. In the first level, the noisy classroom and drowsiness classes are separated from the remaining classes based on a global threshold, computed from correlation coefficients across the intensity values of the video frames; the correlation scores obtained from each video are used for classification. In the second level, a partially labeled video is classified as a member of any of the four classes based on a local threshold computed from each class of videos; the local threshold is based on global characteristics extracted from the videos. For classification purposes, the events considered here are strictly mutually exclusive. Due to the lack of classroom event video datasets, a dataset consisting of 96 videos spread across the 4 classes was created. The proposed model is validated using suitable validity measures, viz. accuracy, precision, recall, and f-measure. The results show that the proposed model performs well in classifying the said events.
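The first-level decision can be illustrated with frame-to-frame Pearson correlation and a single global threshold. The threshold value, the class labels, and the toy 4-pixel "frames" below are invented for illustration:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length intensity vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def first_level(frames, global_threshold=0.9):
    """Mean correlation of consecutive frames: high -> near-static scene
    (steady/drowsy side), low -> dynamic scene (noisy/discussion side)."""
    scores = [pearson(a, b) for a, b in zip(frames, frames[1:])]
    return "static" if sum(scores) / len(scores) >= global_threshold else "dynamic"

static_clip  = [[10, 20, 30, 40]] * 4                      # identical frames
dynamic_clip = [[10, 20, 30, 40], [40, 10, 20, 30],
                [30, 40, 10, 20], [20, 30, 40, 10]]
print(first_level(static_clip), first_level(dynamic_clip))   # static dynamic
```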

D. S. Guru, N. Vinay Kumar, K. N. Mahalakshmi Gupta, S. D. Nandini, H. N. Rajini, G. Namratha Urs
Hand Gesture Recognition System Based on Local Binary Pattern Approach for Mobile Devices

Since the appearance of mobile devices, gesture recognition has been a challenging task in the field of computer vision. In this paper, a simple and fast algorithm for static hand gesture recognition on mobile devices is described. The hand pose is recognized using the Gentle AdaBoost learning algorithm and Local Binary Pattern features. The system is developed on the Android OS platform. The method consists of two steps: real-time gesture capture by the smartphone camera, and recognition of the hand gesture. The aim of this work is to allow the mobile device to interpret the sign made by the user without the need to touch the screen. The device performs all the steps necessary to recognize the hand posture without connecting to any remote device.
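The basic LBP operator the recognizer builds on assigns each pixel an 8-bit code from its 3x3 neighbourhood. A minimal sketch follows; the neighbour ordering is one common convention, not taken from the paper:

```python
def lbp_code(patch):
    """8-bit local binary pattern of a 3x3 patch: each neighbour is compared
    with the centre and contributes one bit, clockwise from the top-left."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= c:
            code |= 1 << bit
    return code

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch))   # 241
```

Sliding this operator over the whole image and histogramming the codes gives the LBP feature vector fed to the boosted classifier.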

Houssem Lahiani, Monji Kherallah, Mahmoud Neji
An Efficient Real-Time Approach for Detection of Parkinson’s Disease

Parkinson’s disease is one of the most complex neurological disorders and has affected mankind for ages. Recent studies in biomedical engineering have shown that by analyzing a person's verbal response, it is highly feasible to predict the odds of having the disease. A simple analysis of an utterance of an “ahh” sound can indicate a person’s state of neurological health even from a layman’s perspective. The paper first applies the SVM (Support Vector Machine) learning algorithm to predict the odds of having Parkinson’s disease from a variety of audio samples from healthy and unhealthy populations. Cepstral features are then used to develop a real-time, user-friendly application that asks the user to utter “ahh” for as long and as boldly as possible, and finally displays whether the user shows signs of Parkinson’s disease. The real-time program can prove to be a helpful tool for the public as well as the medical community, assisting in early diagnosis of Parkinson’s disease.

Joyjit Chatterjee, Ayush Saxena, Garima Vyas, Anu Mehra
Dual Image Encryption Technique: Using Logistic Map and Noise

Any web-based computer system is susceptible to attacks from hackers who intrude into a system to obtain information for illegal use, as well as to sabotage a company’s business operations. This is where encryption techniques and image security are needed. Image security includes visual authentication and access control based on user image identification. In this paper, we propose a secure image encryption technique using chaotic logistic maps. The proposed algorithm uses a logistic map to shuffle the image and the noise, and then superimposes them. Two shares of the resulting image are then created and XORed to make the image ready for transmission. Performance analysis using histograms and correlation coefficients shows that the algorithm has high security and strong robustness. The algorithm uses a logistic map and XOR operations to achieve a multi-chaos encryption effect.
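The chaotic keystream idea behind logistic-map encryption can be sketched as follows. This is a simplified illustration (plain XOR of a keystream, without the shuffling and share-generation steps of the paper), and the map parameters are arbitrary:

```python
def logistic_keystream(length, x0=0.42, r=3.99):
    """Byte keystream from the chaotic logistic map x -> r*x*(1-x);
    x0 acts as the secret key, r is chosen in the chaotic regime."""
    x, stream = x0, []
    for _ in range(length):
        x = r * x * (1 - x)
        stream.append(int(x * 256) % 256)
    return stream

def xor_cipher(pixels, key=0.42):
    """Encrypt or decrypt a flat pixel list: XOR with the keystream."""
    ks = logistic_keystream(len(pixels), x0=key)
    return [p ^ k for p, k in zip(pixels, ks)]

image  = [12, 200, 45, 255, 0, 97]           # toy pixel values
cipher = xor_cipher(image)
print(xor_cipher(cipher) == image)           # XOR is its own inverse -> True
```

Because the map is extremely sensitive to x0, decrypting with even a slightly different key yields noise rather than the image.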

Muskaan Kalra, Hemant Kumar Dua, Reena Singh
A Memetic Algorithm for the Network Construction Problem with Due Dates

In this work, we present an effective memetic algorithm for a transportation network reconstruction problem. The problem addressed arises when the connections of a transportation network have been destroyed by a disaster and need to be rebuilt by a construction crew so as to minimize the damage caused in the post-disaster phase. Each vertex of the network has a due date that indicates its self-sufficiency, i.e., a time that the vertex may remain isolated from the network without causing further damage. The objective of the problem is to reconnect all the vertices of the network so as to minimize the maximum lateness in the recovery of the vertices. The computational results show that our memetic algorithm is able to find solutions of the same or higher quality in short computation times for most instances when compared to the methods already present in the literature.

Jonatas B. C. Chagas, André G. Santos, Marcone J. F. Souza
Incremental Real Time Support Vector Machines

This paper investigates the problem of handling large data streams and adding new attributes over time. We propose a new approach that employs dynamic learning when classifying dynamic datasets. Our proposal is the incremental real-time support vector machine (I-RTSVM), an improved version of the support vector machine (SVM) and LASVM. On one hand, the I-RTSVM handles large databases and uses the model produced by LASVM to train the data; it updates this model to accommodate new observations in the test phase without re-training. On the other hand, the I-RTSVM presents a dynamic approach that adds attributes over time: it uses the final classification model and updates it with new attributes without re-training from the beginning. Experiments are conducted on real-world UCI databases using different evaluation criteria. The results of the comparison between the I-RTSVM and other approaches, mainly SVM and LASVM, show the efficiency of our proposal.

Fahmi Ben Rejab, Kaouther Nouira
Content-Based Classification Approach for Video-Spam Identification

In this paper the authors address YouTube comment spam. The work was carried out on a large, labeled dataset of text comments. Filtering and pre-processing were done to speed up detection, eliminate redundancies and increase accuracy. The spam flag on each set of text comments was used to check the accuracy of the classification techniques. An improved algorithm based on term frequencies is also proposed. The results were compared based on accuracy score and F-score with respect to the spam flag corresponding to each comment. Further, the accuracy of the SVM model was compared with respect to dataset size and pre-processing, as well as against XGBoost.
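A term-frequency-based spam score of the kind described can be sketched as a log-odds sum over smoothed per-class term frequencies. The toy comments and the exact scoring rule are illustrative, not the paper's algorithm:

```python
import math
from collections import Counter

def train_tf(comments, labels):
    """Per-class term frequencies from labelled comments (spam flag 0/1)."""
    tf = {0: Counter(), 1: Counter()}
    for text, label in zip(comments, labels):
        tf[label].update(text.lower().split())
    return tf

def predict(tf, text, smoothing=1):
    """Classify by the summed log-odds of each term's smoothed frequency."""
    total = {c: sum(tf[c].values()) for c in (0, 1)}
    score = 0.0
    for w in text.lower().split():
        p_spam = (tf[1][w] + smoothing) / (total[1] + smoothing)
        p_ham  = (tf[0][w] + smoothing) / (total[0] + smoothing)
        score += math.log(p_spam / p_ham)
    return int(score > 0)

comments = ["check out my channel", "great video thanks",
            "subscribe to my channel", "nice explanation"]
labels   = [1, 0, 1, 0]
tf = train_tf(comments, labels)
print(predict(tf, "visit my channel"))   # 1 -> flagged as spam
```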

Palak Agarwal, Mahak Sharma, Gagandeep Kaur
Kinematic Analysis and Simulation of a 6 DOF Robot in a Web-Based Platform Using CAD File Import

The current trend of simulator-based analysis, especially in robotics, has grown broadly. Such simulation gives an initial familiarization with the system, which is very useful for introductory robotics courses as well as research work. Simulations developed with commercial software require installation on the user’s system for any animation or analysis. An open-source platform for robot motion analysis therefore has much impact due to its light weight, better graphics, and ability to run in the browser. This paper describes an efficient and straightforward approach for building a simulation model of the 6-degree-of-freedom KGP50 robot in a web interface using WebGL technology. A component is first designed in SolidWorks and then imported directly into the WebGL-based platform via the Three.js library. The forward kinematics analysis of the KGP50 robot is presented through this simulator, which illustrates the whole framework and allows exploration of the robot.
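Forward kinematics of a serial arm is a chain of per-joint transforms. A planar two-link sketch of the idea follows (the real KGP50 has 6 DOF and full 3-D transforms, and the link lengths here are arbitrary):

```python
import math

def forward_kinematics(links, angles):
    """End-effector (x, y) of a planar serial arm given link lengths and joint
    angles -- the same chain-of-transforms idea a simulator applies per joint."""
    x = y = theta = 0.0
    for L, a in zip(links, angles):
        theta += a                  # accumulate joint rotations along the chain
        x += L * math.cos(theta)
        y += L * math.sin(theta)
    return x, y

# two unit links: rotate up 90 deg, then back down 90 deg relative to link 1
x, y = forward_kinematics([1.0, 1.0], [math.pi / 2, -math.pi / 2])
print(round(x, 6), round(y, 6))   # 1.0 1.0
```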

Ujjal Dey, Kumar Cheruvu Siva
Large Scale Deep Network Architecture of CNN for Unconstraint Visual Activity Analytics

Handling massive datasets for information retrieval and feature learning is expected to be one of the most challenging problems in machine learning and computer vision research. This work focuses on the data scalability problems faced by machine learning classifiers in social media activity analysis, and highlights machine learning techniques that can provide promising results on large, unstructured, complex social media activity data. It builds on biologically inspired neural network processing techniques and introduces extensions of such networks to address the problems of complex data in human activity analysis. Various CNN architectures and the phases of visual data processing for detection and recognition problems are presented, and selected techniques that motivate deep network learning across research domains with complex data are highlighted. Activation functions and a sequence pooling methodology are introduced for fast training of convolutional networks on massive, unstructured human activity recognition data. Overall, it is shown that the fast-training behavior of a network on large-scale, complex data can be improved by the choice of activation function and pooling methodology at the fully connected layers. Moreover, these deep learning and data analytics techniques are highly applicable to human health, medicine, robotics, education and industrial applications.

Naresh Kumar
An Automated Support Tool to Compute State Redundancy Semantic Metric

Semantic metrics are quantitative measures of software quality characteristics based on semantic information extracted from the different phases of the software process. The empirical validation of these metrics is necessary to consider them quality indicators, and it cannot be achieved without automatic computation based on appropriate software tools. However, some semantic metrics are only based on theoretical formulations and require further empirical studies and experiments to validate and exploit them. This paper takes one of these theoretical metrics and computes it automatically over various basic programs. The experimental results show that automatic computation of this metric is beneficial in two ways: on one side, it plays an efficient role in computing semantic metrics from the program's functional behavior; on the other side, this step is essential to empirically validate the metric as a software quality indicator.

Dalila Amara, Ezzeddine Fatnassi, Latifa Rabai
Computing Theory Prime Implicates in Modal Logic

The algorithm to compute theory prime implicates, a generalization of prime implicates, in propositional logic was suggested in [9]. As a preliminary result, in this paper we extend that algorithm to compute the theory prime implicates of a modal knowledge base X with respect to another modal knowledge base $$\Box Y$$ using [1], where Y is a propositional knowledge base and $$X\models Y$$ in the modal system $$\mathcal {T}$$, and we prove its correctness. We also prove that it is an equivalence-preserving knowledge compilation and that the size of the theory prime implicates of X with respect to $$\Box Y$$ is less than the size of the prime implicates of $$X\wedge \Box Y$$. We also extend the query answering algorithm to modal logic.

Manoj K. Raut, Tushar V. Kokane, Rishabh Agarwal
Fault Tolerance in Real-Time Systems: A Review

Real-time systems are safety- or mission-critical computing systems that behave deterministically and react correctly to inputs (or changes in the physical environment) in a timely manner. As in all computing systems, there is always the possibility of faults in real-time systems, and due to their time-critical missions a fault-tolerance mechanism should be constructed. Fault tolerance can be achieved by hardware, software or time redundancy, especially in safety-critical applications, where strict time and cost constraints must be satisfied. This leaves us in a situation where the constraints must be satisfied and, at the same time, faults must be tolerated. In this paper, the basic concepts, terminology, history, features and techniques of the fault tolerance approach in real-time systems are detailed, and related works are reviewed to compose a good resource for researchers.

Egemen Ertugrul, Ozgur Koray Sahingoz
Gauss-Newton Representation Based Algorithm for Magnetic Resonance Brain Image Classification

Brain tumors are a harmful disease worldwide. Every year, many adults as well as children die due to brain tumors. Early detection of the tumor can enhance the survival rate. Many brain image classification schemes have been reported in the literature for early tumor detection, and it has become a challenging problem in the field of medical image analysis. In this paper, a novel hybrid method is proposed that uses the Gauss-Newton representation based algorithm (GNRBA) with a feature selection approach. The proposed method is threefold. Firstly, the discrete wavelet transform (DWT) is used as a pre-processing step to extract features from the brain images. Secondly, principal component analysis (PCA) is used to address the dimensionality problem. Finally, the extracted features in the lower-dimensional space are used by the GNRBA for classification. To show the robustness of the proposed method, real human brain magnetic resonance (MR) images are used in the experiments. The results show that the performance of the proposed method is superior to existing brain image classification methods.

Lingraj Dora, Sanjay Agrawal, Rutuparna Panda
Evaluating Different Similarity Measures for Automatic Biomedical Text Summarization

Automatic biomedical text summarization is maturing and can provide a solution for biomedical researchers to access the information they need efficiently. Biomedical summarization approaches often rely on the similarity measure to model the source document, mainly when they employ redundancy removal or graph structures. In this paper, we examine the impact of the similarity measure on the performance of the summarization methods. We model the document as a weighted graph. Various similarity measures are used to build different graphs based on biomedical concepts, semantic types and a combination of them. We next use the graphs to generate and evaluate the automatic summaries. The results suggest that the selection of the similarity measure has a substantial effect on the quality of the summaries (≈37% improvement in ROUGE-2 metric, and ≈29% in ROUGE-SU4). The results also demonstrate that exploiting both biomedical concepts and semantic types yields slightly better performance.
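The weighted-graph idea in this abstract can be sketched with a toy similarity measure. This is an illustrative sketch only: Jaccard overlap of concept sets stands in for the paper's similarity measures, and sentences are scored by weighted degree rather than by the full summarization pipeline.

```python
def jaccard(a, b):
    """Overlap of two concept sets, used here as the edge-weight measure."""
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_sentences(concept_sets):
    """Weighted-graph sketch: nodes are sentences, edge weights are concept
    overlap; each sentence is scored by its weighted degree."""
    n = len(concept_sets)
    score = [sum(jaccard(concept_sets[i], concept_sets[j])
                 for j in range(n) if j != i) for i in range(n)]
    return sorted(range(n), key=lambda i: score[i], reverse=True)

# toy sentences mapped to UMLS-style concept identifiers (hypothetical)
sents = [{"C01", "C02"}, {"C01", "C02", "C03"}, {"C09"}]
print(rank_sentences(sents))
```

Swapping `jaccard` for a different measure changes the graph weights and hence the ranking, which is exactly the effect the paper quantifies with ROUGE.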

Mozhgan Nasr Azadani, Nasser Ghadiri
Fingerprint Image Enhancement Using Steerable Filter in Wavelet Domain

The proposed work is to enhance the features of the fingerprint image using steerable filter in wavelet domain to increase the accuracy and speed of Automatic fingerprint identification system. The proposed method uses steerable filter and wavelet. The steerable filter allows filtering process adaptively to any orientation and determining analytically the filter output as a function of orientation and the wavelet domain speeds up the computation process. The steerable filter is applied on each local blocks of approximation image of wavelet transform for tuning up the fingerprint image features and then smoothing the resultant which leads to enhanced image. Experiments are conducted on FVC databases and results show that enhancement process reveals clear visualization of fingerprint images.

K. S. Jeyalakshmi, T. Kathirvalavakumar
Privacy Preserving Hu’s Moments in Encrypted Domain

Privacy preserving image processing is an active area of research that focuses on ensuring security of sensitive images stored in an untrusted environment like cloud. Hu introduced the concept of moment invariants that are widely employed in pattern recognition. The moment invariants are used to represent the global shape features of an image that are insensitive to basic geometric transformations like rotation, scaling and translation. In view of this fact, this paper addresses the problem of moment invariants computation in an encrypted domain. A secure Hu’s moments computation is proposed based on a fully homomorphic encryption scheme. This method may be employed for feature extraction without revealing sensitive image information in an untrusted environment.
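The moment invariants underlying this paper can be illustrated in the plaintext domain. The sketch below computes the first Hu invariant (phi1) from normalized central moments and checks its translation invariance; the paper's contribution, performing this computation under fully homomorphic encryption, is not shown.

```python
def central_moment(img, p, q):
    """Central moment mu_pq of a 2-D grayscale image given as nested lists."""
    m00 = sum(sum(row) for row in img)
    xbar = sum(x * v for row in img for x, v in enumerate(row)) / m00
    ybar = sum(y * v for y, row in enumerate(img) for v in row) / m00
    return sum((x - xbar) ** p * (y - ybar) ** q * v
               for y, row in enumerate(img) for x, v in enumerate(row))

def hu_first(img):
    """First Hu invariant phi1 = eta20 + eta02 (translation/scale invariant)."""
    m00 = sum(sum(row) for row in img)
    eta = lambda p, q: central_moment(img, p, q) / m00 ** (1 + (p + q) / 2)
    return eta(2, 0) + eta(0, 2)

img = [[0, 1, 0],
       [1, 2, 1],
       [0, 1, 0]]
shifted = [[0, 0, 0, 0],       # same shape translated inside a larger frame
           [0, 0, 1, 0],
           [0, 1, 2, 1],
           [0, 0, 1, 0]]
print(round(hu_first(img), 6) == round(hu_first(shifted), 6))
```

The equality of phi1 on the original and shifted images demonstrates the invariance property that makes these moments useful as global shape features.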

G. Preethi, Aswani Kumar Cherukuri
Ensemble of Feature Selection Methods for Text Classification: An Analytical Study

In this paper, alternative models for ensembling feature selection methods for text classification are studied. An analytical study of three different models with various rank aggregation techniques has been made. The three proposed ensembling models are homogeneous, heterogeneous and hybrid ensembles. In a homogeneous ensemble, the training feature matrix is randomly partitioned into multiple equal-sized training matrices. A common feature evaluation function (FEF) is applied to all the smaller training matrices so as to obtain multiple ranks for each feature, and a final score for each feature is computed by applying a suitable rank aggregation method. In a heterogeneous ensemble, instead of partitioning the training matrix, multiple FEFs are applied to the same training matrix to obtain multiple rankings for every feature, which are likewise aggregated into a final score per feature. A hybrid ensemble combines the ranks obtained by multiple homogeneous ensembles through multiple FEFs. Experiments on two benchmark text collections show that, in most cases, the proposed ensembling methods outperform any of the feature selection methods applied individually.
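The rank aggregation step common to all three ensembles can be sketched with a simple Borda count. This is an illustrative sketch: the FEF names are hypothetical, and the paper may use other aggregation methods.

```python
def aggregate_ranks(rankings):
    """Borda-style aggregation: rankings is a list of lists, each ordering
    features best-first. Returns features sorted by summed score (best first)."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, feat in enumerate(ranking):
            # a feature ranked at position pos out of n earns n - pos points
            scores[feat] = scores.get(feat, 0) + (n - pos)
    return sorted(scores, key=scores.get, reverse=True)

# three hypothetical FEFs rank four features differently
r1 = ["tfidf", "chi2", "ig", "df"]
r2 = ["chi2", "tfidf", "df", "ig"]
r3 = ["tfidf", "ig", "chi2", "df"]
print(aggregate_ranks([r1, r2, r3]))
```

In the homogeneous model the three rankings would come from one FEF on three data partitions; in the heterogeneous model, from three FEFs on the full matrix.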

D. S. Guru, Mahamad Suhil, S. K. Pavithra, G. R. Priya
Correlation Scaled Principal Component Regression

Multiple regression is a standard model for prediction. With a large number of predictor variables, multiple regression becomes complex, and it may underfit when the dimensionality (number of variables) is reduced too far. Most regression techniques are either correlation based or principal-component based. Correlation-based methods become ineffective if the data contain a large amount of multicollinearity, while the principal-component approach becomes ineffective if the response variable depends on predictors with low variance. In this paper, we propose a Correlation Scaled Principal Component Regression (CSPCR) method which constructs orthogonal predictor variables scaled by their correlation with the response variable. That is, the predictors are multiplied by their correlation with the response variable, and PCR is then applied on a varying number of principal components. This allows a greater reduction in the number of predictors than other standard methods such as Principal Component Regression (PCR) and Least Squares Regression (LSR). Computational results show that it gives a higher coefficient of determination than PCR and simple correlation-based regression (CBR).
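The correlation-scaling idea can be sketched as follows, under the assumption (mine, not the paper's) that scaling simply means multiplying each centred predictor column by its sample correlation with the response before extracting components.

```python
import numpy as np

def cspcr_fit(X, y, k):
    """CSPCR sketch: scale each centred predictor by its correlation with y,
    project onto the top-k principal directions, then fit ordinary least
    squares on the component scores. Returns fitted values."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = np.array([np.corrcoef(Xc[:, j], yc)[0, 1] for j in range(X.shape[1])])
    Xs = Xc * corr                              # correlation scaling
    _, _, vt = np.linalg.svd(Xs, full_matrices=False)
    Z = Xs @ vt[:k].T                           # scores on top-k components
    beta, *_ = np.linalg.lstsq(Z, yc, rcond=None)
    return Z @ beta + y.mean()

# toy data: response depends on two of five independent predictors
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))
y = 2 * X[:, 0] - X[:, 2] + 0.1 * rng.normal(size=50)
yhat = cspcr_fit(X, y, k=3)
r2 = 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)
print(r2 > 0.8)
```

Because scaling shrinks low-correlation columns, the leading components concentrate on predictors that matter for y, which is what permits the stronger dimension reduction the abstract claims.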

Krishna Kumar Singh, Amit Patel, Chiranjeevi Sadu
Automated Detection of Diabetic Retinopathy Using Weighted Support Vector Machines

Diabetic retinopathy is a complication of the eye caused by damage to the retinal cells due to prolonged suffering from diabetes mellitus and may lead to irreversible vision impairment in middle-age adults. The proposed algorithm detects the presence of Diabetic Retinopathy (DR) by segmentation of vital morphological features like Optic Disc, Fovea, blood vessels, and abnormalities like hemorrhages, exudates and neovascularization. The images are then classified using Support Vector Machines, based on data points in a multi-dimensional feature space. The proposed method is tested on 140 images from the Messidor database, from which 75 images are used to train an SVM model and the remaining 65 are used as inputs to the classifier.

Soumyadeep Bhattacharjee, Avik Banerjee
Predictive Analysis of Alertness Related Features for Driver Drowsiness Detection

Drowsiness while driving is a major cause of accidents and has socio-economic and psychological impacts on those affected. In Intelligent Transportation Systems (ITS), detecting the drowsy or alert state of the driver is an interesting research problem. This paper proposes a novel method to detect the driver's drowsy state based on three kinds of parameters: physiological, environmental and vehicular. The model takes a simple approach, achieves results comparable to the state of the art with an ROC score of 81.28, and also elaborates on the specificity and sensitivity metrics.

Sachin Kumar, Anushtha Kalia, Arjun Sharma
Association Rules Transformation for Knowledge Integration and Warehousing

The knowledge management process is a set of procedures and tools applied to facilitate capturing, sharing and effectively using knowledge. However, knowledge collected from organizations is generally expressed in various formalisms and is therefore heterogeneous. Thus, a Knowledge Warehouse (KW), which is a solution for implementing all phases of the knowledge management process, should resolve this structural heterogeneity before loading and storing knowledge. In this paper, we are interested in knowledge normalization. More precisely, we first introduce our proposed architecture for a KW, and then present the MOT (Modeling with Object Types) language for knowledge representation. Since our objective is to transform heterogeneous knowledge into MOT as a pivot model, we suggest one meta-model for MOT and another for the explicit knowledge extracted through the association rules technique. Thereafter, we define eight transformation rules and an algorithm to transform an association rules model into the MOT model.

Rim Ayadi, Yasser Hachaichi, Jamel Feki
Abnormal High-Level Event Recognition in Parking lot

In this paper, we present an approach to automatically detect abnormal high-level events in a parking lot. A high-level event, or scenario, is a combination of simple events with spatial, temporal and logical relations. We propose to define the simple events through a spatio-temporal analysis of features extracted by low-level processing, which involves detecting, tracking and classifying moving objects. To naturally model the relations between simpler events, a Petri net model is used. Experimental results based on recorded parking video data sets and public data sets illustrate the performance of our approach.

Najla Bouarada Ghrab, Rania Rebai Boukhriss, Emna Fendri, Mohamed Hammami
Optimum Feature Selection Using Firefly Algorithm for Keystroke Dynamics

Keystroke dynamics, an automated and promising biometric technique, is used to recognize an individual based on an analysis of the user's typing patterns. The processing steps involved in keystroke dynamics are data collection, feature extraction and feature selection. First, statistical measures of feature characteristics such as latency, duration and digraph are computed during feature extraction. Various advanced optimization techniques have been applied by researchers to mimic the behavioral pattern of keystroke dynamics. In this study, the Firefly algorithm (FA) is proposed for feature selection. The performance of FA is computed and compared with existing techniques; the convergence rate and the number of generations needed to reach the optimum solution are found to be 41% and 18% lower, respectively, than those of other algorithms.

Akila Muthuramalingam, Jenifa Gnanamanickam, Ramzan Muhammad
Multi-UAV Path Planning with Multi Colony Ant Optimization

In the last few decades, Unmanned Aerial Vehicles (UAVs) have been widely used in domains such as search and rescue missions, firefighting and farming. To increase efficiency and decrease mission completion time, swarms consisting of a team of UAVs are preferred in most of these areas over a single large UAV, since they decrease the total cost and increase the reliability of the whole system. One of the important research topics for UAV autonomous control systems is the optimization of flight path planning, especially in complex environments. Many researchers draw on evolutionary and/or swarm algorithms. However, as the problem grows more complex, with more control points to check and more mission requirements, additional mechanisms such as parallel programming and/or multi-core computing are needed to decrease the computation time. In this paper, to solve the path planning problem of multiple UAVs, an enhanced version of the Ant Colony Optimization (ACO) algorithm, named multi-colony ant optimization, is proposed. To increase computing speed, the proposed algorithm is implemented on a parallel computing platform, CUDA. The experimental results show the efficiency and effectiveness of the proposed approach under different scenarios.

Ugur Cekmez, Mustafa Ozsiginan, Ozgur Koray Sahingoz
An Efficient Method for Detecting Fraudulent Transactions Using Classification Algorithms on an Anonymized Credit Card Data Set

Credit card fraud causes businesses and banks to lose time and money, and detecting fraudulent transactions before a transaction is finalized helps them save these resources. This research compares the fraud detection accuracy of different sampling techniques and classification algorithms, and proposes an efficient method of detecting fraud using machine learning. An anonymized data set from Kaggle was used, in which each transaction is labeled as fraudulent or legitimate. The severe imbalance between fraudulent and non-fraudulent data caused the algorithms to underperform; this was addressed by applying sampling techniques. The combination of undersampling and SMOTE raised the recall of the classification algorithms, and the k-NN algorithm showed the highest recall compared to the other algorithms.
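The SMOTE step used to rebalance the data can be sketched in a few lines. This is a minimal sketch of the standard algorithm on toy 2-D points; production work would normally use a library implementation (e.g. imbalanced-learn) rather than this code.

```python
import random, math

def smote(minority, n_new, k=3, seed=0):
    """Minimal SMOTE sketch: synthesize n_new points by interpolating between
    a random minority sample and one of its k nearest minority neighbours."""
    rng = random.Random(seed)
    synth = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neigh = sorted((p for p in minority if p is not x),
                       key=lambda p: math.dist(x, p))[:k]
        nb = rng.choice(neigh)
        lam = rng.random()                      # interpolation factor in [0, 1)
        synth.append(tuple(xi + lam * (ni - xi) for xi, ni in zip(x, nb)))
    return synth

fraud = [(0.9, 0.8), (1.0, 1.1), (1.2, 0.9)]    # toy minority (fraud) class
new_pts = smote(fraud, n_new=5)
print(len(new_pts))
```

The synthesized points lie on segments between real minority samples, so the augmented class stays inside the original fraud region; undersampling the majority class would then complete the rebalancing described in the abstract.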

Sylvester Manlangit, Sami Azam, Bharanidharan Shanmugam, Krishnan Kannoorpatti, Mirjam Jonkman, Arasu Balasubramaniam
A Deep Convolution Neural Network Based Model for Enhancing Text Video Frames for Detection

The main cause of poor results in video text detection is the low quality of frames; blurring, complex backgrounds and uneven illumination are a few of the challenges encountered in image enhancement. This paper proposes a technique for enhancing image quality for better human perception, along with text detection, for video frames. A set of effective CNN denoisers is designed and trained to denoise an image by adopting a variable-splitting technique; the robust denoisers are plugged into model-based optimization methods within an HQS (half-quadratic splitting) framework to handle image deblurring and super-resolution problems. Further, to detect text in the denoised frames, we use state-of-the-art methods such as MSER (Maximally Stable Extremal Regions) and SWT (Stroke Width Transform). Experiments on our own database and on the ICDAR and YVT databases demonstrate the proposed work in terms of precision, recall and F-measure.

C. Sunil, H. K. Chethan, K. S. Raghunandan, G. Hemantha Kumar
A Novel Approach for Steganography App in Android OS

The process of hiding information in a scientific and artistic way is known as steganography; the hidden information cannot be easily retrieved or accessed and is unidentifiable. In this research, some existing methods for image steganography are examined: the LSB (Least Significant Bit) substitution method, the DCT (Discrete Cosine Transform) and the DWT (Discrete Wavelet Transform). A comparative analysis of these techniques showed that LSB is the easiest and most efficient way of hiding information, since it hides the secret message in binary coding, and an application for image steganography was created using it. However, this technique can be easily targeted by attackers. To overcome this problem, an RSA algorithm was applied to the least significant bits of the image pixels. Additionally, a QR code was generated in the encryption process to make it more secure and to allow the quality of the image to remain as intact as it was before the encryption. PNG and JPEG formats were used as the cover image in the app, and the findings indicated that the data were fully recovered.
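The LSB substitution scheme at the core of this app can be sketched on a flat list of grayscale pixel values. This sketch shows only the embedding and extraction of raw bytes; the paper's RSA encryption and QR code layers are omitted.

```python
def embed_lsb(pixels, message):
    """Hide message bytes in the least significant bits of pixel values.
    pixels: flat list of 0-255 ints; needs 8 pixels per message byte."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(pixels), "cover image too small"
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b          # overwrite only the lowest bit
    return out

def extract_lsb(pixels, n_bytes):
    """Recover n_bytes hidden by embed_lsb."""
    data = []
    for j in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[8 * j + i] & 1) << i
        data.append(byte)
    return bytes(data)

cover = list(range(64))                     # toy 8x8 grayscale image, flattened
stego = embed_lsb(cover, b"hi")
print(extract_lsb(stego, 2))
```

Each pixel changes by at most 1 intensity level, which is why LSB embedding is visually imperceptible yet trivially reversible by anyone who knows the scheme, motivating the extra RSA layer in the paper.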

Kushal Gurung, Sami Azam, Bharanidharan Shanmugam, Krishnan Kannoorpatti, Mirjam Jonkman, Arasu Balasubramaniam
Exploring Human Movement Behaviour Based on Mobility Association Rule Mining of Trajectory Traces

With the emergence of location sensing technologies, there is growing interest in exploring spatio-temporal GPS (Global Positioning System) traces collected from various moving agents (e.g., mobile users, GPS-equipped vehicles) to facilitate location-aware applications. This paper therefore focuses on finding meaningful patterns in spatio-temporal data (GPS logs) of human movement history and measures the interestingness of the extracted patterns. An experimental evaluation on a GPS data set from an academic campus demonstrates the efficacy of the system and its potential to extract meaningful rules from a real-life dataset.
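Mining mobility association rules of the kind described here can be sketched with a tiny support/confidence rule miner over daily visit sets. The place names and thresholds are illustrative, not from the paper.

```python
from itertools import combinations

def association_rules(transactions, min_sup, min_conf):
    """Tiny rule miner over location visits: emits (A, B, support, confidence)
    for single-antecedent, single-consequent rules A -> B."""
    n = len(transactions)
    items = {i for t in transactions for i in t}
    sup = lambda s: sum(s <= t for t in transactions) / n
    rules = []
    for a, b in combinations(sorted(items), 2):
        for x, y in ((a, b), (b, a)):
            s = sup({x, y})
            if s >= min_sup and s / sup({x}) >= min_conf:
                rules.append((x, y, s, round(s / sup({x}), 2)))
    return rules

# toy daily trajectories reduced to sets of visited places (hypothetical)
days = [{"hostel", "lab"}, {"hostel", "lab", "canteen"},
        {"hostel", "lab"}, {"hostel", "canteen"}]
print(association_rules(days, min_sup=0.5, min_conf=0.8))
```

A rule such as `lab -> hostel` with confidence 1.0 expresses a movement regularity; interestingness measures beyond support and confidence would filter such rules further, as the abstract indicates.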

Shreya Ghosh, Soumya K. Ghosh
Image Sentiment Analysis Using Convolutional Neural Network

Visual media is one of the most powerful channels for expressing emotions and sentiments. Social media users increasingly use multimedia such as images and videos to express their opinions, views and experiences. Sentiment analysis of this vast user-generated visual content can aid better and improved extraction of user sentiments, which motivated us to focus on image sentiment analysis. Significant advances have been made in this area; however, there is much more to explore in visual sentiment analysis using deep learning techniques. In our study, we design a visual sentiment framework using a convolutional neural network. For experimentation, we use Flickr images for training and Twitter images for testing. The results show that the proposed framework achieves improved performance in analyzing the sentiments associated with images.

Akshi Kumar, Arunima Jaiswal
Cluster Based Approaches for Keyframe Selection in Natural Flower Videos

The selection of representative keyframes from a natural flower video is an important task in the archival and retrieval of flower videos. In this paper, we propose an algorithmic model for automatic selection of keyframes from a natural flower video. The proposed model consists of two alternative methods for keyframe selection. In the first method, K-means clustering is applied to the frames of a given video using color, gradient, texture and entropy features, and the cluster centroids are taken as the keyframes. In the second method, the frames are initially clustered through a Gaussian Mixture Model (GMM) using entropy features, and K-means clustering is then applied to the resultant clusters to obtain keyframes. Of the two sets of keyframes generated by the alternative methods, the one with the higher fidelity value is chosen as the final set of keyframes for the video. Experimentation conducted on our own dataset shows that the proposed model is efficient in generating all possible keyframes of a given flower video.
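The first method (K-means over per-frame features, centroids as keyframes) can be sketched as follows. This is a toy illustration with 2-D feature vectors; it returns the index of the frame nearest each centroid rather than the centroid itself, a common practical variant.

```python
import random, math

def kmeans_keyframes(frames, k, iters=20, seed=0):
    """Toy keyframe selection: run k-means over per-frame feature vectors,
    then return the index of the frame nearest each converged centroid."""
    rng = random.Random(seed)
    cent = rng.sample(frames, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for f in frames:
            groups[min(range(k), key=lambda j: math.dist(f, cent[j]))].append(f)
        # recompute centroids; keep the old one if a group went empty
        cent = [tuple(sum(c) / len(g) for c in zip(*g)) if g else cent[j]
                for j, g in enumerate(groups)]
    return sorted(min(range(len(frames)), key=lambda i: math.dist(frames[i], c))
                  for c in cent)

# two visually distinct "shots": five dark frames, then five bright frames
video = [(0.1, 0.1)] * 5 + [(0.9, 0.8)] * 5
print(kmeans_keyframes(video, k=2))
```

With two clearly separated shots, one keyframe is picked from each, which is the behaviour the fidelity measure in the paper would then score.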

D. S. Guru, V. K. Jyothi, Y. H. Sharath Kumar
From Crisp to Soft Possibilistic and Rough Meta-clustering of Retail Datasets

This paper investigates the problem of meta-clustering real-world retail datasets based on possibility and rough set theories. We propose a crisp and then a soft meta-clustering method, and we compare and analyze the results of both methods on real-world retail datasets. The main aim of this paper is to demonstrate the performance gain of the soft meta-clustering method over the crisp one. Our novel methods combine the advantages of the meta-clustering process and the k-modes method under possibilistic and rough frameworks. Our approaches perform a double clustering (or meta-clustering) using two interdependent retail datasets, employing a modified version of the k-modes method. The new crisp meta-clustering method uses the possibilistic k-modes (PKM), while the soft method applies the k-modes under possibilistic and rough frameworks (KM-PR).

Asma Ammar, Zied Elouedi
Improved Symbol Segmentation for TELUGU Optical Character Recognition

In this paper, we propose two approaches to improving symbol or glyph segmentation in a Telugu OCR system. One of the critical aspects affecting the overall performance of a Telugu OCR system is the ability to segment a scanned document image into recognizable units. In Telugu, these units are usually connected components and are called glyphs. When a document is degraded, most connected-component-based segmentation algorithms fail. They give malformed glyphs that (a) are partial, resulting from breaks in a character due to uneven distribution of ink on the page or noise; or (b) are a combination of two or more glyphs because of smudging in print or noise. The former are labelled broken and the latter merged characters. Two new techniques are proposed to handle such characters. The first is based on a conventional machine learning approach in which a two-class SVM is used to segment a word into valid glyphs in two stages. The second is based on the spatial arrangement of the detected connected components, following the intuition that valid characters exhibit clear patterns in the spatial arrangement of their bounding boxes; if rules are defined to capture such arrangements, an algorithm can be designed to improve symbol segmentation. Testing is done on a Telugu corpus of about 5000 pages from nearly 30 books. Some of these books are of poor quality and provide very good test cases for the proposed approaches. The results show significant improvements over the developed Telugu OCR (Drishti system) on poor-quality books that contain many ill-formed glyphs.

Sukumar Burra, Amit Patel, Chakravarthy Bhagvati, Atul Negi
Semantic Attribute Classification Related to Gait

Human gait, as a behavioral biometric, has recently gained significant attention from computer vision researchers. However, some challenges hamper the use of this biometric in real applications, among them clothing variations and carried objects, which affect its accuracy. In this paper, we propose a semantic-classification-based method to deal with such challenges. Different predictive models are elaborated in order to determine the most relevant model for this task. Experimental results on the CASIA-B gait database show the performance of our proposed method.

Imen Chtourou, Emna Fendri, Mohamed Hammami
Classification of Dengue Gene Expression Using Entropy-Based Feature Selection and Pruning on Neural Network

Dengue virus is a growing problem in tropical countries, causing severe disease, especially in children. Different diagnostic methods, such as ELISA, Platelia, haemocytometer counts, RT-PCR, decision tree algorithms and Support Vector Machine algorithms, are used to diagnose dengue infection through the detection of the antibodies IgG and IgM, but recognition of IgM is not possible between thirty and ninety days after dengue virus infection. These methods may fail to give the correct result and require a volume of blood that is difficult to obtain, especially from children. To overcome these problems, this paper proposes a classification method for dengue infection based on the informative and most significant genes in the gene expression of dengue patients. The proposed method needs only the gene expression of a patient, which is easily obtained from skin, hair and so on. The classification accuracy has been evaluated against various benchmark algorithms, and the increase in classification accuracy for the proposed method is highly significant on dengue gene expression datasets when compared with benchmark algorithms and standard results.

Pandiselvam Pandiyarajan, Kathirvalavakumar Thangairulappan
Hardware Trojan: Malware Detection Using Reverse Engineering and SVM

Globalization, advances in information technology and the ubiquity of computerized systems have left the content of digital media greatly exposed. Security concerns due to Hardware Trojans have arisen, particularly for integrated circuits (ICs) and systems used in critical applications and cyber infrastructure. In the last decade, Hardware Trojans have been investigated extensively by the research community, with proposed solutions using either test-time or run-time analysis. Test-time analysis uses a reverse engineering based approach to detect Trojans, which is limited by the destruction of ICs during the detection process. This paper surveys Hardware Trojans from the most recent decade, attempts to capture the lessons learned for Hardware Trojan detection, and proposes an innovative and powerful reverse engineering based Hardware Trojan detection method using a Support Vector Machine (SVM). The SVM is trained on benchmark golden ICs and then used for the future detection of Trojan-infected ICs. The simulation of the proposed method was carried out using state-of-the-art tools on the openly accessible benchmark circuits ISCAS'85 and ISCAS'89, and demonstrates the Hardware Trojan detection accuracy of SVMs with different kernel functions. The results show that the radial-kernel SVM performs better than the linear and polynomial kernels.

Girishma Jain, Sandeep Raghuwanshi, Gagan Vishwakarma
Obtaining Word Embedding from Existing Classification Model

This paper introduces a new technique for inspecting relations between classes in a classification model. The method is built on the assumption that some classes are easier to distinguish than others: the harder the distinction, the more similar the objects are. A simple application demonstrating this approach was implemented, and the class representations obtained in a vector space are discussed. The created representation can be treated as a word embedding in which the words are represented by the classes. In addition, potential usages and characteristics are discussed, including a knowledge base.

Martin Sustek, Frantisek V. Zboril
A Robust Static Sign Language Recognition System Based on Hand Key Points Estimation

Sign language recognition is not only an essential communication tool between hearing and deaf people, but also a promising technique in human-computer interaction (HCI). This paper proposes a robust method based on an RGB sensor and hand key point estimation. Compared with depth sensors and wearable devices, an RGB sensor is smaller and simpler to operate. With the hand key point detection technique, the data can overcome unfavourable factors such as complex backgrounds, occlusion and varying angles. During the training step, 5 machine learning algorithms are used to classify 20 letters of the alphabet; the highest classification accuracies are achieved by the SVM and KNN algorithms, at 95.54% and 97.3% respectively. Finally, a real-time sign language recognition system with the SVM training model is built, and its recognition accuracy reaches 97%, which confirms that our method can effectively eliminate unfavourable factors.

Pengfei Sun, Feng Chen, Guijin Wang, Jinsheng Ren, Jianwu Dong
Multiobjective Genetic Algorithm for Minimum Weight Minimum Connected Dominating Set

A Connected Dominating Set (CDS) is a connected subgraph of a graph G with the property that any given node in G either belongs to the CDS or is adjacent to one of the CDS nodes. Minimum Connected Dominating Sets (MCDS), in which the number of CDS nodes is minimized, are of special interest in fields such as computer networks, biological networks and social networks, since they represent a minimal set of important nodes. Similarly, Minimum Weight Connected Dominating Sets (MWCDS), in which the connection weights among the CDS nodes are minimized, are of interest in many research applications. This work is based on the hypothesis that a CDS optimized for both minimum size and minimum weight would enhance performance in many applications where a CDS is used. Though there are a good number of approximate and heuristic algorithms for MCDS and MWMCDS, there is, to the best of our knowledge, no work that optimizes the generated CDS with respect to both size and weight. A Multiobjective Genetic Algorithm for the Minimum Weight Minimum Connected Dominating Set (MOGA-MWMCDS) is therefore proposed. Performance analysis in a Wireless Sensor Network (WSN) scenario indicates the efficiency of the proposed MOGA-MWMCDS and supports the advantage of using MWMCDS.
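The multiobjective core of such a genetic algorithm is Pareto dominance over the two objectives (CDS size and total weight). The sketch below shows only that selection primitive on hypothetical candidate scores, not the full MOGA or the CDS construction.

```python
def dominates(a, b):
    """Pareto dominance for minimization: a dominates b if it is no worse in
    every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep the candidates not dominated by any other candidate."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# candidate CDSs scored as (size, total weight) -- toy numbers
cands = [(4, 10.0), (5, 7.5), (4, 9.0), (6, 7.0), (5, 9.5)]
print(pareto_front(cands))
```

A MOGA such as the one proposed would rank its population by fronts like this one when selecting parents, so that neither objective is sacrificed for the other.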

Dinesh Rengaswamy, Subham Datta, Subramanian Ramalingam
Modeling of a System for fECG Extraction from abdECG

The objective of this paper is to move a step ahead in the investigation and creation of a feasible, cost-effective fetal ECG analysis tool for clinical practice, one that is easy for non-skilled personnel to use and provides actionable medical information such as the QRS complex of the fetal ECG and the fetal heart rate (HR). In this method, a composite abdominal ECG is subjected to a pre-processing stage involving filtering and normalization, then fed into a thresholding and peak finding stage to detect the maternal ECG peaks. The next stage constructs the MLE of the maternal ECG embedded in the abdominal ECG. The constructed MLE, which represents the maternal ECG, is then subtracted from the abdominal ECG to obtain the fetal ECG along with residual noise. This noise, which adulterates the fetal ECG, is removed by filtering in the post-processing stage, and thresholding and peak finding are applied to the post-processed signal to calculate the fetal HR. With an average accuracy of 76.8% and average sensitivity of 90.7%, this paper puts forth a promising possibility of implementing the proposed algorithm in a suitable hardware model.
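The thresholding and peak finding stage can be sketched on a synthetic signal. This is a crude illustration (amplitude threshold plus a minimum-gap refractory period); clinical R-peak detectors are considerably more elaborate.

```python
import math

def find_peaks(sig, thresh, min_gap):
    """Threshold-and-peak-find sketch: local maxima above thresh that are at
    least min_gap samples apart (a crude refractory period)."""
    peaks = []
    for i in range(1, len(sig) - 1):
        if sig[i] >= thresh and sig[i] >= sig[i - 1] and sig[i] > sig[i + 1]:
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    return peaks

# toy "ECG": low-amplitude baseline with a sharp beat every 20 samples
sig = [1.0 if i % 20 == 10 else 0.1 * math.sin(i) for i in range(100)]
print(find_peaks(sig, thresh=0.5, min_gap=5))
```

The interval between consecutive detected peaks, divided into the sampling rate, gives the heart rate; the paper applies this twice, once for maternal peaks and once on the residual fetal signal.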

Rolant Gini John, Ponmozhy Deepan Chakravarthy, K. I. Ramachandran, Pooja Anand
Supervised Learning Model for Combating Cyberbullying: Indonesian Capital City 2017 Governor Election Case

Technological growth has driven internet usage in Indonesia, especially on social media. However, cyberbullying, which is believed to have a greater negative effect on victims than traditional bullying, has come as a price of this changing trend. Eliminating or reducing such behaviour is not an easy task, for several reasons. Firstly, cyberbullying often appears in informal language, slang or local languages. Secondly, the content must be understood within a specific context. To build a tool to combat cyberbullying, a model and tested algorithm must be developed and confirmed by linguists. This research develops a cyberbullying corpus in order to train a supervised learning algorithm. The data were crawled from social media during the Indonesian capital city 2017 governor election, when cyberbullying between the candidates' supporters occurred, and were then preprocessed and classified by experts/linguists for the feature space design.

Putri Sanggabuana Setiawan, Muhammad Ikhwan Jambak, Muhammad Ihsan Jambak
Improving upon Package and Food Delivery by Semi-autonomous Tag-along Vehicles

This paper aims to improve current last-mile distribution and package delivery by introducing the basic concept of delivery using Semi-autonomous Tag-along Vehicles (SaTaVs) driven by their own agency and desires. SaTaVs are introduced as vehicles capable of traveling by following a leading vehicle, thus reducing the requirements for their autonomy while maintaining most of its advantages. The whole system is designed to maintain a long-term equilibrium, with each agent's goal being to maximize future investment.

Vaclav Uhlir, Frantisek Zboril, Jaroslav Rozman
A Novel Multi-party Key Exchange Protocol

Key exchange protocols are generally used to exchange a cryptographic session key such that no one except the communicating parties can deduce the key within its lifetime. In this paper we propose a protocol that can be used for multi-party key exchange among entities with fewer iterations; the same protocol can also be used for sharing messages among the entities. The key advantage of this protocol appears when it is used for multi-party key exchange, as it requires far fewer exchanges than the conventional two-move Diffie-Hellman key exchange protocol.
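For context, the classical baseline the paper compares against can be sketched as a chained three-party Diffie-Hellman exchange, in which each party forwards its exponentiated value around a ring until all derive g^(abc). This is a textbook sketch with insecure toy parameters, not the paper's protocol.

```python
import random

# toy public parameters for illustration only -- NOT cryptographically secure
p, g = 2**31 - 1, 7          # public prime modulus and generator
rng = random.Random(42)
a, b, c = (rng.randrange(2, p - 1) for _ in range(3))   # private exponents

# round 1: each party publishes g^x mod p
ga, gb, gc = pow(g, a, p), pow(g, b, p), pow(g, c, p)
# round 2: each party exponentiates its neighbour's public value
gab, gbc, gca = pow(ga, b, p), pow(gb, c, p), pow(gc, a, p)
# round 3: each party raises the value it receives to its own exponent,
# so all three derive the common key g^(abc) mod p
k_A = pow(gbc, a, p)
k_B = pow(gca, b, p)
k_C = pow(gab, c, p)
print(k_A == k_B == k_C)
```

Note that this naive chaining needs multiple rounds of messages per party; reducing that communication for n parties is the kind of saving the abstract claims for its protocol.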

Swapnil Paliwal, Ch. Aswani Kumar
NLP Based Phishing Attack Detection from URLs

In recent years, phishing has become an increasing threat in cyberspace, especially with the growing use of messaging and social networks. In a traditional phishing attack, users are lured to a bogus website carefully designed to look exactly like that of a famous bank, e-commerce site or social network, in order to capture personal information such as credit card numbers, usernames and passwords, and even money. Many phishers mount their attacks through emails that forward victims to the target website, and inexperienced users (and even experienced ones) can visit these fake websites and share their sensitive information. In a phishing attack analysis of 45 countries in the last quarter of 2016, China, Turkey and Taiwan were the countries most plagued by malware, with rates of 47.09%, 42.88% and 38.98% respectively. Detecting a phishing attack is a challenging problem, because these are semantics-based attacks that mainly exploit the computer user's vulnerabilities. In this paper, a phishing detection system is proposed that detects this type of attack using machine learning algorithms and identifies visual similarities with the help of natural language processing techniques. Many tests have been applied to the proposed system, and experimental results show that the Random Forest algorithm performs very well, with a success rate of 97.2%.
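Lexical URL features of the kind typically fed to such a classifier can be sketched as follows. The feature set and the keyword list are illustrative assumptions, not the paper's actual features, and the Random Forest itself is omitted.

```python
import math
from urllib.parse import urlparse

# hypothetical keyword list often associated with phishing URLs
SUSPICIOUS = {"login", "verify", "secure", "account", "update", "confirm"}

def url_features(url):
    """Extract simple lexical features from a URL for a phishing classifier."""
    host = urlparse(url).hostname or ""
    counts = {ch: url.count(ch) for ch in set(url)}
    # character-level Shannon entropy of the URL string
    entropy = -sum(c / len(url) * math.log2(c / len(url)) for c in counts.values())
    return {
        "length": len(url),
        "num_dots": host.count("."),
        "has_ip": host.replace(".", "").isdigit(),
        "num_hyphens": host.count("-"),
        "suspicious_words": sum(w in url.lower() for w in SUSPICIOUS),
        "entropy": round(entropy, 2),
    }

f = url_features("http://paypa1-secure-login.example.com/verify/account")
print(f["suspicious_words"], f["has_ip"])
```

A vector of such features per URL, with phishing/benign labels, is what a classifier like Random Forest would be trained on.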

Ebubekir Buber, Banu Diri, Ozgur Koray Sahingoz
Hand Pose Estimation System Based on a Cascade Approach for Mobile Devices

The rise in the use of mobile devices requires finding new ways to interact with them. Gestures are an effective way to interact with a mobile device and to issue commands to it. However, gesture recognition in this context constitutes a challenging task due to the limited computational capacities of such devices. In this work, we present a hand pose estimation system for mobile devices. The gesture is recognized using a boosting algorithm and Haar-like features. The system is designed for Android devices. The method consists of capturing gestures with a smartphone’s camera to recognize the hand sign, and builds on a real-time hand posture recognition algorithm for mobile devices. The aim of the system is to allow the mobile device to interpret hand signs made by users without the need to touch the screen.

Houssem Lahiani, Monji Kherallah, Mahmoud Neji
HMI Fuzzy Assessment of Complex Systems Usability

Testing and assessment are core activities in the development cycle of software applications, dedicated to evaluating interactive products in order to improve their quality by identifying various usability problems and defects. For complex systems such as multiagent ones, usability evaluation is still an open issue and requires new test techniques to assess autonomous and interactive behaviors. This paper investigates the evaluation of Human-Machine Interaction (HMI) in complex systems: a review of evaluation methods and domain-specific requirements is presented; a mechanism for the evaluation of HMI is proposed; an implementation of an automatic tool dedicated to the assessment of complex interactive systems based on fuzzy logic approaches is explained; and a solution to automate the evaluation of the HMI is suggested that reduces the need for expert assessment and fully integrates end users into the HMI evaluation loop. These are assessed in the urban transit control room in the city of Valenciennes, France. The comparative study covers acceptance, motivation and perceived happiness.

Ilhem Kallel, Mohamed Jouili, Houcine Ezzedine
A Novel Hybrid GA for the Assignment of Jobs to Machines in a Complex Hybrid Flow Shop Problem

This paper investigates a complex manufacturing production system encountered in the food industry. We consider a two-stage hybrid flow shop with two dedicated machines at stage 1 and several identical parallel machines at stage 2, under two simultaneous constraints: sequence-dependent family setup times and time lags. The optimization criterion is the minimization of the makespan. Given the complexity of the problem, a hybrid genetic algorithm (HGA) based on an improving heuristic is presented. We experiment with a new heuristic to assign jobs at the second stage. The proposed HGA is compared against a lower bound (LB) and against a mixed integer programming model (MIP). The results indicate that the proposed hybrid GA is effective and can produce near-optimal solutions in a reasonable amount of time.

Houda Harbaoui, Soulef Khalfallah, Odile Bellenguez-Morineau
Selecting Relevant Educational Attributes for Predicting Students’ Academic Performance

Predicting students’ academic performance is one of the oldest and most popular applications of educational data mining; it helps to estimate the unknown evaluation of a student’s performance. However, a huge amount of data in different formats and from multiple sources may contain a large number of features presumed irrelevant that can influence the prediction results. The main objective of this paper is to improve the effectiveness of a predictive model for students’ academic performance. For this purpose, we propose a methodology to carry out a comparative study evaluating the influence of feature selection techniques on the prediction of students’ academic performance. In our study, the F-measure is used to evaluate the effectiveness of the selected techniques. Two real data sources are used in this work: Mathematics and language courses. The outcomes are compared and discussed in order to identify the technique with the best influence on an accurate predictive model.

Abir Abid, Ilhem Kallel, Ignacio J. Blanco, Mounir Benayed
Detection and Localization of Duplicated Frames in Doctored Video

With the advent of high-quality digital video cameras and sophisticated video editing software, it is becoming easier to tamper with digital videos. A common form of manipulation is to clone or duplicate frames or parts of a frame to remove people or objects from a video. We describe a computationally efficient technique for detecting this form of tampering. After frame duplication is detected, the duplicated frame sequence is also localized. Both detection and localization of the forgery are found to perform better than other techniques.
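
The detect-then-localize pipeline can be sketched with content hashing: group identical frames, then collapse duplicated indices into consecutive runs. This is an illustrative simplification, assuming frames are exact byte-level copies; the paper's technique for near-duplicate frames is necessarily more involved.

```python
import hashlib
from collections import defaultdict

def find_duplicated_frames(frames):
    """Group frame indices by content hash; return only groups with duplicates.

    `frames` is a list of byte strings standing in for decoded video frames.
    """
    groups = defaultdict(list)
    for i, frame in enumerate(frames):
        groups[hashlib.sha256(frame).hexdigest()].append(i)
    return [idx for idx in groups.values() if len(idx) > 1]

def localize_sequences(indices):
    """Collapse sorted frame indices into (start, end) runs of consecutive frames."""
    runs, start = [], indices[0]
    for prev, cur in zip(indices, indices[1:]):
        if cur != prev + 1:
            runs.append((start, prev))
            start = cur
    runs.append((start, indices[-1]))
    return runs
```

For example, frames 1 and 3 holding the same content would be reported as one duplicated group, and a run of copied frames would localize to a single (start, end) pair.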

Vivek Kumar Singh, Pavan Chakraborty, Ramesh Chandra Tripathi
A Novel Approach for Approximate Spatio-Textual Skyline Queries

The rapid generation of spatio-textual data and the ever-increasing development of spatio-textual services have attracted the attention of researchers to retrieving desired points among the data. By returning all desired points that are not dominated by other points, skyline queries prune the input data and make it easy for the user to make the final decision. A point dominates another point if it is as good as that point in all dimensions and better in at least one dimension. This type of query is very costly in terms of computation. Therefore, this paper provides an approximate method to solve the spatio-textual skyline problem. It offers a trade-off between runtime and accuracy and improves the efficiency of the query. Experimental results show the acceptable accuracy and efficiency of the proposed method.
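
The dominance relation described above, and the naive exact skyline it induces, can be sketched as follows. This minimal version assumes lower values are better in every dimension (e.g. spatial distance and a textual-mismatch cost); the paper's approximate method trades this exact O(n²) scan for speed.

```python
def dominates(p, q):
    """p dominates q if p is no worse in every dimension and strictly
    better in at least one.  Lower values are better here by assumption."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    """Naive O(n^2) skyline: keep every point not dominated by another point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# (distance, textual mismatch) pairs: (4, 4) is dominated by (2, 2)
result = skyline([(1, 4), (2, 2), (3, 1), (4, 4)])
```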

Seyyed Hamid Aboutorabi, Nasser Ghadiri, Mohammad Khodizadeh Nahari
SMI-Based Opinion Analysis of Cloud Services from Online Reviews

Nowadays, online reviews have become one of the most useful sources on which cloud users can rely to make their purchasing decisions. This widespread adoption of online reviews nourishes online opinion-based information, and analyzing this kind of information helps considerably in the fastidious task of cloud user decision-making. The contribution of this paper can be divided into two major parts: (i) a new opinion-mining-based cloud service analysis approach for cloud service selection, which extracts and classifies user opinions from online reviews according to each cloud service property, and (ii) an opinion-based cloud service ontology to effectively detect cloud service properties in reviews based on Service Measurement Index (SMI) metrics. To illustrate the proposed approach, we conduct several experiments and present the relevant results.

Emna Ben-Abdallah, Khouloud Boukadi, Mohamed Hammami
Heuristics for the Hybrid Flow Shop Scheduling Problem with Parallel Machines at the First Stage and Two Dedicated Machines at the Second Stage

In this article, we present four heuristics for solving the hybrid flow shop scheduling problem, denoted HFS, with parallel machines at the first stage and two dedicated machines at the second stage. We compare the heuristics with lower and upper bounds generated by a mathematical model.

Zouhour Nabli, Soulef Khalfallah, Ouajdi Korbaa
Breast Density Classification for Cancer Detection Using DCT-PCA Feature Extraction and Classifier Ensemble

It is well known that breast density in mammograms may hinder the accuracy of breast cancer diagnosis. Although dense breasts should be processed in a special manner, most research has treated dense breasts almost the same as fatty ones; consequently, dense tissues in the breast may be diagnosed as a developed cancer. Dense and fatty tissue should instead be clearly distinguished before diagnosing a breast as cancerous or not. In this paper, we develop a system that automatically analyzes mammograms and identifies significant features. For feature extraction, we develop a novel scheme combining the two-dimensional discrete cosine transform (2D-DCT) and principal component analysis (PCA) to extract a minimal feature set from mammograms to differentiate breast density. These features are fed to three classifiers: a backpropagation Multilayer Perceptron (MLP), a Support Vector Machine (SVM) and K Nearest Neighbour (KNN). Majority voting on the outputs of the different machine learning tools is also investigated to enhance classification performance. The results show that features extracted using the DCT-PCA combination provide very high classification performance when a majority vote is taken over the outputs of MLP, SVM, and KNN.
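
The ensemble step can be sketched as hard majority voting over per-classifier predictions. The label names and the tie-breaking rule (first label counted wins) are illustrative assumptions, not details from the paper.

```python
from collections import Counter

def majority_vote(predictions):
    """Hard majority vote over equal-length label sequences, one per
    classifier (e.g. MLP, SVM, KNN).  With an odd number of voters and
    two classes there is always a strict majority."""
    n_samples = len(predictions[0])
    fused = []
    for i in range(n_samples):
        votes = Counter(clf[i] for clf in predictions)
        fused.append(votes.most_common(1)[0][0])
    return fused

mlp = ["dense", "fatty", "dense"]
svm = ["dense", "dense", "fatty"]
knn = ["fatty", "fatty", "dense"]
fused = majority_vote([mlp, svm, knn])
```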

Md Sarwar Morshedul Haque, Md Rafiul Hassan, G. M. BinMakhashen, A. H. Owaidh, Joarder Kamruzzaman
Scheduling Analysis and Correction of Periodic Real Time Systems with Tasks Migration

Satisfying a real-time system’s (RTS) timing constraints is critical to preventing or reducing possible human and economic harm. Multiprocessor RTS scheduling is increasingly difficult, especially as such systems become more complex and dynamic. In this context, scheduling correction turns out to be necessary for better schedulability: once done at an early stage of the scheduling cycle, it can effectively reduce potential temporal faults, thereby avoiding the risk of a very expensive complete reiteration of the scheduling cycle and reducing the temporal cost. The present paper focuses on providing a scheduling correction solution that can be integrated into scheduling analysis. The proposed correction method rectifies the partitioning if the RTS is detected to be non-schedulable, by migrating some tasks from the non-schedulable, overloaded partitions toward under-loaded ones.

Faten Mrabet, Walid Karamti, Adel Mahfoudhi
Generating Semantic and Logic Meaning Representations When Analyzing the Arabic Natural Questions

In this paper, we provide a performance analysis of questions and describe the different tasks involved for the Arabic language. Regardless of the approach used to study this language, the first step is to analyze the question in order to extract all the information exploited by the processes of searching for documents and selecting relevant passages. Question analysis shows that few studies provide semantic and logic-inference based approaches for Arabic. After extracting the keywords, determining the declarative form, and generating the focus and the expected answer type, we transform the questions into semantic representations via the conceptual graph formalism and into logic representations using a transformation algorithm. This analysis is organized into three modules: the first performs preprocessing of the question; the second generates a question transformation; and the third provides a linguistic analysis. The goal of the first module is to extract the main features from each question (list of keywords, focus and expected answer type); the focus and the keywords are identified to retrieve short, relevant answers located in small passages containing the accurate answer. The second module transforms the question into a declarative form. The third module performs linguistic analyses that are used in the graph construction and logic representation phases. Lastly, we assess the analysis process with examples of 5 types of questions collected in our corpus.

Wided Bakari, Patrice Bellot, Mahmoud Neji
An Arabic Question-Answering System Combining a Semantic and Logical Representation of Texts

In this paper, we present the overall structure of our system for generating answers to questions in Arabic, named NArQAS (New Arabic Question Answering System). This system aims to develop and evaluate the contribution of reasoning procedures, natural language processing techniques and recognizing-textual-entailment technology to producing precise answers to natural language questions. We also detail its operating architecture. In particular, our system is seen as a contribution to, rather than a rival of, traditional systems based on approaches that extensively use information retrieval and natural language processing techniques. We present the evaluation of the outputs of each of these components based on a collection of questions and texts retrieved from the Web. The NArQAS system was built, and experiments showed good results, with an accuracy of 68% for answering factual questions from the Web.

Mabrouka Ben-Sghaier, Wided Bakari, Mahmoud Neji
Algorithms for Finding Maximal and Maximum Cliques: A Survey

Finding maximal and maximum cliques are well-known problems in graph theory. They have various applications in several fields, such as social network analysis, bioinformatics and graph coloring, and have attracted the interest of the research community. The main goal of this paper is to present a comprehensive review of the existing approaches for finding maximal and maximum cliques. It presents a comparative study of the existing algorithms based on several criteria and identifies the critical challenges, aiming to motivate the future development of more efficient algorithms.

Faten Fakhfakh, Mohamed Tounsi, Mohamed Mosbah, Ahmed Hadj Kacem
K4BPMN Modeler: An Extension of BPMN2 Modeler with the Knowledge Dimension Based on Core Ontologies

In this paper, we aim to enrich the graphical representation of sensitive business processes (SBPs) in order to identify and localize the crucial knowledge mobilized and created by these processes. To this end, we develop a specific Eclipse plug-in, called «K4BPMN Modeler: Knowledge for Business Process Modeling Notation Modeler», implementing and supporting the BPMN extension «BPMN4KM». This extension was designed in a previous research project based on core ontologies in order to integrate all relevant aspects of the knowledge dimension into sensitive business process models. We also illustrate the application of some of the extended concepts on a model of a medical care process.

Molka Keskes, Mariam Ben Hassen, Mohamed Turki
Exploring the Integration of Business Process with Nosql Databases in the Context of BPM

A business process is defined as a set of interrelated tasks or activities that fulfills one of an organization’s objectives. Business process modeling can be applied in several domains, such as healthcare, business and education, and facilitates understanding of how the corresponding systems function. Steps in the process need input data and generate new output data. Business Process Management Systems (BPMS) serve to model, configure and execute business processes. The latter face new challenges in the big data era: data in business processes originate from multiple sources in a variety of formats and are generated at high speed, and hence need, on the one hand, a storage infrastructure gathering all data types and forms and, on the other hand, an analytics infrastructure that makes those data ready for analysis. Therefore, given the flexibility and dynamics of executing a learning process, Not Only SQL (NoSQL) databases should be taken into consideration, and combining business processes with NoSQL databases becomes an emerging and critical research area. In this paper, we propose the adoption of a NoSQL database schema with MongoDB to model learning data in the context of MOOCs. We then explore the integration of such a database with the designed and configured massive learning process.

Asma Hassani, Sonia Ayachi Ghannouchi
An Effective Heuristic Algorithm for the Double Vehicle Routing Problem with Multiple Stack and Heterogeneous Demand

In this work, we address the Double Vehicle Routing Problem with Multiple Stacks and Heterogeneous Demand, a pickup and delivery problem. The objective is to determine a set of routes for a fleet of vehicles that meets the demand of a set of customers while minimizing the distance travelled by the vehicles and ensuring the feasibility of the loading plan. We propose a heuristic approach based on Simulated Annealing for solving the problem. The computational results show that our heuristic is able to find high-quality solutions in short computational time when compared to the exact methods found in the literature.
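
The Simulated Annealing metaheuristic at the core of the approach follows a standard accept/reject loop. The sketch below is a generic skeleton under assumed parameters (geometric cooling, user-supplied neighbor move); the routing-specific neighborhood and loading-feasibility checks of the paper are not modeled here.

```python
import math
import random

def simulated_annealing(initial, cost, neighbor, t0=100.0, cooling=0.95,
                        iters=1000, seed=0):
    """Generic SA: always accept improving moves; accept worsening moves
    with probability exp(-delta / T), where T decays geometrically."""
    rng = random.Random(seed)
    current, best = initial, initial
    temp = t0
    for _ in range(iters):
        candidate = neighbor(current, rng)
        delta = cost(candidate) - cost(current)
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            current = candidate
            if cost(current) < cost(best):
                best = current
        temp *= cooling
    return best
```

As a toy usage, minimizing (x - 3)² over integers with a ±1 neighbor move converges to x = 3.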

Jonatas B. C. Chagas, André G. Santos
Named Entity Recognition from Gujarati Text Using Rule-Based Approach

Named Entity Recognition (NER) is an application of Natural Language Processing (NLP) and an activity of Information Extraction. NER is used for automated text processing in various industries and is a key concept in academia, artificial intelligence, robotics, bioinformatics and much more. It is an essential step in core NLP tasks such as machine translation, question answering and document summarization. Most NER work has been done for European languages, while NER work exists for only a few Indian constitutional languages; progress is hindered by challenges such as lack of resources, ambiguity in the language and rich morphology. In this paper, rules are defined using a rule-based approach to identify various named entities in a text document. Based on the defined rules, three different test cases were computed on the training dataset, achieving 70% accuracy.
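
The flavor of rule-based NER can be sketched with regular-expression rules. The patterns below are purely illustrative and written for English readability; the paper's rules target Gujarati text, and the entity labels and trigger words here are assumptions.

```python
import re

# Toy rule set in the spirit of rule-based NER (illustrative English patterns).
RULES = [
    ("PERSON", re.compile(r"\b(?:Mr|Mrs|Dr|Shri)\.?\s+[A-Z][a-z]+")),
    ("LOCATION", re.compile(r"\b(?:in|at|near)\s+([A-Z][a-z]+)")),
    ("DATE", re.compile(r"\b\d{1,2}\s+(?:January|February|March|April|May|June|"
                        r"July|August|September|October|November|December)\b")),
]

def extract_entities(text):
    """Apply each rule in turn and collect (label, span) pairs."""
    entities = []
    for label, pattern in RULES:
        for match in pattern.finditer(text):
            span = match.group(1) if match.groups() else match.group(0)
            entities.append((label, span))
    return entities

ents = extract_entities("Dr. Patel spoke in Ahmedabad on 14 December.")
```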

Dikshan N. Shah, Harshad B. Bhadka
A Meta-modeling Approach to Create a Multidimensional Business Knowledge Model Based on BPMN

Business processes are everywhere, and to be more efficient, organizations look for good business process modeling. When modeling a business process, there is a great deal of knowledge, each piece of which presents a part of the process at a certain level of understanding. The issue addressed in this paper is how to formalize the implicit pieces of knowledge figuring in a business process so as to construct a business knowledge model that can be treated at a high level of understanding. This paper contributes a meta-modeling approach whose principle is to transform a business process model into a business knowledge model. The purpose of such an approach is to provide a way to automatically build a business ontology based on straightforward processing of the business knowledge dimensions.

Sonya Ouali, Mohamed Mhiri, Faiez Gargouri
Toward a MapReduce-Based K-Means Method for Multi-dimensional Time Serial Data Clustering

Time series data is a sequence of real numbers representing the measurements of a real variable at equal time intervals, and processing it at large scale faces several bottlenecks. In this paper, we first propose a K-means method for multi-dimensional time series clustering. As an improvement, the MapReduce framework is used to implement this method in parallel. Different versions of K-means with several distance measures are compared, and the experiments show that MapReduce-based K-means achieves better speedup as the scale of the data grows.
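
One K-means iteration maps naturally onto MapReduce: the map phase emits (nearest-centroid, point) pairs and the reduce phase averages each group into a new centroid. The sketch below runs the two phases sequentially in one process for illustration and assumes squared Euclidean distance; a real MapReduce job would shard the map phase across workers.

```python
from collections import defaultdict

def nearest(point, centroids):
    """Index of the closest centroid under squared Euclidean distance."""
    return min(range(len(centroids)),
               key=lambda j: sum((p - c) ** 2 for p, c in zip(point, centroids[j])))

def kmeans_iteration(points, centroids):
    """One MapReduce-style K-means step on tuples of equal dimension."""
    groups = defaultdict(list)
    for point in points:                  # map: emit (centroid index, point)
        groups[nearest(point, centroids)].append(point)
    new_centroids = list(centroids)
    for j, members in groups.items():     # reduce: average each group
        dims = len(members[0])
        new_centroids[j] = tuple(sum(m[d] for m in members) / len(members)
                                 for d in range(dims))
    return new_centroids

centroids = kmeans_iteration([(0, 0), (0, 1), (10, 10), (10, 11)],
                             [(0, 0), (10, 10)])
```

Iterating this step until the centroids stop moving yields the full clustering.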

Yongzheng Lin, Kun Ma, Runyuan Sun, Ajith Abraham
Mining Communities in Directed Networks: A Game Theoretic Approach

Detecting communities in directed networks is a challenging task. Many existing community detection algorithms are designed to disclose the community structure of undirected networks. These algorithms can be applied to directed networks by transforming them into undirected ones; however, ignoring the direction of the links loses the information concealed in them and ends up with an imprecise community structure. In this paper, we retain the direction of the graph and propose a cooperative game to capture the interactions among the nodes of the network. We develop a greedy community detection algorithm to disclose the overlapping communities of a given directed network. Experimental evaluation on synthetic networks illustrates that the algorithm is able to disclose the correct number of communities with good community structure.

Annapurna Jonnalagadda, Lakshmanan Kuppusamy
A Support Vector Machine Based Approach to Real Time Fault Signal Classification for High Speed BLDC Motor

In this paper we propose a new methodology for designing an intelligent incipient-fault signal classifier. The design has been validated with a state observer that monitors the valve controller output signal and communicates the health status of the embedded-processor-based valve controller at the right time, without any false alert signal, to the actuator through an FPGA processor. This is achieved using an SVM-based classifier and time-duration-based state machine modeling. In the design of the fault-aware controller, a one-against-all strategy is selected for the classification tool due to its good generalization properties. The performance of the proposed system is validated by applying it to induction motor fault diagnosis. Experimental results for a BLDC motor valve controller (of the kind widely used in aircraft) and computer simulations indicate that the proposed scheme for intelligent control based on signal classification is simple and robust, with good accuracy.

Tribeni Prasad Banerjee, Ajith Abraham
Automatic Identification of Malaria Using Image Processing and Artificial Neural Network

Malaria is a mosquito-borne infectious disease that is diagnosed by visual microscopic assessment of Giemsa-stained blood smears. Manual detection of malaria is very time consuming and inefficient, so automating the detection of malarial cells would be very beneficial in the treatment of patients. This paper investigates the possibility of automating the malaria diagnosis process through the development of a Graphical User Interface (GUI) based detection system. The detection system carries out segmentation of red blood cells (RBC) and creates a database of these RBC sample images. The GUI-based system extracts features from a smear image, which are used to execute a segmentation method for that particular blood smear image. The segmentation technique proposed in this paper is based on the processing of a thresholded binary image, with the watershed transformation used as the principal method to separate cell compounds. The approach described in this study was found to give satisfactory results for smear images with various qualitative characteristics, although some problems were noted with over- or under-segmentation of cells in some smear images. The paper also describes the feature extraction technique used to determine the important features of the RBC smear images. These features were used to differentiate between malaria-infected and normal red blood cells. A set of features based on shape, intensity, contrast and texture was proposed and used as input to a neural network for identification. The study concluded that some of these features could be successfully used for malaria detection.

Mahendra Kanojia, Niketa Gandhi, Leisa J. Armstrong, Pranali Pednekar
Comparative Analysis of Adaptive Filters for Predicting Wind-Power Generation (SLMS, NLMS, SGDLMS, WLMS, RLMS)

Adaptive filters play an important role in prediction, an ability that has been successfully used in predicting wind-power generation. This paper compares adaptive filtering algorithms in order to determine which filter produces the least error when predicting wind-power generation. Algorithms such as Standard Least Mean Square (SLMS), Normalized Least Mean Square (NLMS), Weighted Least Mean Square (WLMS), Stochastic Gradient Descent Least Mean Square (SGDLMS), and Recursive Least Squares (RLS) are implemented. The performance of the filters is evaluated using actual operational power data from a wind farm in America. Four performance criteria are used in the study of these algorithms: Mean Absolute Error, R-squared value, computational complexity, and stability of the system.
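
The core weight-update rules of two of the compared filters can be sketched in a few lines. This is a minimal illustration on a generic signal: the filter order and step sizes are assumed values, not the settings used in the paper.

```python
def slms_predict(signal, order=2, mu=0.05):
    """Standard LMS: w <- w + mu * e * x, where e is the prediction error.
    Returns the one-step-ahead predictions and the final weights."""
    w = [0.0] * order
    preds = []
    for n in range(order, len(signal)):
        x = signal[n - order:n]                       # most recent `order` samples
        y = sum(wi * xi for wi, xi in zip(w, x))      # prediction
        e = signal[n] - y                             # prediction error
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]
        preds.append(y)
    return preds, w

def nlms_step(w, x, target, mu=0.5, eps=1e-8):
    """Normalized LMS: the step size is divided by the input power ||x||^2,
    which makes convergence insensitive to the signal's scale."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    e = target - y
    norm = sum(xi * xi for xi in x) + eps
    return [wi + (mu / norm) * e * xi for wi, xi in zip(w, x)], e
```

On a constant signal, the SLMS predictions converge toward the signal value as the weights adapt; with mu = 1, a single NLMS step cancels the error exactly (up to the eps regularizer).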

Ashima Arora, Rajesh Wadhvani
Blind Write Protocol

The current approach to handling interleaved write operations while preserving consistency in relational database systems relies on a locking protocol, and the application system has no other option for dealing with interleaved write operations. On the other hand, allowing more write operations to be interleaved increases the throughput of the database. Since each application system has its own consistency requirements, the database system should provide another protocol that allows more write operations to be interleaved. Therefore, this paper proposes a blind write protocol as a complement to current concurrency control.

Khairul Anshar, Nanna Suryana, Noraswaliza Binti Abdullah
Ontology Visualization: An Overview

As the main way to represent knowledge for full machine understanding, ontologies are widely used in different application domains. This full machine understanding, however, makes them harder for humans to understand, which necessitates ontology visualization tools and has resulted in a large number of approaches and tools. Along with this development, the number of published research papers related to ontology visualization is growing rapidly. To this end, in this paper, we introduce a systematic review of the different directions in ontology visualization. In particular, we start by describing the different application domains that make use of ontology visualization. Then, we propose a generic visualization pipeline that incorporates the main steps in ontology visualization and can later serve as the main criteria when comparing and discussing visualization tools. Through this review, we aim to introduce a general visualization pipeline that is useful both when comparing ontology visualization tools and when developing a new visualization technique. Finally, the paper describes future trends and research issues that still need to be addressed.

Nassira Achich, Bassem Bouaziz, Alsayed Algergawy, Faiez Gargouri
Towards a Contextual and Semantic Information Retrieval System Based on Non-negative Matrix Factorization Technique

With the fast pace of technological evolution, information retrieval systems must confront a large amount of textual data in order to retrieve information pertinent to users’ needs or queries. Information retrieval systems also depend on the user’s query, and users often find it difficult to express their needs. To address these problems, we propose in this paper a new approach that provides a contextual and semantic information retrieval system. Our proposed system relies first on the NNMF (Non-negative Matrix Factorization) technique for data analysis, in order to represent textual data with new, compact representations and to organize the data into categories. Second, our system enriches the user’s query with new semantic keywords that keep the same context as the original query, by exploiting the results of the data analysis technique together with the LSM method, which defines semantic relationships between terms. Experimental results on the ClefEhealth-2014 database demonstrate the performance of our proposed approach on large-scale text collections.

Nesrine Ksentini, Mohamed Tmar, Faïez Gargouri
Design and Simulation of Multi-band M-shaped Vivaldi Antenna

The Vivaldi antenna is a co-planar broadband antenna made from a dielectric plate metalized on both sides. The double-sided printed circuit board used in the design of this type of antenna makes it cost-effective at microwave frequencies exceeding 1 GHz. This type of antenna is used across broad frequency bands, especially the ultra-wide band, because it is easy to manufacture with standard PCB production. In this paper, a new model of Vivaldi antenna is designed, based on the dual Vivaldi with an M shape; the design follows the standard antenna parameters, with the dimensions and size of the designed antenna given as functions of the wavelength. After the design process, an experimental setup was built to obtain the practical parameters of the antenna, and the design was optimized for the best VSWR over the operating frequencies. The design and simulation results, such as the radiation pattern and the VSWR for many operating frequencies, are presented, along with a comparison with other Vivaldi antennas to show the improvements in the proposed design; finally, some conclusions are presented.

Jalal J. Hamad Ameen
Performance Evaluation of Openflow SDN Controllers

Software Defined Networking (SDN) is a recent networking paradigm being adopted by stakeholders in a big way; the concept works toward dramatically reducing network deployment and management costs. Controllers, also known as the Network Operating System of an SDN, are critical to the success of SDNs, and many open source controllers are available for use. In this paper, the performance of four popular OpenFlow-based controllers has been evaluated on various metrics: latency, bandwidth utilization, packet transmission rate, jitter and packet loss have been calculated for TCP and UDP traffic on varying network sizes, topologies and controllers. Floodlight is among the best performing when compared with the reference controller.

Sangeeta Mittal
Monitoring Chili Crop and Gray Mould Disease Analysis Through Wireless Sensor Network

The purpose of this work is to design and develop an agricultural monitoring system using a Wireless Sensor Network (WSN) to remotely increase the productivity and quality of chili farming. Temperature and humidity levels are the most important factors for the productivity, growth, and quality of the chili plant, and it is necessary to observe them continuously in real time. Farmers or agriculture experts can view the measurements through a website or an Android app. The farmer is immediately notified when any critical change is detected in one of the measurements, which helps the farmer assess the possible disease range. With the continuous monitoring of several environmental parameters, the grower can determine optimal environmental conditions to achieve maximum crop productivity and to save remarkable amounts of energy.

Sana Shaikh, Amiya Kumar Tripathy, Gurleen Gill, Anjali Gupta, Riya Hegde
Intelligent AgriTrade to Abet Indian Farming

The present Indian agricultural system incorporates advanced services like GPS and weather sensors, which communicate with each other and support analysis of ground-level real-time or near-real-time data. Information and Communication Technology (ICT) provides services to agricultural systems in the form of the cloud. Agriculture-Cloud and ICT offer farmers knowledge about ultra-modern farming, pricing, fertilizers, and pest/disease management, while scientists and experts working at agriculture research stations and extensions can add their findings and recommendations regarding up-to-date cultivation and related practices. In this work an attempt has been made to design and implement a simple cloud-based agriculture application, built on AgriTrade in the cloud, that will enrich agricultural production and also boost the accessibility of data from field-level investigations and from the laboratory. Such a tool would cut cost and time and increase agricultural production in a relatively faster and easier way. The system is intelligent enough to report environment statistics to the farmer for an improved approach to agriculture.

Kalpita Wagaskar, Nilakshi Joshi, Amiya Kumar Tripathy, Gauri Datar, Suraj Singhvi, Rohan Paul
Evaluating the Efficiency of Higher Secondary Education State Boards in India: A DEA-ANN Approach

This study proposes the integration of two nonparametric methodologies, Data Envelopment Analysis (DEA) and Artificial Neural Networks (ANN), for efficiency evaluation. The paper first outlines prior research in the education sector using DEA and ANN. A case study is then conducted on various State Boards (used as Decision Making Units, DMUs) in the Indian Higher Secondary Education System, in which DEA is integrated with the soft computing technique ANN to improve discriminatory power, ranking and future prediction. The two methods are compared on their practical use as performance measurement tools on a set of Indian State Boards with multiple input and output criteria. The results demonstrate that the DEA-ANN integration optimizes performance and increases the discriminatory power and ranking of the decision making units.

Natthan Singh, Millie Pant
Design of Millimeter-Wave Microstrip Antenna Array for 5G Communications – A Comparative Study

Millimeter wave communication has been found to be a suitable technology for future 5G communications. A beamforming antenna is chosen to increase the link capacity, considering the atmospheric losses at millimeter wave frequencies. This work compares the performance of different microstrip antenna arrays in terms of return loss bandwidth, gain, half power beamwidth, side lobe level, etc. The results are simulated using commercial electromagnetic software, and a suitable array structure is suggested from the study.

Saswati Ghosh, Debarati Sen
Simulation Design of Aircraft CFD Based on High Performance Parallel Computation

This paper introduces the application of high performance parallel computing to CFD numerical simulation. The architecture and implementation method of large-scale, high-performance aircraft CFD simulation are presented after analyzing research on the complex flow field in multi-zone parallel computing.

Yinfen Xie
Determining the Optimum Release Policy Through Differential Evolution: A Case Study of Mula Irrigation Project

The present study shows an implementation of the Differential Evolution (DE) algorithm for determining the optimum flow policy for reservoir operation. The case study is carried out for the Mula Major Irrigation Project on the river Mula (Godavari basin), Ahmednagar district, Maharashtra. The problem is formulated as an unconstrained optimization model with 12 variables, as the data collected cover one year.

Bilal, Millie Pant, Deepti Rani
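The abstract above names the Differential Evolution algorithm but not its implementation details. As an illustration only, here is a minimal pure-Python sketch of the classic DE/rand/1/bin scheme applied to a toy 12-variable release problem; the objective, demand figures and parameter values are invented stand-ins, not the authors' model:

```python
import random

def differential_evolution(objective, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=300, seed=42):
    """Minimise `objective` over the box `bounds` with DE/rand/1/bin."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fitness = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # pick three distinct individuals other than i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            mutant = [pop[a][k] + F * (pop[b][k] - pop[c][k]) for k in range(dim)]
            # binomial crossover with one guaranteed mutant gene
            jrand = rng.randrange(dim)
            trial = [mutant[k] if (rng.random() < CR or k == jrand) else pop[i][k]
                     for k in range(dim)]
            # clip the trial vector back into the bounds
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            f = objective(trial)
            if f <= fitness[i]:          # greedy selection
                pop[i], fitness[i] = trial, f
    best = min(range(pop_size), key=lambda i: fitness[i])
    return pop[best], fitness[best]

# Toy stand-in for a release-policy objective: 12 monthly releases,
# penalising squared deviation from a hypothetical demand of 1.0 per month.
demand = [1.0] * 12
obj = lambda x: sum((xi - d) ** 2 for xi, d in zip(x, demand))
best_x, best_f = differential_evolution(obj, [(0.0, 2.0)] * 12)
```

A real reservoir model would replace `obj` with release-deficit penalties derived from inflow and storage data.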
Characterising the Impact of Drought on Jowar (Sorghum spp) Crop Yield Using Bayesian Networks

Drought is a complex natural hazard that affects the agricultural sector on a large scale. Although predicting drought can be difficult, understanding drought patterns at the temporal and spatial level can help farmers make better decisions concerning the growth of their crops and the impact of different levels of drought. This paper studied the use of Bayesian networks to characterise the impact of drought on the jowar (Sorghum spp) crop in Maharashtra state of India. The study area comprised 25 districts of Maharashtra, selected on the basis of data availability. Parameters such as rainfall; minimum, maximum and average temperature; potential evapotranspiration; reference crop evapotranspiration; and crop yield data were obtained for the period 1983 to 2015. Bayes Net and Naïve Bayes classifiers were applied to the datasets using the Weka analysis tool. The results showed that the accuracy of Bayes Net was higher than that of the Naïve Bayes method. This probabilistic model can further be used to manage and mitigate drought conditions and will hence help farmers plan their cropping activities.

Shubhangi S. Wankhede, Leisa J. Armstrong
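The study above uses Weka's Naïve Bayes classifier; as a rough illustration of the underlying idea (not the authors' pipeline), this is a minimal Gaussian Naïve Bayes on invented rainfall/temperature data, assuming class-conditional feature independence:

```python
import math
from collections import defaultdict

class GaussianNB:
    """Minimal Gaussian Naive Bayes: per-class mean/variance per feature."""
    def fit(self, X, y):
        groups = defaultdict(list)
        for xi, yi in zip(X, y):
            groups[yi].append(xi)
        self.stats, self.priors = {}, {}
        for label, rows in groups.items():
            n = len(rows)
            self.priors[label] = n / len(X)
            cols = list(zip(*rows))
            means = [sum(c) / n for c in cols]
            # small epsilon keeps the variance strictly positive
            variances = [sum((v - m) ** 2 for v in c) / n + 1e-9
                         for c, m in zip(cols, means)]
            self.stats[label] = (means, variances)
        return self

    def predict(self, x):
        def log_posterior(label):
            means, variances = self.stats[label]
            lp = math.log(self.priors[label])
            for v, m, var in zip(x, means, variances):
                lp += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
            return lp
        return max(self.stats, key=log_posterior)

# Hypothetical data: (seasonal rainfall mm, max temperature C) -> yield class.
X = [(900, 32), (950, 31), (400, 39), (380, 41), (870, 33), (420, 40)]
y = ["normal", "normal", "low", "low", "normal", "low"]
model = GaussianNB().fit(X, y)
```

A drought-year observation such as `model.predict((410, 40))` then falls to the class whose Gaussians it fits best.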
Linear Programming Based Optimum Crop Mix for Crop Cultivation in Assam State of India

Assam, the gateway to the north east, is an agrarian state with rice as the staple crop. Nearly 89% of the state's population of about 3 crores lives in rural areas. The productivity of major crops like rice, pulses and oilseeds is yet to reach an acceptable level despite various efforts in the past. An optimum crop mix for Assam has been developed using linear programming to maximize net returns while ensuring the best use of land and other natural resources of the state. Attempts have been made to obtain crop combinations under rainfed and irrigated areas separately in both the kharif and rabi seasons. The results from the linear programming based model indicate higher returns than the existing crop plan (Rs. 39.30 hundred crore in the optimum plan against Rs. 34.16 hundred crore in the existing plan at market prices). Sensitivity analysis also indicated that profit can be increased substantially by bringing fallow land under irrigation during the rabi season. This implies that there is potential for bringing more land under double or triple cropping, considering the vast water availability in the state.

Rajni Jain, Kingsly Immaneulraj, Lungkudailiu Malangmeih, Nivedita Deka, S. S. Raju, S. K. Srivastava, J. P. Hazarika, Amrit Pal Kaur, Jaspal Singh
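The crop-mix model above is a linear program; its actual coefficients are not reproduced here. As an illustration of the formulation, this sketch solves a hypothetical two-crop LP (all numbers invented) by enumerating constraint intersections, since a bounded LP attains its optimum at a vertex of the feasible polygon:

```python
from itertools import combinations

# Hypothetical two-crop model (not the paper's data):
# maximise net return 30*rice + 45*pulses   (thousand Rs per ha)
# subject to: rice + pulses      <= 100     (land, ha)
#             2*rice + 4*pulses  <= 320     (water units)
#             rice, pulses       >= 0
# Each constraint is stored as (a, b, c) meaning a*x + b*y <= c.
constraints = [(1, 1, 100), (2, 4, 320), (-1, 0, 0), (0, -1, 0)]
profit = lambda x, y: 30 * x + 45 * y

def feasible(x, y, eps=1e-9):
    return all(a * x + b * y <= c + eps for a, b, c in constraints)

def intersect(c1, c2):
    """Intersection of the two boundary lines, or None if parallel."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

# Enumerate all pairwise intersections and keep the feasible vertices.
vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) and feasible(*p)]
best = max(vertices, key=lambda p: profit(*p))
```

With these numbers the optimum is a genuine mix (40 ha rice, 60 ha pulses); a state-level model with many crops and seasons would instead use a simplex solver.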
eDWaaS: A Scalable Educational Data Warehouse as a Service

University management is perpetually in the process of devising policies to improve the quality of service. The intellectual growth of the students and the popularity of the university are some of the major areas that management strives to improve upon. Relevant historical data are needed to support any such decision. Furthermore, providing data to various university ranking frameworks has become a frequent activity in recent years; the format of such requirements changes frequently, which demands considerable manual effort. Maintaining a data warehouse can be a solution to this problem. However, both in-house and outsourced implementations of a dedicated data warehouse may not be cost-effective or smart solutions. This work proposes an educational data warehouse as a service (eDWaaS) model to store historical data for multiple universities. The proposed multi-tenant schema enables the universities to maintain their data warehouse cost-effectively. It also addresses the scalability issues in implementing such a data warehouse as a service model.

Anupam Khan, Sourav Ghosh, Soumya K. Ghosh
Online Academic Social Networking Sites (ASNSs) Selection Through AHP for Placement of Advertisement of E-Learning Website

Over the years the use of Social Networking Sites (SNSs) has grown tremendously. This popularity has created an incessant need for marketers to concentrate advertising on these websites, so the need of the hour is to develop advertising strategies that are effective and efficient. With new SNSs being launched every day, the important decision is to determine which SNS is the most appropriate for a firm's advertising. Many forms of SNS are available with different purposes, allowing people to interact with each other: social connection (Facebook, Google+, etc.), multimedia sharing (YouTube, Flickr, etc.), professional (LinkedIn, Classroom 2.0, etc.), hobbies (On My Bloom, Pinterest, etc.), academic (ResearchGate, Academia.edu, etc.), and so on. Owing to the increase in the types of SNSs, the evaluation and selection of the right SNS has become a complex problem for advertisers. In this research, the selection of Academic Social Networking Sites (ASNSs) for advertising an E-learning website is considered as a Multi Attribute Decision Making (MADM) problem, and the Analytical Hierarchy Process (AHP) methodology has been adopted for the selection. A real life case study is also presented to show the applicability of the proposed methodology.

Meenu Singh, Millie Pant, Arshia Kaul, P. C. Jha
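AHP, named above, derives priority weights from a pairwise comparison matrix and checks their consistency. The following sketch uses the row geometric mean approximation with an invented comparison matrix (the paper's actual criteria and judgements are not given here):

```python
import math

def ahp_weights(M):
    """Approximate AHP priority vector via the row geometric mean method."""
    n = len(M)
    gm = [math.prod(row) ** (1.0 / n) for row in M]
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(M, w):
    """CR = CI / RI; CR < 0.10 is the usual acceptability threshold."""
    n = len(M)
    # estimate lambda_max by averaging (M w)_i / w_i over the rows
    Mw = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Mw[i] / w[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random consistency indices
    return ci / RI

# Hypothetical pairwise comparison of three ASNSs on a single criterion,
# using Saaty's 1-9 scale (M[i][j] = how much site i is preferred to site j).
M = [[1,     3,   5],
     [1 / 3, 1,   2],
     [1 / 5, 1 / 2, 1]]
w = ahp_weights(M)
```

The largest weight identifies the preferred site; in a full AHP the same step is repeated per criterion and the results are aggregated by the criteria weights.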
Fingerprint Based Gender Identification Using Digital Image Processing and Artificial Neural Network

Every person has a unique fingerprint which can be used for identification. Fingerprints are widely used in forensic work in criminal investigations, where the ability to distinguish fingerprint samples by gender can reduce the number of persons of interest. This paper discusses a system implemented for gender identification based on fingerprints. Digital Image Processing and Artificial Neural Network (ANN) techniques were used to implement the system. Preprocessing techniques such as cropping, resizing and thresholding were carried out on each image. Feature extraction was carried out on each pre-processed image using the Discrete Wavelet Transform (DWT) at 6 levels of decomposition. The extracted features were used to train an ANN based on the Back Propagation algorithm. Fingerprint images were sourced from publicly available online datasets: a data set of 200 left-thumbprint images, comprising 100 male and 100 female fingerprint images. A set of 100 images, 50 male and 50 female, was used for testing. The accuracy achieved in identifying gender from fingerprints was found to be 78% for male and 82% for female samples.

Mahendra Kanojia, Niketa Gandhi, Leisa J. Armstrong, Chetna Suthar
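The DWT step above is the feature extractor; the paper does not state which wavelet was used, so as a sketch only, here is one decomposition level of the simplest DWT (the Haar wavelet) in pure Python, producing the four standard sub-bands from which statistics such as sub-band energies could be fed to an ANN:

```python
def haar_step(signal):
    """One level of the 1-D Haar transform: pairwise averages and differences."""
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    diff = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg, diff

def haar2d_level(image):
    """One 2-D decomposition level (rows first, then columns),
    yielding the LL, LH, HL and HH sub-bands of an even-sized image."""
    lows, highs = [], []
    for row in image:
        a, d = haar_step(row)
        lows.append(a)
        highs.append(d)

    def cols_step(mat):
        cols = list(zip(*mat))                       # work column-wise
        a_cols, d_cols = zip(*(haar_step(list(c)) for c in cols))
        # transpose back to row-major order
        return ([list(r) for r in zip(*a_cols)],
                [list(r) for r in zip(*d_cols)])

    LL, LH = cols_step(lows)    # approximation and horizontal detail
    HL, HH = cols_step(highs)   # vertical and diagonal detail
    return LL, LH, HL, HH

# Toy 2x2 "image"; LL recovers the block average, the others the details.
LL, LH, HL, HH = haar2d_level([[1, 2], [3, 4]])
```

Repeating `haar2d_level` on the LL band six times gives the 6-level decomposition the paper mentions.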
Indian Mobile Agricultural Services Using Big Data and Internet of Things (IoT)

Mobile services are now widely available to Indian farmers and other agriculture stakeholders, providing a range of functionality from access to market prices to climate services. This paper presents an overview of the various mobile apps that use Big Data and IoT technologies, reporting findings from a survey of mobile application services available to stakeholders in the Indian agricultural sector. The mobile applications were categorized into various service types, and assessments were made of the functionality provided and the proportion of apps in each service category. The paper also addresses the strengths and weaknesses of these services, draws conclusions and outlines future trends in the availability of these mobile app services.

Pallavi Chatuphale, Leisa Armstrong
A Study of the Privacy Attitudes of the Users of the Social Network(ing) Sites and Their Expectations from the Law in India

In an era of information revolution and Web 2.0 technologies, Social Network(ing) Sites (SNSs) have become a popular medium for freedom of expression and for building and maintaining networks with strangers and acquaintances. A large amount of personal data is disclosed by users, intentionally or unknowingly, on these sites. The protection of this data at rest and in motion, and its further processing by SNSs and their third parties, is a cause for concern. In the present study, the attitudes of Indian users of SNSs towards data privacy and their expectations from the law in India are explored and analyzed to validate the need for a data privacy law in India. This study provides timely guidance for policy makers who are currently engaged in framing a data protection framework, on the directions of the Supreme Court of India, by following due process of law.

Sandeep Mittal, Priyanka Sharma
Backmatter
Metadata
Title
Intelligent Systems Design and Applications
edited by
Prof. Dr. Ajith Abraham
Pranab Kr. Muhuri
Dr. Azah Kamilah Muda
Dr. Niketa Gandhi
Copyright Year
2018
Electronic ISBN
978-3-319-76348-4
Print ISBN
978-3-319-76347-7
DOI
https://doi.org/10.1007/978-3-319-76348-4