
About this Book

This book constitutes the refereed proceedings of the Second International Conference on Information, Communication and Computing Technology, ICICCT 2017, held in New Delhi, India, in May 2017.

The 29 revised full papers and the 5 revised short papers presented in this volume were carefully reviewed and selected from 219 submissions. The papers are organized in topical sections on network systems and communication security; software engineering; algorithm and high performance computing.

Table of Contents

Frontmatter

Network Systems and Communication Security

Frontmatter

CbdA: Cloud Computing Based DMIS on ANEKA

A cloud computing application can have its processing tasks divided into small independent work units. These work units (tasks, threads, etc.) execute in parallel, completely independently of each other, over the cloud. The proposed disaster management information system (DMIS) can be used by amateurs, bureaucrats, technocrats and professionals working in the field or in a disaster-affected area to handle any record-based information, archive it, analyze it and even use it for forecasting the occurrence of disasters. Designing and deploying such an application over the cloud has given DMIS the benefit of wide reach coupled with high performance, high reliability and better security.

Shweta Meena, Rahul Johari, John Sushant Sundharam, Kalpana Gupta

Design of Task Scheduling Model for Cloud Applications in Multi Cloud Environment

Task scheduling is an important part of the cloud computing environment with heterogeneous resources. Its goal is to allocate tasks to the most suitable resources so as to increase performance in terms of dynamic parameters. The proposed scheduling model is constructed for cloud applications in a multi-cloud environment and implemented in three phases (minimization, grouping & ranking, and execution); average waiting time, average turnaround time, completion time and makespan are considered as performance parameters. In this scheduling model, the execution times of tasks in cloud applications are generated from normal and exponential distributions. Ranking of tasks is based on the shortest job first (SJF) strategy, and results are compared with ranking methods based on first come first serve (FCFS) and largest processing time first (LPTF). The proposed scheduling model gives better performance on the defined performance parameters.

P. K. Suri, Sunita Rani
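The ranking strategies this abstract compares can be illustrated with a minimal single-resource sketch; the task execution times below are invented for illustration (the paper draws them from normal and exponential distributions).

```python
# Compare SJF, FCFS and LPTF orderings by average waiting time on one
# resource. Task times are hypothetical.

def avg_waiting_time(exec_times):
    """Average waiting time when tasks run in the given order."""
    waits, elapsed = [], 0
    for t in exec_times:
        waits.append(elapsed)
        elapsed += t
    return sum(waits) / len(waits)

tasks = [8, 3, 10, 2, 6]            # execution times (assumed units)
fcfs = tasks                        # first come first serve: arrival order
sjf = sorted(tasks)                 # shortest job first
lptf = sorted(tasks, reverse=True)  # largest processing time first

print(avg_waiting_time(fcfs))   # 12.6
print(avg_waiting_time(sjf))    # 7.4
print(avg_waiting_time(lptf))   # 15.8
```

SJF minimizes average waiting time on a single resource, which is why it serves as the baseline ranking in comparisons like this one.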

Analysis of Epidemic Outbreak in Delhi Using Social Media Data

Social media generates a vast amount of data related to epidemic outbreaks every year, and the data produced by platforms such as Twitter for health surveillance applications is increasing exponentially. Chikungunya and dengue took a toll on Delhi in 2016, and mining Twitter data reflects the status of the chikungunya and dengue outbreak there. In this paper, tweets extracted from Twitter over a time period using epidemic-related keywords are classified into relevant epidemic-related tweets with 90% accuracy, using a supervised classification technique, the Naïve Bayes classifier, with manually tagged training data. The relevant tweets are then counted to analyze the spread, estimate the most affected month during the outbreak, and compare the result with official health statistics.

Sweta Swain, K. R. Seeja
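The classification step can be sketched with a toy Naïve Bayes over word counts; the five hand-labelled "tweets" below are invented stand-ins for the paper's manually tagged corpus.

```python
# Toy multinomial Naïve Bayes with Laplace smoothing, classifying short
# texts as epidemic-relevant or not. Training data is invented.
import math
from collections import Counter

train = [
    ("dengue cases rising in delhi hospital", "relevant"),
    ("chikungunya fever outbreak reported", "relevant"),
    ("mosquito borne dengue alert issued", "relevant"),
    ("great movie night with friends", "irrelevant"),
    ("delhi traffic is terrible today", "irrelevant"),
]

class_docs = Counter(label for _, label in train)
word_counts = {c: Counter() for c in class_docs}
for text, label in train:
    word_counts[label].update(text.split())
vocab = {w for c in word_counts for w in word_counts[c]}

def classify(text):
    """Pick the class maximizing log P(c) + sum log P(w|c)."""
    best, best_score = None, -math.inf
    for c in class_docs:
        score = math.log(class_docs[c] / len(train))
        total = sum(word_counts[c].values())
        for w in text.split():
            score += math.log((word_counts[c][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = c, score
    return best

print(classify("dengue outbreak in delhi"))   # relevant
```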

Erasure-Coded Network Backup System (ECNBS)

Traditionally, file servers are used as the storage medium in the form of Network Attached Storage. However, backup servers are a necessity: as a rule of thumb, data in a network environment needs to be replicated on 3 different machines. The amount of data replicated, together with issues such as consistency and concurrency maintenance, becomes a huge overhead. As a solution, erasure-coded storage is tipped to be the next best alternative. In erasure-coded storage, an object (file) is broken down into n blocks and encoded, and redundant parity blocks are created mathematically to provide redundancy; in case of loss of original blocks of data, parity blocks can be used for recovery. Several erasure code techniques exist, namely Reed-Solomon, Hierarchical, Self-Repairing and Regenerating codes. Each has its specificity: some aim to diminish bandwidth consumption, while others lower computational load. To date, there are only a few implementations based on erasure codes. Our contribution is a novel architecture for a network backup system using erasure codes, ECNBS, comprising three layers. The Interface Layer presents the user with a layout of files similar to a Windows environment, whereby files are stored in folders and subfolders. The Intermediate Layer (Mapping Layer) stores information about the files and the locations of the related blocks. The Storage Layer is where blocks of data are physically stored. The newly implemented system is fully functional and has been compared to the traditionally used file server.

Aatish Chiniah, Jellina Aishwarta Devi Dhora, Chaahana Juhinee Sandooram
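The recovery idea behind erasure-coded storage can be shown with the simplest possible code, a single XOR parity block; real systems like the one described use Reed-Solomon or similar codes, which tolerate more than one lost block, whereas XOR parity recovers exactly one.

```python
# Single-parity erasure coding sketch: split data into n blocks, add one
# XOR parity block, and rebuild any single lost block from the survivors.

def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, byte in enumerate(blk):
            out[i] ^= byte
    return bytes(out)

def encode(data, n):
    """Return n equal-size data blocks plus one parity block."""
    if len(data) % n:
        data += b"\x00" * (n - len(data) % n)   # pad to a multiple of n
    size = len(data) // n
    blocks = [data[i * size:(i + 1) * size] for i in range(n)]
    return blocks + [xor_blocks(blocks)]

def recover(blocks, lost):
    """Rebuild the block at index `lost` by XOR-ing the survivors."""
    survivors = [b for i, b in enumerate(blocks) if i != lost]
    return xor_blocks(survivors)

blocks = encode(b"backup me please", 4)
print(recover(blocks, 2) == blocks[2])   # True: lost block rebuilt
```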

Morphological Analysis and Synthesis of Manipuri Verbs Using Xerox Finite-State Tools

One of the basic components of any natural language processing application is the morphological analyzer, which produces the constituent morphemes of a given word. As every language has its own unique morphological features, approaches to analyzing each language may vary. Manipuri, a Tibeto-Burman language, is agglutinative. Its verbal morphology is considered very rich and complex because of its morphosyntax, morphophonemic alterations, long-distance dependencies, reduplication, etc. Developing a morphological analyzer and word generator for such a language is a challenging task, especially when standard documentation of its grammar and spelling rules is not available. This paper presents the morphological analysis of Manipuri verbs using finite-state techniques and tools, and shows how the same analyzer can be used to generate/synthesize words from given verb roots and probable lexical tags.

Ksh. Krishna B. Singha

Bi-objective Cross-Layer Design Using Different Optimization Methods in Multi-flow Ad-Hoc Networks

In this paper, we employ three different optimization methods to solve a bi-objective cross-layer design problem in ad-hoc networks: the active-set, interior-point and sequential quadratic programming (SQP) methods. Specifically, we formulate the problem as a nonlinear optimization problem subject to the underlying network operating conditions. The two objectives considered are minimizing the aggregate link powers and maximizing the overall network utility. Two ad-hoc topology scenarios with a multi-flow network design are used to implement the proposed problem. The simulation results demonstrate the convergence of the various iterative methods to their optimal solutions within a finite number of iterations. The maximum convergence rate achieved in our scheme is as high as 76.5%.

Ridhima Mehta, D. K. Lobiyal

Power Analysis of a Network Using DECAP Algorithm by Varying Network Sizes and Users

Green networking is of recent interest. Network devices such as access points, switches, computers and servers are major sources of energy consumption. In this paper, an algorithm is developed to save energy by avoiding the wastage caused by access points that remain switched on all the time. The algorithm runs on a centralized controller, which maintains the whole network's information and the configuration of access points, switches, clients, etc., and a log file keeps track of the sleep-mode timing of the access points. The controller stores this information by linking the clients, cluster heads and secondary access points simultaneously; communication between all modules (client, cluster head, secondary AP) is done by sending packets from source to destination. To validate the algorithm, experiments were carried out at Banasthali Vidyapith by varying network size, users, number of access points deployed, etc. The presented approach works in two phases: an association phase followed by a disassociation phase. From the results, it is inferred that the proposed algorithm saves a large amount of energy.

Anup Bhola, C. K. Jha

Green Communication: An Emerging Telecommunication Technology-Its Research Challenges, Techniques and Applications

Today, human life without telecommunication is hard to imagine. Technology has grown exponentially, in proportion with the rate of telecommunication usage over recent years, and will keep growing to connect every individual entity through wired or wireless media. However, the increasing demand, constant development and rapid expansion in the production of new and advanced devices have a significant effect on the global environment in terms of energy consumption, radiation effects on life, biological changes, disappearance of living species, etc. These prevalent issues have motivated the research community towards green communication. This review paper elaborates on the concepts of green communication, providing clear insight into its research challenges and the techniques adopted to achieve a green future.

Sasi Kiran Sajja, Padmavathy N.

A 1.25 THz High Gain Hybrid Circuit with CNT Model Performance Optimization for Radar Sensors Application

This work presents a novel hybrid circuit topology with a carbon nanotube (CNT) model that provides high gain at 1.25 THz for radar sensor applications. A single-walled CNT provides the RF circuit model and demonstrates its ability to resonate at terahertz frequencies. The hybrid structure achieves a wide impedance bandwidth of 0.33 THz, within the range of 1.07 to 1.42 THz. A transmission-line radiator is used as a compensator to cancel parasitic capacitance, and more than 30 dB of forward gain is achieved. A minimum noise figure of 0.4 dB is also achieved by tuning the inductor of the RF circuit model at 1.25 THz. The whole circuit topology is implemented in Advanced Design System with an RF simulator using a 45 nm predictive technology model. This work is a first attempt at a THz circuit topology for soil-measurement parameters.

Sandeep Kumar, Van-Ha Nguyen, Won-young Jung, Hanjung Song

Image Based Password Composition Using Inverse Document Frequency

The password remains one of the main authentication methods today. The challenge with passwords as an authentication system is their dependency on humans: their strength or weakness is decided by the alphanumeric string set by users. A number of websites demand that users use strong passwords even though they do not safeguard any critical information assets. This places an unnecessary cognitive burden on users, which can be reduced by minimizing the number of strong passwords they have to remember. A password composition scheme that considers the criticality of the information asset is required for this purpose. This article presents one such scheme using inverse document frequency. Users are authenticated with a valid English sentence; sentences are easier to recall than arbitrary alphanumeric strings because of their semantic nature, and since humans are good at recalling context-based information, users select their authentication sentence using an image as context.

K. P. Arun, Arun Mishra
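Inverse document frequency, the weighting the scheme is built on, scores a word by how rare it is across a corpus. A minimal sketch, with an invented three-document corpus and sentence:

```python
# IDF scoring: idf(w) = log(N / df(w)), so rarer words score higher.
# Corpus and sentence are invented for illustration.
import math

corpus = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "a bird flew over the house",
]

def idf(term, docs):
    """log(N / df); unseen terms get the maximum score log(N + 1)."""
    df = sum(term in doc.split() for doc in docs)
    return math.log(len(docs) / df) if df else math.log(len(docs) + 1)

sentence = "the cat flew"
scores = {w: idf(w, corpus) for w in sentence.split()}
print(scores)   # "flew" scores highest, "the" scores 0
```

A sentence containing rarer (higher-IDF) words carries more entropy, which is one plausible way such a scheme could grade password-sentence strength.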

Dynamic Threshold-Based Dynamic Resource Allocation Using Multiple VM Migration for Cloud Computing Systems

Compared to traditional distributed computing systems, cloud computing systems are more reliable, dynamic and scalable. A current challenge is managing resources to maintain scalability in a dynamic environment: the performance of cloud computing systems must be improved by provisioning and allocating on-demand resources to reduce time. Some existing methods are based on static parameters such as a CPU utilization threshold, resources and workload; they give less efficient results and handle over-provisioning and under-provisioning situations poorly. In this paper we propose a resource allocation model based on dynamic parameters. The proposed method, dynamic threshold-based dynamic resource allocation, can optimize resource utilization and time. The model is implemented on CloudSim, and experimental results show that it improves resource utilization and time.

Sonam Seth, Nipur Singh

3D Weighted Centroid Localization Algorithm for Wireless Sensor Network Using Teaching Learning Based Optimization

The purpose of this paper is to improve the localization accuracy of range-free algorithms in three-dimensional (3D) space for wireless sensor networks (WSNs). A weighted centroid localization algorithm using teaching-learning-based optimization (WCL-TLBO) is proposed to improve positioning accuracy in the 3D space of a WSN. In range-free algorithms, only the received signal strength (RSS) between nodes is needed to determine the positions of target nodes. The RSS value gives a clue to the distance between sensor nodes, but the relation between RSS and distance is non-linear. To overcome this non-linearity, a fuzzy logic system (FLS) is used: the edge weights of WCL are modelled using the FLS. To further reduce errors, TLBO is applied to optimize these edge weights. Simulation results establish the superiority of the proposed algorithm over other existing range-free algorithms in similar scenarios in terms of localization accuracy.

Gaurav Sharma, Ashok Kumar
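The weighted-centroid step itself is simple: the target's position is estimated as the weight-averaged position of the anchor nodes. The anchors and weights below are invented; in the paper the weights come from the fuzzy logic system over RSS and are then tuned by TLBO.

```python
# 3-D weighted centroid localization: estimate = sum(w_i * anchor_i) / sum(w_i).

def weighted_centroid(anchors, weights):
    """Weight-averaged 3-D position of the anchors."""
    total = sum(weights)
    return tuple(
        sum(w * a[d] for a, w in zip(anchors, weights)) / total
        for d in range(3)
    )

anchors = [(0, 0, 0), (10, 0, 0), (0, 10, 0), (0, 0, 10)]
weights = [4.0, 2.0, 2.0, 2.0]   # stronger RSS -> larger weight (assumed)
print(weighted_centroid(anchors, weights))   # (2.0, 2.0, 2.0)
```

The estimate is pulled toward the anchor with the strongest signal, which is the intuition WCL relies on; optimizing the weights then corrects for the non-linear RSS-distance relation.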

An Approach to Build a Sentiment Analyzer: A Survey

With the increase in the use of social networking sites like Twitter, anyone can share or express his or her views on a common stage. A Twitter sentiment analyzer is a tool used to find out whether a corpus of data is positive, negative or neutral. Our work focuses on the steps of this opinion mining problem that are necessary to fetch opinions from a corpus. We also look at the strengths of, and the scope for future research on, Twitter sentiment analyzers.

Singh Dharmendra, Bhatia Akshay, Singh Ashvinder

Malicious PDF Files Detection Using Structural and Javascript Based Features

Malicious PDF files have recently been considered one of the most dangerous threats to system security. The flexible, code-bearing nature of the PDF format enables an attacker to carry malicious code onto a computer system for user exploitation. Many solutions have been developed by security vendors for the safety of users' systems, but they are still inadequate. In this paper, we propose a method for malicious PDF file detection via a machine learning approach. The proposed method extracts features from the PDF file structure and embedded JavaScript code, leveraging an advanced parsing mechanism. Instead of looking for a specific attack inside the content of the PDF, which is a quite complex procedure, we extract features that are often used for attacks. Moreover, we present experimental evidence for the choice of learning algorithm, which provides remarkably high accuracy compared to other existing methods.

Sonal Dabral, Amit Agarwal, Manish Mahajan, Sachin Kumar

A System Architecture for Mapping Application Data into Complex Graph

Applications generate an abundance of interrelated data from heterogeneous sources in different formats, which can be unified and modeled using complex graphs. Complex graphs provide a natural way of modeling relationships among entities, an important aspect of today's applications such as social networks, biological networks, etc. We have therefore proposed an architecture for transforming application data into a graph that is subsequently utilized by various subsystems of the application. We have also incorporated software engineering properties such as maintainability, trustworthiness and robustness into the architecture. In addition, an architecture prototype for a student course management system has been implemented using an academic data set.

Sonal Tuteja, Rajeev Kumar

Ontology-Driven Shopping Cart and Its Comparative Analysis

Design patterns are formal solutions to commonly occurring design problems in software development. Compound patterns are amalgamations of two or more individual design patterns. In spite of their advantages, they have certain limitations in the form of coupling, increased dependency between class hierarchies and several others. In this paper we present a comparative analysis between ontology-driven compound patterns and their classical GOF (Gang of Four) versions. We implemented a shopping cart application based on a compound pattern comprising the visitor, observer and strategy patterns, then implemented the same application based on an ontology-driven version of the compound pattern, and compared the two designs. We found that the ontological approach has certain advantages over the former in terms of modifiability and maintainability. The application based on the ontology-driven design pattern even adapts to some changes made at runtime: end users can change the ontology-driven application, and thus its behavior, while it runs. We also made modifications under non-runtime conditions in both applications and obtained convincing results, noting that the time and effort required to extend the application are much less in the ontology version than in the GOF version.

Aditya Vardhan, Amrita Chaturvedi

Security of Web Application: State of the Art

Research Theories and Industrial Practices

The complexity inherent in web applications is growing rapidly, so testing them with more sophisticated approaches is essential. Several approaches to security testing are available, but only a few of them are appreciated in common IT industries and hence put into practice. This paper recapitulates the current approaches, considering the limitations of real-world applications. An effort has been made to bridge the gaps through a study of the foremost web security concerns and the current web testing techniques, including their strengths and weaknesses. The paper highlights the security issues pertinent to web applications, along with actual industry practices related to these issues, and the gap between practice and theory in the industry.

Habib ur Rehman, Mohammed Nazir, Khurram Mustafa

Ring Segmented and Block Analysis Based Multi-feature Evaluation Model for Contrast Balancing

Image capture in different indoor and outdoor environments requires high-quality, sensitive camera devices; capture in fog, at night, in a rainy atmosphere, etc., can suffer from unequal contrast. Visibility is the primary concern for any image processing application that must extract content information and features accurately. In this paper, a ring-segment-based block feature evaluation method is provided to set up the enhancement individually in each segmented region. In this model, an intelligent method is applied to the raw image to locate regions with extreme visibility differences, using ring-specific geographical mapping. Three blocks from each region are evaluated on visibility, entropy and frequency parameters, and a comparative evaluation of block content strength yields the reference block with maximum content. Finally, each region block is mapped to this reference block to stabilize the contrast imbalance. The proposed method is applied to real-time captured images with different lighting effects and compared against the histogram equalization method on the PSNR and MSE parameters. The evaluation results show that the proposed method enhances the visible quality and error robustness of dark, dull and faded images.

Kapil Juneja

A Scheme of Visual Object Tracking for Human Activity Recognition in Social Media Analytics

Human action recognition, i.e. recognizing the action of a human in a video, is an open challenge in the computer vision area. It can be divided into two steps: first, extracting features from the video, and second, using a classifier to tag the action, such as jump, walk, sit or hand waving. It is a very challenging task due to noise, occlusion, motion blur and camera movement, and actions may be performed by a single person or by several people at a time. Activity recognition is nowadays of great importance because of its many applications, such as surveillance systems at airports, patient monitoring systems and care of elderly people, to mention just a few. In this work we propose using the Ohta color space along with RGB channels and LBP texture. Texture features are extracted with the local binary pattern: five major non-uniform rotation-invariant LBPs, together with Ohta and RGB color, are used to represent the target. This scheme successfully extracts color, edge and corner information. Our proposed method maintains a trade-off between exactness of object detection and speed. The fusion of rotation-invariant LBP and Ohta color features makes this scheme useful in object tracking for specific human activity recognition. The fused features are tested with a mean shift tracker.

Naresh Kumar

DNA Based Cryptography

A newly emerging research topic in the field of information storage, security and cryptography is DNA-based cryptography. DNA is known to carry information from one generation to the next and is turning out to be very promising for cryptography; its storage capacity and vast parallelism are used for cryptographic purposes. In this paper, we review the progress of DNA cryptography, discuss DNA computing, and propose a method for DNA-based cryptography in 3 phases, in which we first encrypt the plaintext through our proposed encryption algorithm and then prepare the desired DNA sequence, ready to send.

Archana Gahlaut, Amit Bharti, Yash Dogra, Puneet Singh
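The final encoding step of such schemes typically maps binary data onto nucleotides at two bits per base. A minimal sketch, using one common mapping convention (not necessarily the one the paper proposes); the input would normally be ciphertext from the encryption phase.

```python
# Binary-to-DNA mapping: 2 bits per nucleotide, plus the inverse decoding.
# The A/C/G/T table is one common convention, assumed here for illustration.

BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
INV = {v: k for k, v in BASE.items()}

def to_dna(data: bytes) -> str:
    """Encode bytes as a DNA string, two bits per base."""
    bits = "".join(f"{b:08b}" for b in data)
    return "".join(BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def from_dna(seq: str) -> bytes:
    """Decode a DNA string back to bytes."""
    bits = "".join(INV[ch] for ch in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

seq = to_dna(b"hi")
print(seq)                      # CGGACGGC
print(from_dna(seq) == b"hi")   # True: round-trip recovers the input
```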

A Noise Robust VDD Composed PCA-LDA Model for Face Recognition

Face recognition is the most widely used biometric approach, implemented in online, offline and mobile services. Noise is one major disturbing factor that misrepresents an image and obscures facial features; image restoration aims to fix the impact of noise and improve the recognition rate. In this paper, a dual-mode VDD (Vector-Directional-Distance) inpainting method, a composition of the vector-directional and directional-distance filters, is presented to preserve image content. The rectified image is processed with a Gabor filter to generate robust structural and textural features, and the featured image is classified with PCA and LDA. The method is applied to the Utrecht ECVP and Stirling datasets with added salt-and-pepper, Gaussian and Poisson noise, and compared against PCA and LDA preceded by median, Gaussian and morphological filters. The comparison identifies the significance of the VDD filter over the other face rectification methods considered in this work: the VDD-composed PCA-LDA method enhances the accuracy of face recognition on noise-disturbed images.

Kapil Juneja

Software Engineering

Frontmatter

Extending AHP_GORE_PSR by Generating Different Patterns of Pairwise Comparison Matrix

Software requirements selection and prioritization (SRSP) is an important process in software requirements elicitation and analysis methods such as AGORA, the PRFGORE process, GOASREP, AHP_GORE_PSR, etc. In these methods, pairwise comparison matrices (PCMs) are constructed by the decision makers (DMs) for the selection and prioritization of software requirements, and the consistency of a PCM decides whether the preferences given by the DM for the software requirements are consistent or not. If the PCM is not consistent, it may lead to the failure of the software system. Based on our literature review, we identify that little attention has been given in SRSP methods to checking the consistency of the PCM. To address this issue, we extend AHP_GORE_PSR by generating different patterns of PCM. Finally, we explain the proposed method on the example of an Institute Examination System.

Mohd. Sadiq, Arashiya Afrin
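The consistency check the abstract refers to is the standard AHP test: compute the PCM's principal eigenvalue, form the consistency index CI = (λmax − n)/(n − 1), divide by Saaty's random index RI, and accept if the ratio is below 0.1. A sketch with an invented 3×3 matrix:

```python
# AHP consistency ratio for a pairwise comparison matrix, in pure Python.
# RI values are Saaty's standard random indices; the example PCM is invented.

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}

def lambda_max(m, iters=100):
    """Principal eigenvalue via power iteration."""
    n = len(m)
    v = [1.0] * n
    norm = 1.0
    for _ in range(iters):
        w = [sum(m[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(w)
        v = [x / norm for x in w]
    return norm

def consistency_ratio(m):
    """CR = CI / RI, where CI = (lambda_max - n) / (n - 1)."""
    n = len(m)
    ci = (lambda_max(m) - n) / (n - 1)
    return ci / RI[n]

pcm = [[1,   2,   4],
       [1/2, 1,   2],
       [1/4, 1/2, 1]]   # perfectly consistent ratios: CR ~ 0

print(consistency_ratio(pcm) < 0.1)   # True: preferences accepted
```

For a perfectly consistent matrix λmax equals n, so CR is 0; inconsistent judgments push λmax above n and CR above the 0.1 threshold.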

Analysis of Errors in Safety Critical Embedded System Software in Aerial Vehicle

A study and analysis of software errors has been carried out. The present study analyzed approximately 400 errors in safety-critical navigation system software across 10 project variants. The framework used was: (i) identification of the types and severity of errors; (ii) gathering errors from the error database and categorizing them by type and severity; (iii) recognition of patterns and derivation of their relation to software change rationale; (iv) proposal of error prevention guidelines for software requirements specification and code implementation in future software projects, as part of the Software Quality Assurance (SQA) process. The main aim is to propose error prevention guidelines that are practical to implement despite project schedules. This is considered a valuable outcome of the Independent Verification and Validation (IV&V) effort carried out for this safety-critical embedded software.

Lakshmi K.V.N.S., Sanjeev Kumar

Method to Resolve Software Product Line Errors

Feature models (FMs) are of utmost importance in representing variability in a Software Product Line (SPL), by focusing on the set of valid combinations of features that a software product can have. FM quality is one of the factors that impacts the quality of the SPL, and several types of errors in FMs reduce its benefits. Although FM error handling is a mature topic, it has not been completely solved yet. In this paper, disparate studies of FM errors in SPL are summarized, and a rule-based method to fix these errors is proposed and explained with the help of case studies. The results of an evaluation with FMs of up to 1000 features show the scalability and accuracy of the given method, which improves SPL quality.

Megha, Arun Negi, Karamjit Kaur

Test Case Prioritization Based on Dissimilarity Clustering Using Historical Data Analysis

Test case prioritization reorders test cases based on their fault detection capability. In regression testing, when a new version is released, the previous versions' test cases are also executed to cross-check the desired functionality. Historical data captures previous fault information, which points to potential faults in the new version. Faults are not uniform across software versions, and similar test cases may keep hitting the same faults. Most prioritization techniques are based on either coverage similarity or requirements clustering, and some use historical data; however, none incorporates dissimilarity and historical data together, which would ensure coverage of varied, non-uniform faults. This paper presents a prioritization approach based on dissimilarity-based test case clustering with historical data analysis, to detect varied faults with minimum test case execution. The proposed scheme is evaluated on the well-established Defects4J dataset, and the dissimilarity algorithm is reported to perform better than untreated, random and similarity-based prioritization.

Md. Abu Hasan, Md. Abdur Rahman, Md. Saeed Siddik
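A greedy sketch of dissimilarity-driven ordering in the spirit of the paper: each next test case is the one most dissimilar (by Jaccard distance over covered elements) from those already chosen. The coverage sets are invented, and the paper additionally weights candidates with historical fault data, which is omitted here.

```python
# Greedy farthest-first test prioritization by Jaccard distance over
# coverage sets. Test names and covered elements are hypothetical.

def jaccard_dist(a, b):
    """1 - |A ∩ B| / |A ∪ B|: 0 for identical sets, 1 for disjoint."""
    return 1 - len(a & b) / len(a | b)

def prioritize(tests):
    names = list(tests)
    order = [names.pop(0)]               # seed with the first test case
    while names:
        # pick the candidate farthest from its nearest already-chosen test
        best = max(
            names,
            key=lambda t: min(jaccard_dist(tests[t], tests[o]) for o in order),
        )
        order.append(best)
        names.remove(best)
    return order

tests = {
    "t1": {"m1", "m2", "m3"},
    "t2": {"m1", "m2"},        # overlaps heavily with t1
    "t3": {"m7", "m8"},        # covers entirely different code
}
print(prioritize(tests))   # ['t1', 't3', 't2']
```

Because t3 covers none of t1's code, it jumps ahead of the near-duplicate t2, which is exactly the behaviour dissimilarity-based prioritization is after.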

Algorithm and High Performance Computing

Frontmatter

A Fast GPU Based Implementation of Optimal Binary Search Tree Using Dynamic Programming

Modern GPUs (graphics processing units) can perform computation at a very high rate compared to CPUs; as a result, they are increasingly used for general-purpose parallel computation. Parallel algorithms can be developed for GPUs using computing architectures such as CUDA (Compute Unified Device Architecture) and OpenCL (Open Computing Language). Determining an optimal binary search tree is an optimization problem: finding the arrangement of nodes in a binary search tree that minimizes average search time. A dynamic programming algorithm can solve this problem in O(n³) time with a workspace of size O(n²). We have developed a fast parallel implementation of this O(n³)-time algorithm on a GPU. To achieve this, we provide data structures suitable for parallel computation of the algorithm, efficiently utilize the available cache memory, and minimize thread divergence. Our implementation executes the algorithm in 114.4 s for an instance containing 16384 keys on an NVidia GTX 570, while a conventional CPU-based implementation takes 48166 s: a speed-up factor of 422.

Mohsin Altaf Wani, Manzoor Ahmad
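The underlying O(n³) dynamic program (without the GPU tiling the paper contributes) can be sketched as a CPU reference; the access probabilities below are invented.

```python
# Optimal BST dynamic program over half-open key intervals [i, j):
# cost[i][j] = weight[i][j] + min over roots r of cost[i][r] + cost[r+1][j].

def obst_cost(p):
    """Minimum expected search cost for keys with access probabilities p."""
    n = len(p)
    cost = [[0.0] * (n + 1) for _ in range(n + 1)]
    weight = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n):                      # single-key intervals
        weight[i][i + 1] = p[i]
        cost[i][i + 1] = p[i]
    for length in range(2, n + 1):          # longer intervals, O(n^3) total
        for i in range(n - length + 1):
            j = i + length
            weight[i][j] = weight[i][j - 1] + p[j - 1]
            cost[i][j] = weight[i][j] + min(
                cost[i][r] + cost[r + 1][j] for r in range(i, j)
            )
    return cost[0][n]

print(obst_cost([0.5, 0.1, 0.4]))   # 1.6: root key 0, key 2 below, key 1 deepest
```

The `length` loop is the sequential dependency; the GPU implementation parallelises across the independent `(i, j)` cells of each diagonal.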

A Hybridized Evolutionary Algorithm for Bi-objective Bi-dimensional Bin-packing Problem

The bin-packing problem is a widely studied combinatorial optimization problem. In the classical bin-packing problem, we are given a set of real numbers in the range (0,1] and the goal is to place them in a minimum number of bins so that no bin holds a total of more than one. In this paper we consider a bi-dimensional bin packing in which a set of rectangular items is to be packed into a minimum number of fixed-size square bins. We consider two objectives on this bi-dimensional variant: minimizing the number of bins, and minimizing the average percentage of wastage (gaps) in the bins. To solve this problem, we use Pareto optimality to evolve a set of solutions with an evolutionary algorithm (EA) hybridized with a heuristic operator, improving on the results of existing techniques.

Neeraj Pathak, Rajeev Kumar
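For the classical one-dimensional version the abstract starts from, a standard heuristic of the kind EAs are hybridized with is first-fit decreasing; the item list below is invented.

```python
# First-fit decreasing for classical bin packing: sort items descending,
# place each into the first bin with room, open a new bin otherwise.

def first_fit_decreasing(items, capacity=1.0):
    bins = []                                      # each bin is a list of items
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity + 1e-9:   # tolerance for float sums
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

items = [0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1, 0.6]
packed = first_fit_decreasing(items)
print(len(packed))   # 4 bins for a total load of 3.7
```

Heuristics like this give the EA good starting individuals; the evolutionary search then trades off bin count against wasted space across the Pareto front.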

Protein Features Identification for Machine Learning-Based Prediction of Protein-Protein Interactions

A long-awaited challenge of the post-genomic era and systems biology research is the computational prediction of protein-protein interactions (PPIs), which ultimately leads to the prediction of protein functions. The important research questions are how protein complexes with known sequence and structure can be used to identify and classify protein binding sites, and how to infer knowledge from these classifications, such as predicting the PPIs of proteins with unknown sequence and structure. Several machine learning techniques have been applied to the prediction of PPIs, but their accuracy depends wholly on the features used for training. In this paper, we survey the protein features used for the prediction of PPIs. Open research challenges and opportunities in the area are also discussed.

Khalid Raza

Load Balancing Task Scheduling Based on Variants of Genetic Algorithms: Review Paper

On the cloud platform, the basic aim of a task scheduling algorithm is to reduce the makespan of the tasks and to minimize the load on resources. Users work with large amounts of data, and the available resources often do not meet their requirements; cloud computing is used to deliver the required global performance. The stated problem has been addressed to some extent by variants of the genetic algorithm such as JLGA (Job Spanning Time and Load Balancing GA) and MPGA. This paper therefore reviews these genetic algorithm variants and compares them. Furthermore, it proposes HJLGA, a hybridization of the good features of JLGA and MPGA, which uses the min-min algorithm to initialize the population and brings in hill climbing to find the best fitness value. HJLGA satisfies the requirements of load balancing, least cost and minimum node makespan.

Ayushi Harkawat, Shilpa Kumari, Poorva Pharkya, Deepak Garg
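The min-min heuristic mentioned for population initialization can be sketched as follows; the task/machine times here are made-up illustration, and this is only the generic heuristic, not the authors' HJLGA:

```python
def min_min_schedule(exec_times):
    """Min-min heuristic. exec_times[t][m] = time of task t on machine m.

    Repeatedly picks the (task, machine) pair with the smallest completion
    time over all unscheduled tasks. Returns (assignment, makespan)."""
    n_machines = len(exec_times[0])
    ready = [0.0] * n_machines            # when each machine becomes free
    unscheduled = set(range(len(exec_times)))
    assignment = {}
    while unscheduled:
        # Globally smallest completion time among all (task, machine) pairs.
        ct, task, machine = min(
            (ready[m] + exec_times[t][m], t, m)
            for t in unscheduled for m in range(n_machines)
        )
        assignment[task] = machine
        ready[machine] = ct
        unscheduled.remove(task)
    return assignment, max(ready)

times = [[4, 6], [3, 5], [8, 2]]          # 3 tasks on 2 machines
assignment, makespan = min_min_schedule(times)
print(assignment, makespan)
```

Seeding a genetic algorithm's initial population with such a schedule (instead of purely random chromosomes) is what gives hybrids like HJLGA their head start.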

Deep Learning Model Schemes to Address the Scrutability and In-Memory Purchase Issues in Recommender System

Recommender systems are used widely across web services to suggest information or products of interest to users. The advancement of the digital information era and the pain of traversing vast amounts of information have made users realise the importance of intelligent recommender systems. Intelligent recommender systems suffer from many issues, among them scrutability, scalability and the in-memory purchase problem; the in-memory purchase problem is one of the currently persisting recommender system problems. In this paper, deep learning based recommendation schemes are framed to address the scrutability and in-memory purchase issues of the recommender system. Initially, the working of deep learning models such as feed-forward and recurrent neural networks is discussed. Then the recommendation schemes based on these deep learning models, addressing the issues of the recommender system, are detailed.

J. Sharon Moses, L. D. Dhinesh Babu

A Survey of Design Techniques for Conversational Agents

A conversational agent, also referred to as a chatbot, is a computer program which tries to generate human-like responses during a conversation. Earlier chatbots employed much simpler retrieval-based pattern-matching design techniques. However, over time a number of new chatbots evolved with the aim of becoming more human-like and hence passing the Turing test. Now, most chatbots employ generative knowledge-based techniques. This paper discusses various chatbot design techniques, the classification of chatbots, and how modern chatbots have evolved from simple pattern-matching, retrieval-based models to complex knowledge-based models. A table of major conversational agents in chronological order, along with their design techniques, is provided at the end of the paper.

Kiran Ramesh, Surya Ravishankaran, Abhishek Joshi, K. Chandrasekaran

Fault Tolerant Control, Artificial Intelligence and Predictive Analytics for Aerospace Systems: An Overview

An aircraft or spacecraft represents a highly complex engineering system. The requirements of high performance and increased autonomous capability have necessitated the use of fault tolerant control, artificial intelligence and predictive analytics in aerospace systems. This paper presents an overview of the current state of the art in this area, with a focus on the work done by the authors. Model-based fault tolerant control methods are considered for spacecraft systems, while data-driven prognostics is reviewed for predicting aircraft engine failures. The data-driven methods, including artificial intelligence, show promising results for applications to aerospace systems.

Krishna Dev Kumar, Venkatesh Muthusamy

An Optimal Multi-view Ensemble Learning for High Dimensional Data Classification Using Constrained Particle Swarm Optimization

Multiple views of a dataset are constructed using Feature Set Partitioning (FSP) methods for Multi-view Ensemble Learning (MEL). The way the features are partitioned influences the classification performance of MEL. The number of possible feature set partitions of a dataset equals the Bell number, which grows super-exponentially, making exhaustive search intractable (an NP-hard problem, shown in Fig. 1). It is essential to find the feature set partition that yields optimal MEL classification performance among all possible partitions in the high-dimensional scenario. Therefore, an optimal multi-view ensemble learning approach using a constrained particle swarm optimization method (OMEL-C-PSO) is proposed for high-dimensional data classification. Experiments have been performed on ten high-dimensional datasets using four existing feature set partitioning methods; the results show that the OMEL-C-PSO approach is feasible and effective for high-dimensional data classification.

Vipin Kumar, Sonajharia Minz
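The Bell-number count of feature set partitions referenced in the abstract can be computed with the Bell triangle recurrence, which makes its rapid growth concrete:

```python
def bell_numbers(n):
    """Return [B_0, ..., B_n] via the Bell triangle recurrence.

    B_k counts the partitions of a k-element set, i.e. the number of
    distinct ways to split k features into views."""
    row = [1]
    bells = [1]
    for _ in range(n):
        # Each new row starts with the last element of the previous row;
        # every further entry adds the element above-left.
        new_row = [row[-1]]
        for x in row:
            new_row.append(new_row[-1] + x)
        row = new_row
        bells.append(row[0])
    return bells

print(bell_numbers(10))  # [1, 1, 2, 5, 15, 52, 203, 877, 4140, 21147, 115975]
```

Even ten features admit 115,975 partitions, which is why the paper searches this space with constrained PSO rather than enumeration.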

Identifying Metaphors Using Fuzzy Conceptual Features

Metaphor comprehension is a challenging problem which equally intrigues researchers in linguistics and those working in the domain of cognition. The use of psychological features such as Imageability and Concreteness has been shown to be effective in identifying metaphors. However, there is a certain degree of vagueness, with blurring boundaries between the sub-concepts of these features, that has hitherto been largely ignored. In this paper, we tackle this issue of vagueness by proposing a fuzzy framework for metaphor detection in which linguistic variables are employed to express psychological features. We develop a Mamdani model to extract fuzzy classification rules for detecting metaphors in a text. Experiments conducted over a dataset of nominal metaphors yield encouraging results, with an F-score of more than 79%.

Sunny Rai, Shampa Chakraverty, Devendra K. Tayal
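A Mamdani-style fuzzy rule of the kind the abstract describes can be sketched with triangular membership functions; the linguistic terms, score ranges and the single rule below are illustrative assumptions, not the paper's actual rule base:

```python
def triangular(a, b, c):
    """Return a triangular membership function with support (a, c), peak b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# Hypothetical linguistic terms over psychological scores in [0, 1].
low = triangular(-0.01, 0.0, 0.5)    # peaks at 0
high = triangular(0.5, 1.0, 1.01)    # peaks at 1

def metaphor_degree(imageability, concreteness):
    # Illustrative rule: IF imageability is high AND concreteness is low
    # THEN metaphorical.  Mamdani conjunction (AND) is the minimum.
    return min(high(imageability), low(concreteness))

print(metaphor_degree(0.9, 0.2))  # firing strength 0.6
```

A full Mamdani system would aggregate several such rules (max over rule outputs) and defuzzify, but the min-based firing strength above is the core of how vague psychological features are combined.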

Backmatter
