
2019 | Book

Software Engineering

Proceedings of CSI 2015

Edited by: Prof. Dr. M. N. Hoda, Prof. Dr. Naresh Chauhan, Prof. Dr. S. M. K. Quadri, Prof. Dr. Praveen Ranjan Srivastava

Publisher: Springer Singapore

Book series: Advances in Intelligent Systems and Computing


About this book

This book presents selected proceedings of the annual convention of the Computer Society of India. Divided into 10 topical volumes, the proceedings present papers on state-of-the-art research, surveys, and succinct reviews. They cover diverse topics ranging from communications networks to big data analytics, and from system architecture to cyber security. This book focuses on Software Engineering, and informs readers about the state of the art in software engineering by gathering high-quality papers that represent the outcomes of consolidated research and innovations in Software Engineering and related areas. In addition to helping practitioners and researchers understand the chief issues involved in designing, developing, evolving and validating complex software systems, it provides comprehensive information on developing professional careers in Software Engineering. It also provides insights into various research issues such as software reliability, verification and validation, security and extensibility, as well as the latest concepts like component-based development, software process models, process-driven systems and human-computer collaborative systems.

Table of contents

Frontmatter
A Multi-agent Framework for Context-Aware Dynamic User Profiling for Web Personalization

The growing volume of information on the World Wide Web has made relevant information retrieval a difficult task, and customizing information according to user interest has become a pressing need. Personalization aims to solve many of the problems associated with the current Web. However, monitoring user behavior manually is difficult, and user interests change with the passage of time, so user profiles must be created accurately and dynamically for better personalization. Further, automation of the various tasks in user profiling is highly desirable given the large number of users involved. This work presents an agent-based framework for dynamic user profiling for a personalized Web experience. Our contribution is the development of a novel agent-based technique for maintaining long-term and short-term user interests along with context identification. A novel agent-based approach for dynamic user profiling for Web personalization has also been proposed. The proposed work is expected to provide an automated solution for dynamic user profile creation.

Aarti Singh, Anu Sharma
Implementation of Equivalence of Deterministic Finite-State Automation and Non-deterministic Finite-State Automaton in Acceptance of Type 3 Languages Using Programming Code

An automaton is used where information and materials are transformed, transmitted, and utilized to perform processes without direct human involvement. A finite automaton (deterministic or non-deterministic) is used to recognize a formal regular language. In this paper, we represent the acceptance of a formal regular language (Type 3 according to Noam Chomsky) by both deterministic and non-deterministic automata. We use a simple algorithm to implement finite-state automata in a middle-level language and demonstrate that a deterministic finite-state automaton provides a unique solution in comparison to a non-deterministic finite-state automaton. More importantly, any problem solved by a non-deterministic finite-state automaton can also be solved using an equivalent deterministic finite-state automaton.
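
As a rough illustration (not the authors' code), the sketch below simulates a DFA and an equivalent NFA on the same Type 3 language; the transition tables, which accept binary strings ending in "01", are assumptions made for the example.

```python
# Hypothetical example: both machines accept binary strings ending in "01".
DFA = {("q0", "0"): "q1", ("q0", "1"): "q0",
       ("q1", "0"): "q1", ("q1", "1"): "q2",
       ("q2", "0"): "q1", ("q2", "1"): "q0"}
NFA = {("s0", "0"): {"s0", "s1"}, ("s0", "1"): {"s0"},
       ("s1", "1"): {"s2"}}

def dfa_accepts(word, start="q0", finals={"q2"}):
    state = start
    for ch in word:
        state = DFA[(state, ch)]          # deterministic: exactly one successor
    return state in finals

def nfa_accepts(word, start="s0", finals={"s2"}):
    states = {start}
    for ch in word:
        # non-deterministic: follow every possible successor (on-the-fly subset construction)
        states = set().union(*(NFA.get((s, ch), set()) for s in states))
    return bool(states & finals)

assert dfa_accepts("11001") == nfa_accepts("11001") == True
assert dfa_accepts("0110") == nfa_accepts("0110") == False
```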

Rinku, Chetan Sharma, Ajay Kumar Rangra, Manish Kumar, Aman Madaan
A Multi-factored Cost- and Code Coverage-Based Test Case Prioritization Technique for Object-Oriented Software

Test case prioritization is the process of ordering test cases so that the maximum number of faults is detected as early as possible; executing test cases in an arbitrary order is expensive. In the present work, a multi-factored cost- and code coverage-based test case prioritization technique is presented that prioritizes test cases based on the percentage coverage of the considered factors and the code covered by the test cases. For validation and analysis, the proposed approach has been applied to three object-oriented programs, and the efficiency of the prioritized suite is analyzed by comparing the APFD of the prioritized and non-prioritized test cases.
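
The APFD metric used above has the standard form APFD = 1 − (TF1 + … + TFm)/(n·m) + 1/(2n), where TFi is the position of the first test that reveals fault i. The sketch below computes it for an assumed fault-detection matrix; the test cases, faults, and orders are illustrative only.

```python
def apfd(order, fault_matrix):
    """Average Percentage of Faults Detected for a given test-case order.
    fault_matrix[t] is the set of faults detected by test case t."""
    faults = set().union(*fault_matrix.values())
    n, m = len(order), len(faults)
    first_pos = {}
    for pos, t in enumerate(order, start=1):
        for f in fault_matrix.get(t, set()):
            first_pos.setdefault(f, pos)          # remember first detection position
    return 1 - sum(first_pos[f] for f in faults) / (n * m) + 1 / (2 * n)

# Illustrative data: 4 test cases, 3 faults.
coverage = {"t1": {"f1"}, "t2": {"f2", "f3"}, "t3": set(), "t4": {"f1", "f3"}}
print(apfd(["t2", "t4", "t1", "t3"], coverage))   # prioritized order -> higher APFD
print(apfd(["t1", "t2", "t3", "t4"], coverage))   # original order
```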

Vedpal, Naresh Chauhan
A Novel Page Ranking Mechanism Based on User Browsing Patterns

The primary goal of every search engine is to provide information sorted according to the user's need. To achieve this goal, it employs ranking techniques to sort Web pages based on their importance and relevance to the user query. Most ranking techniques to date are based on Web content mining, link structure mining, or both; however, they do not consider the user's browsing patterns and interests while sorting the search results, so the ranked list fails to serve the user's information need efficiently. In this paper, a novel page ranking mechanism based on user browsing patterns and link visits is proposed. Simulation results show that the proposed ranking mechanism performs better than the conventional PageRank mechanism in terms of providing satisfactory results to the user.
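
A minimal sketch of one way to bias PageRank by observed link visits, assuming an illustrative visit-count table; this is not necessarily the authors' exact formulation.

```python
import numpy as np

def visit_weighted_pagerank(links, visits, d=0.85, iters=100):
    """PageRank variant where a link's share of rank is proportional to how
    often users actually followed it (visit counts are illustrative)."""
    pages = sorted({p for p, _ in links} | {q for _, q in links})
    idx = {p: i for i, p in enumerate(pages)}
    n = len(pages)
    M = np.zeros((n, n))
    for src in pages:
        out = [(dst, visits.get((src, dst), 1)) for (s, dst) in links if s == src]
        total = sum(w for _, w in out)
        if total == 0:
            M[:, idx[src]] = 1.0 / n          # dangling page: spread rank uniformly
        else:
            for dst, w in out:
                M[idx[dst], idx[src]] = w / total
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * M @ r           # power iteration
    return dict(zip(pages, r))

links = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "A")]
visits = {("A", "B"): 9, ("A", "C"): 1}        # users mostly follow A -> B
print(visit_weighted_pagerank(links, visits))
```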

Shilpa Sethi, Ashutosh Dixit
Indexing of Semantic Web for Efficient Question Answering System

A search engine is a program that searches documents for the keywords in a user's query and returns a list of Web pages containing those keywords. Search engines cannot differentiate between relevant documents and spam, and some search engine crawlers retrieve only the document title rather than the entire text. The major objective of a Question Answering system is to develop techniques that not only retrieve documents but also provide exact answers to natural language questions. Many Question Answering systems can carry out the processing needed to attain higher accuracy levels; however, there has been no major progress on techniques for quickly finding exact answers. Existing Question Answering systems are unable to handle a variety of questions and reasoning-based questions, and in the absence of data sources, a QA system fails to answer the query. This paper investigates a novel technique for indexing the semantic Web for an efficient Question Answering system. The proposed techniques include a manually constructed question classifier based on <Subject, Predicate, Object> triples, retrieval of documents specifically for Question Answering, semantic-type answer extraction, and answer extraction via a manually constructed index for every category of question.

Rosy Madaan, A. K. Sharma, Ashutosh Dixit, Poonam Bhatia
A Sprint Point Based Tool for Agile Estimation

In an agile environment, software is developed by self-organizing and cross-functional teams. Agile promotes ad hoc programming, iterative and incremental development, and a time-boxed delivery approach, and it remains flexible to change. Estimation approaches in agile are very different from traditional ones, yet research on estimation in agile methodologies is considerably less advanced. This paper focuses on the estimation phase of agile software development, which probably ranks as the crucial first step; poor decisions in the estimation activity can cause software failures. To support estimation in an agile environment, delay-related factors that can postpone the release date of a project are proposed, and a new Sprint-point based estimation tool (SPBE) is designed and developed in Excel. The proposed tool is based on the Sprint-point based Estimation Framework and places major emphasis on accurate estimates of effort, cost, and release date by constructing detailed requirements as accurately as possible.

Rashmi Popli, Naresh Chauhan
Improving Search Results Based on Users’ Browsing Behavior Using Apriori Algorithm

The World Wide Web (WWW) is decentralized, dynamic, and diverse, and it is growing exponentially in size. Various ranking methods are used to improve search results. Given the vast amount of information on the Web, there is a need for an intelligent technique that automatically evaluates Web pages of interest to the user. In this paper, a user's interest in a particular Web page is estimated from browsing behavior, without requiring additional time or effort from the user, and the approach can adapt to changes in the user's interests over time. A page ranking mechanism is proposed that takes the user's actions into account. For this, a Web browser has been developed to record user behavior. The Apriori algorithm is applied to the data collected by the browser to find the most frequent actions, and a calculated confidence value is used to compute the weight of each Web page. The higher the weight, the higher the rank.
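
A small sketch of mining frequent browsing actions and deriving a rule confidence that feeds a page weight; the actions, support threshold, and weighting formula are assumptions, not the paper's exact scheme.

```python
from itertools import combinations
from collections import Counter

# Each transaction = the set of actions a user performed on one page (illustrative).
sessions = [{"scroll", "copy", "bookmark"}, {"scroll", "copy"},
            {"scroll", "print"}, {"scroll", "copy", "print"}]

def frequent_itemsets(transactions, min_support=0.5, max_size=2):
    counts = Counter()
    for t in transactions:
        for k in range(1, max_size + 1):
            for items in combinations(sorted(t), k):
                counts[items] += 1
    n = len(transactions)
    return {items: c / n for items, c in counts.items() if c / n >= min_support}

freq = frequent_itemsets(sessions)
# Confidence of the rule {scroll} -> {copy}, used here as a page-weight ingredient.
conf = freq[("copy", "scroll")] / freq[("scroll",)]
page_weight = conf * len(sessions)      # illustrative weighting, not the paper's formula
print(freq, conf, page_weight)
```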

Deepika, Shilpa Juneja, Ashutosh Dixit
Performance Efficiency Assessment for Software Systems

Software quality is a complex term, and different researchers define it differently. A common point is that quality is required and indispensable: the software should not only meet customer requirements but exceed them, where customers include internal as well as external ones. Performance efficiency is one of the vital software quality characteristics; improving it has a positive effect on overall software quality. In this paper, we identify various sub-characteristics that can affect the performance efficiency of software and propose a performance efficiency model. We assess the performance efficiency of software systems using one of the multi-criteria decision making (MCDM) methods, namely the analytic hierarchy process (AHP). Results suggest that the proposed model is consistent and may be used for comparing software systems.
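
For readers unfamiliar with AHP, the sketch below derives a priority vector and consistency ratio from an assumed pairwise-comparison matrix over illustrative performance-efficiency sub-characteristics; the matrix values are not from the paper.

```python
import numpy as np

# Pairwise comparison of illustrative sub-characteristics (time behaviour,
# resource utilisation, capacity) on Saaty's 1-9 scale.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # priority vector of the criteria

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)          # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]           # random index (Saaty)
cr = ci / ri                                  # consistency ratio; < 0.10 is acceptable
print(weights, cr)
```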

Amandeep Kaur, P. S. Grover, Ashutosh Dixit
Impact of Programming Languages on Energy Consumption for Sorting Algorithms

In today's scenario, the world is moving rapidly toward global warming, and various efforts are being made to improve energy efficiency. One way to achieve this is to implement sorting algorithms in a programming language that consumes the least energy, which is the focus of this paper. The main goal of this study is to find a programming language that consumes the least energy and thus contributes to green computing. In our experiment, we implemented different sorting algorithms in different programming languages in order to find the most power-efficient language.

Tej Bahadur Chandra, Pushpak Verma, Anuj Kumar Dwivedi
Crawling Social Web with Cluster Coverage Sampling

A social network can be viewed as a huge container of nodes and the relationship edges between them. Covering every node of a social network in the analysis process is impractical due to the gigantic size of such networks. The solution is to take a sample by collecting a few nodes and the relationship status of the huge network; this sample is treated as a representative of the complete network, and analysis is carried out on it. How closely the results of the analysis resemble reality depends largely on the extent to which the sample resembles the actual network. Sampling therefore appears to be one of the major challenges in social network analysis. Most social networks are scale-free and can be seen as having overlapping clusters. This paper develops a robust social Web crawler that uses a sampling algorithm based on a clustered view of the social graph: a sample is a good representative of the network if it has a clustered view similar to that of the actual graph.

Atul Srivastava, Anuradha, Dimple Juneja Gupta
Efficient Management of Web Data by Applying Web Mining Pre-processing Methodologies

Web usage mining is the application of data mining techniques to extract interesting usage patterns from Web data, which provides information about Web users' behavior. Pre-processing of Web data is an essential step in Web usage mining: it converts raw data into the processed data required for the mining task. In this paper, the authors propose an effective pre-processing methodology that involves field extraction, selection of significant attributes, data selection, and data cleaning. The proposed methodology improves the quality of Web data by managing the missing values, noise, inconsistency, and incompleteness usually found in such data. The results of pre-processing are subsequently used for frequent pattern discovery.
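
A minimal sketch of the kind of field extraction and cleaning described above, assuming Common Log Format input and illustrative filtering rules (the authors' exact rules may differ).

```python
import re

LOG_LINE = re.compile(r'(\S+) \S+ \S+ \[(.*?)\] "(\S+) (\S+) \S+" (\d{3}) (\S+)')

def preprocess(raw_lines):
    """Field extraction + data cleaning for Common Log Format entries
    (illustrative of the pre-processing steps, not the authors' exact rules)."""
    records = []
    for line in raw_lines:
        m = LOG_LINE.match(line)
        if not m:
            continue                                  # drop malformed/incomplete entries
        ip, ts, method, url, status, size = m.groups()
        if status != "200" or method != "GET":
            continue                                  # keep only successful page requests
        if re.search(r"\.(gif|jpg|png|css|js)$", url):
            continue                                  # remove embedded-resource noise
        records.append({"ip": ip, "time": ts, "url": url,
                        "bytes": 0 if size == "-" else int(size)})
    return records

sample = ['10.0.0.1 - - [01/Jan/2015:10:00:01 +0000] "GET /index.html HTTP/1.1" 200 1043',
          '10.0.0.1 - - [01/Jan/2015:10:00:02 +0000] "GET /logo.png HTTP/1.1" 200 512',
          'corrupted entry']
print(preprocess(sample))
```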

Jaswinder Kaur, Kanwal Garg
A Soft Computing Approach to Identify Multiple Paths in a Network

This paper presents a modified technique to generate the probability table (routing table) used to select a path for an ant in the AntNet algorithm. It also uses the concept of a probe ant along with a clone ant; the probe ant identifies multiple paths, which can be stored at the destination. The overall purpose of this paper is to provide insight into ant-based algorithms and to identify multiple optimal paths.

Shalini Aggarwal, Pardeep Kumar, Shuchita Upadhyaya
An Efficient Focused Web Crawling Approach

The amount of data on the World Wide Web (WWW) and its dynamic nature make it very difficult to crawl the Web completely. Crawling only the relevant pages from this huge Web is a challenge for researchers. A focused crawler addresses this relevancy issue to a certain extent by concentrating on Web pages for a given topic or set of topics. This paper surveys various focused crawling techniques, which are based on different parameters, to identify their advantages and drawbacks for predicting the relevance of URLs. After analysing the existing work on focused crawlers, the paper formulates the problem and proposes a solution to improve the existing focused crawler.

Kompal Aggarwal
A New Log Kernel-Based Possibilistic Clustering

An unsupervised possibilistic clustering (UPC) algorithm using validity indexes has already been proposed. Although UPC works well, it does not show good accuracy for non-convex cluster structures. To overcome this limitation, we propose a kernel version of UPC with a conditionally positive-definite kernel function. It can detect clusters with different shapes and non-convex structures because it transforms the data into a high-dimensional space. Our proposed algorithm, the kernelized UPC-Log (UKPC-L), extends UPC by introducing the log kernel function, which is conditionally positive-definite. This makes the proposed algorithm perform better than UPC on non-convex cluster structures, as demonstrated by results obtained on several real and synthetic datasets. We compare the performance of UPC and the proposed algorithm in terms of misclassification, accuracy, and error rate to show its efficiency and accuracy.
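
A small sketch of the log kernel and its induced distance, using one common parameterization k(x, y) = −log(‖x − y‖^β + 1); the exponent β and the sample points are assumptions.

```python
import numpy as np

def log_kernel(x, y, beta=2.0):
    """Conditionally positive-definite log kernel: k(x, y) = -log(||x - y||^beta + 1)."""
    return -np.log(np.linalg.norm(np.asarray(x) - np.asarray(y)) ** beta + 1.0)

def kernel_distance_sq(x, y, beta=2.0):
    # Induced squared distance k(x,x) + k(y,y) - 2 k(x,y); k(x,x) = 0 for the log kernel.
    return log_kernel(x, x, beta) + log_kernel(y, y, beta) - 2.0 * log_kernel(x, y, beta)

print(kernel_distance_sq([0.0, 0.0], [3.0, 4.0]))   # grows only logarithmically with distance
```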

Meena Tushir, Jyotsna Nigam
Fuzzy c-Means Clustering Strategies: A Review of Distance Measures

In clustering, our goal is to find basic procedures that measure the degree of association between variables. Many clustering methods use distance measures to determine the similarity or dissimilarity between any pair of objects. The fuzzy c-means clustering algorithm is one of the most widely used clustering techniques and uses the Euclidean distance metric as its similarity measure. The choice of distance metric should depend on the data and on how their comparison is to be made. The main objective of this paper is to present a mathematical description of the different distance metrics that can be used with the clustering algorithm and to compare their performance using the number of iterations needed to compute the objective function, the misclassification of data points across clusters, and the error between the ideal and observed cluster centre locations.
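
A compact fuzzy c-means sketch with a pluggable distance function, illustrating how alternative metrics can be swapped in; note that the centre update is kept as the standard weighted mean, so the non-Euclidean variant shown is illustrative only.

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, dist=None, seed=0):
    """Fuzzy c-means with a pluggable distance (Euclidean by default)."""
    dist = dist or (lambda a, b: np.linalg.norm(a - b))
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.dirichlet(np.ones(c), size=n)                    # membership matrix (n x c)
    for _ in range(iters):
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]               # cluster centres
        D = np.array([[max(dist(x, v), 1e-12) for v in V] for x in X])
        U = 1.0 / (D ** (2 / (m - 1)))                       # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return V, U

X = np.vstack([np.random.default_rng(1).normal(0, .3, (20, 2)),
               np.random.default_rng(2).normal(3, .3, (20, 2))])
manhattan = lambda a, b: np.abs(a - b).sum()
print(fcm(X, dist=manhattan)[0])                             # centres under city-block distance
```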

Jyoti Arora, Kiran Khatter, Meena Tushir
Noise Reduction from ECG Signal Using Error Normalized Step Size Least Mean Square Algorithm (ENSS) with Wavelet Transform

This paper presents the reduction of baseline wander noise found in ECG signals. The reduction has been carried out using a wavelet-transform-inspired error normalized step size least mean square (ENSS-LMS) algorithm. We present a wavelet-decomposition-based filtering technique that minimizes computational complexity while maintaining good output signal quality. MATLAB simulation results validate the good noise rejection in the output signal in terms of the excess mean square error (EMSE) and misadjustment parameters.

Rachana Nagal, Pradeep Kumar, Poonam Bansal
A Novel Approach for Extracting Pertinent Keywords for Web Image Annotation Using Semantic Distance and Euclidean Distance

The World Wide Web today comprises billions of Web documents with information on varied topics presented through different types of media such as text, images, audio, and video. Along with textual information, the number of images on the WWW is therefore growing exponentially. Compared to text, annotating images by their semantics is more complicated because of the gap between user semantics and the low-level features used by computer systems. Moreover, Web pages generally contain content on multiple topics, and the context relevant to an image makes up only a small portion of the full text, which makes it challenging for image search engines to annotate and index Web images. Existing image annotation systems use contextual information from the page title, image src tag, alt tag, meta tags, and the text surrounding the image. Nowadays, some intelligent approaches perform page segmentation as a preprocessing step. This paper proposes a novel approach for annotating Web images: Web pages are divided into Web content blocks based on the visual structure of the page, and the textual data of the content blocks that are semantically closest to the blocks containing Web images are extracted. The relevant keywords from this textual information, along with the contextual information of the images, are used for annotation.

Payal Gulati, Manisha Yadav
Classification of Breast Tissue Density Patterns Using SVM-Based Hierarchical Classifier

In the present work, three-class breast tissue density classification has been carried out using an SVM-based hierarchical classifier. The performance of Laws' texture descriptors at various resolutions has been investigated for differentiating between fatty and dense tissues as well as between fatty-glandular and dense-glandular tissues. An overall classification accuracy of 88.2% has been achieved using the proposed SVM-based hierarchical classifier.
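
A sketch of the two-stage (hierarchical) SVM idea: stage 1 separates fatty from dense tissue, and stage 2 separates fatty-glandular from dense-glandular among the dense samples. The synthetic feature vectors below merely stand in for Laws' texture descriptors and are not the paper's data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Placeholder texture-feature vectors for three density classes:
# 0 = fatty, 1 = fatty-glandular, 2 = dense-glandular.
X = np.vstack([rng.normal(mu, 1.0, (30, 5)) for mu in (0.0, 2.5, 5.0)])
y = np.repeat([0, 1, 2], 30)

# Stage 1: fatty vs. dense (classes 1 and 2 merged).
stage1 = SVC(kernel="rbf").fit(X, (y > 0).astype(int))
# Stage 2: fatty-glandular vs. dense-glandular, trained only on the dense samples.
stage2 = SVC(kernel="rbf").fit(X[y > 0], y[y > 0])

def predict(x):
    x = x.reshape(1, -1)
    if stage1.predict(x)[0] == 0:
        return 0                      # fatty
    return int(stage2.predict(x)[0])  # 1 or 2

print([predict(X[i]) for i in (0, 35, 75)])
```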

Jitendra Virmani, Kriti, Shruti Thakur
Advances in EDM: A State of the Art

This paper discusses the potential of data mining in academics. Many aspects of educational institutional services can be analyzed using the various fields of data mining: improving students' performance and grades, increasing retention rates, maintaining attendance, giving prior information about examination eligibility based on attendance, evaluating results using marks, and predicting how many students have enrolled in which course. This paper addresses one such aspect, in which distinction is predicted from the marks scored by MCA students of Bharati Vidyapeeth Institute of Computer Applications and Management, affiliated to GGSIPU, using various machine learning algorithms; it has been observed that the Boost algorithm outperforms the other machine learning models in predicting distinction.

Manu Anand
Proposing Pattern Growth Methods for Frequent Pattern Mining on Account of Its Comparison Made with the Candidate Generation and Test Approach for a Given Data Set

Frequent pattern mining is a very important field for association rule mining, an important data mining technique meant to extract meaningful information from the large data sets accumulated by various data processing activities. Several algorithms have been proposed to solve the frequent pattern mining problem. In this paper, we mathematically compare the two most widely used approaches, candidate generation and test and pattern growth, to determine the better approach for a given data set. We conclude that pattern growth methods are more efficient in most cases for frequent pattern mining on account of their cache-conscious behavior. We implemented both algorithms on a chosen data set, and the experimental results show that the pattern growth approach is more efficient than the candidate generation and test approach.

Vaibhav Kant Singh
A Study on Initial Centroids Selection for Partitional Clustering Algorithms

Data mining tools and techniques allow an organization to make creative decisions and subsequently plan properly. Clustering is used to determine objects that are similar in characteristics and group them together. The K-means clustering method chooses random cluster centres (initial centroids), one for each cluster, and this is its major weakness: the performance and quality of K-means depend strongly on the initial guess of centres. Several modifications have been suggested in the clustering literature that augment K-means with a technique for selecting centroids. The first two authors of this paper have also developed three algorithms that, unlike K-means, do not generate the initial centres randomly and produce the same set of initial centroids for the same input data: sum of distance clustering (SODC), distance-based clustering algorithm (DBCA), and farthest distributed centroid clustering (FDCC). We present a brief survey of the algorithms available in the literature on modifying initial centroids for K-means and further describe the farthest distributed centroid clustering algorithm. The experimental results show that farthest distributed centroid clustering produces better quality clusters than the partitional clustering algorithm, the agglomerative hierarchical clustering algorithm, and the hierarchical partitioning clustering algorithm.
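
A deterministic farthest-point selection is one way to avoid random initial centroids; the sketch below illustrates that idea on assumed data and is not necessarily the exact FDCC procedure.

```python
import numpy as np

def farthest_point_centroids(X, k):
    """Deterministic initial centroids: start from the point closest to the data mean,
    then repeatedly pick the point farthest from all centroids chosen so far."""
    X = np.asarray(X, dtype=float)
    first = np.argmin(np.linalg.norm(X - X.mean(axis=0), axis=1))
    centroids = [X[first]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[np.argmax(d)])
    return np.array(centroids)

X = np.array([[1, 1], [1.2, 0.9], [8, 8], [7.9, 8.2], [4, 7.5]])
print(farthest_point_centroids(X, 3))   # same input always yields the same centroids
```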

Mahesh Motwani, Neeti Arora, Amit Gupta
A Novel Rare Itemset Mining Algorithm Based on Recursive Elimination

Pattern mining in large databases is a fundamental and non-trivial task in data mining. Most current research focuses on frequently occurring patterns, even though less frequently or rarely occurring patterns provide useful information in many real-world applications (e.g., medical diagnosis, genetics). In this paper, we propose a novel algorithm for mining rare itemsets based on the recursive elimination (RELIM) method. Simulation results indicate that our approach is more efficient than the existing solution in the time taken to mine rare itemsets.

Mohak Kataria, C. Oswald, B. Sivaselvan
Computation of Various Entropy Measures for Anticipating Bugs in Open-Source Software

Bugs can be introduced at any phase of the software development process. They are recorded in repositories and arise from the frequent changes made to the source code of software to meet the requirements of organizations or users. Open-source software is frequently updated and its source code changes continuously, so the code becomes complicated and bugs appear frequently. The bug repair process includes the addition of new features, enhancement of existing features, fault fixes, and other maintenance tasks. Entropy measures uncertainty and is thus helpful in studying the code change process. In this paper, bugs reported in various subcomponents of the Bugzilla open-source software are considered; the changes in each component are quantified in terms of entropy using the Renyi, Havrda–Charvat, and Arimoto entropy measures. A linear regression model built in SPSS is applied to predict the expected bugs in the Bugzilla subcomponents. Performance has been measured using goodness-of-fit curves and R-square residuals.
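
For reference, one common parameterization of the three entropies is sketched below (the paper may use a slightly different normalization); taking p_i as the share of code changes touching file i in a period is an assumption for the example.

```python
import numpy as np

def renyi(p, a):
    # Renyi entropy of order a (a > 0, a != 1): (1/(1-a)) * log(sum p_i^a)
    p = np.asarray(p, dtype=float)
    return np.log(np.sum(p ** a)) / (1.0 - a)

def havrda_charvat(p, a):
    # Havrda-Charvat structural a-entropy (one common parameterization).
    p = np.asarray(p, dtype=float)
    return (np.sum(p ** a) - 1.0) / (2.0 ** (1.0 - a) - 1.0)

def arimoto(p, a):
    # Arimoto entropy of order a (one common parameterization); -> Shannon as a -> 1.
    p = np.asarray(p, dtype=float)
    return (a / (1.0 - a)) * (1.0 - np.sum(p ** (1.0 / a)) ** a)

# p_i = share of code changes touching file i in a period (illustrative values).
p = [0.5, 0.3, 0.2]
print(renyi(p, 0.5), havrda_charvat(p, 0.5), arimoto(p, 0.5))
```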

H. D. Arora, Talat Parveen
Design of Cyber Warfare Testbed

Innovations that perform well in a predictable, controlled environment may be much less effective, dependable, or manageable in a production environment. This is especially true for cyber systems, where new malware detection technology appears every day and zero-day vulnerabilities keep emerging. In line with NCSP-2013, the authors propose a realistic cyber warfare testbed using the XenServer hypervisor, commodity servers, and open-source tools. The testbed supports cyber-attack and defence scenarios, malware containment, exercise logs, and analysis for developing tactics and strategies. Further, the authors describe ways and means to train cyber warriors, honing their skills and maturing attack and defence technologies on this testbed.

Yogesh Chandra, Pallaw Kumar Mishra
Performance Evaluation of Features Extracted from DWT Domain

The key task of a steganalyzer is to identify whether a carrier is carrying hidden information. Blind steganalysis can be treated as a two-class pattern recognition problem. In this paper, we extract two sets of feature vectors from the discrete wavelet transform domain of images to improve the performance of a steganalyzer. The extracted features are histogram features with three bin sizes (5, 10, and 15) and Markov features with five threshold values (2, 3, 4, 5, and 6). The performance of the two feature sets is compared with each other and with the existing Farid discrete wavelet transform features in terms of classification accuracy using a back-propagation neural network classifier. Three steganography algorithms, Outguess, nsF5, and PQ, are used with various embedding capacities.

Manisha Saini, Rita Chhikara
Control Flow Graph Matching for Detecting Obfuscated Programs

Malicious programs such as viruses, worms, Trojan horses, and backdoors infect host computers by taking advantage of software flaws and thereby introducing secret functionality. The authors of these malicious programs attempt to find new methods to evade detection engines. They use different obfuscation techniques, such as dead code insertion and instruction substitution, to make the malicious programs more complex. Obfuscation techniques originally used by software developers to protect their software from piracy are now misused by malware authors. This paper aims to detect such obfuscated programs or malware using control flow graph (CFG) matching based on the VF2 algorithm. If the original CFG of an executable is found to be isomorphic to a subgraph of the obfuscated CFG under examination, the executable can be classified as obfuscated.
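
A minimal sketch of the CFG subgraph-isomorphism check using the VF2 matcher available in networkx; the toy graphs and the inserted dead-code blocks are illustrative assumptions.

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Toy control flow graphs: nodes are basic blocks, edges are possible control transfers.
original = nx.DiGraph([("entry", "cond"), ("cond", "then"), ("cond", "else"),
                       ("then", "exit"), ("else", "exit")])

# Obfuscated variant: same structure plus dead-code blocks inserted by the obfuscator.
obfuscated = nx.DiGraph(list(original.edges()) +
                        [("entry", "dead1"), ("dead1", "dead2")])

# VF2: is the original CFG isomorphic to a subgraph of the obfuscated CFG?
matcher = isomorphism.DiGraphMatcher(obfuscated, original)
print(matcher.subgraph_is_isomorphic())      # True -> flag as likely obfuscated copy
```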

Chandan Kumar Behera, Genius Sanjog, D. Lalitha Bhaskari
A Novel Framework for Predicting Performance of Keyword Queries Over Database

In the fast-growing information era, business and commercial RDBMSs provide access to huge amounts of data in the form of distributed databases. This data can be accessed easily through keyword queries, which do not require knowledge of structured query languages; this matters because most end users do not know SQL yet still need efficient results. Keyword query interfaces (KQIs) make this possible, but they often suffer from low ranking quality, i.e., low precision or recall values, as shown in recent benchmarks. Because keyword-based search is ambiguous, query results need improvement, and the effectiveness of a keyword query should be judged by its results. The problem attracts further attention because growing data volumes add complication, and queries must be interpreted appropriately to produce efficient results. Commercial databases must support an efficient approach to deal with issues such as the low precision caused by ambiguous interpretation of queries. In this paper, we therefore rank results according to a similarity score based on a mathematical model, in order to produce a proper ranking and efficient results for keyword-based queries.

Mujaffar Husain, Udai Shanker
Predicting and Accessing Security Features into Component-Based Software Development: A Critical Survey

Software development communities have made a true venture into software development through the concepts of component-based software and commercial off-the-shelf (COTS) components. The majority of present software applications have software components as their basic elements, and component-based software development (CBSD) has been successful in building applications and systems. However, the security of the software components in CBS is still not properly addressed. Developing secure software is the responsibility of all the stakeholders involved in the component-based software development model. The main challenge is to assess how much security has been achieved, and how security can be predicted and evaluated at an early stage of software development. Software security has a very constructive impact on software productivity, maintainability, cost, and quality; therefore, the earlier we introduce and evaluate security in component-based software, the more productivity and quality we can achieve. In this paper, efforts are made to provide suitable guiding principles to software engineers for the development of secure component-based software products. The paper also gives an overview of requirement specification and analyzes software architectures and design for developing secure component-based software products.

Shambhu Kr. Jha, R. K. Mishra
Traditional Software Reliability Models Are Based on Brute Force and Reject Hoare’s Rule

This research analyses the causes of the inaccurate estimations and ineffective assessments produced by the various traditional software reliability growth models. It attempts to expand the logical foundations of software reliability by amalgamating techniques first applied in geometry and other branches of mathematics and later in computer programming. The paper further proposes a framework for a generic reliability growth model that can be applied during all phases of software development for accurate runtime control of self-learning, intelligent, service-oriented software systems. We propose a new technique for employing runtime code specifications for software reliability. The paper aims to establish that traditional models fail to ensure reliable software operation because they employ brute-force mechanisms; instead, reliability should be embedded into software operation using a mechanism based on formal models such as Hoare's rule.

Ritika Wason, A. K. Soni, M. Qasim Rafiq
Object-Oriented Metrics for Defect Prediction

Today, defect prediction is an important part of the software industry's effort to meet product deadlines. Defect prediction techniques help organizations use their resources effectively, which results in lower cost and time requirements. Various metrics are used for defect prediction in within-company (WC) and cross-company (CC) projects. In this paper, we use object-oriented metrics to build a defect prediction model for within-company and cross-company projects, based on a feed-forward neural network (FFNN). The proposed model was tested on four datasets for within-company defect prediction (WCDP) and cross-company defect prediction (CCDP) and gives good results for both compared to previous studies.
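
A small sketch of a feed-forward network for defect prediction using scikit-learn's MLPClassifier; the synthetic metric values and the train/test split below merely stand in for real within-company or cross-company datasets.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-in for object-oriented metrics (e.g. WMC, DIT, NOC, CBO, RFC, LCOM).
X = rng.normal(size=(200, 6))
y = (X[:, 0] + 0.8 * X[:, 3] + rng.normal(0, 0.5, 200) > 0.5).astype(int)  # 1 = defective

scaler = StandardScaler().fit(X[:150])
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(scaler.transform(X[:150]), y[:150])               # "within-company" training split
print("accuracy:", clf.score(scaler.transform(X[150:]), y[150:]))
```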

Satwinder Singh, Rozy Singla
A Comparative Analysis of Static and Dynamic Java Bytecode Watermarking Algorithms

Software piracy is one of the most serious issues confronting the software industry, causing huge monetary losses every year to software-developing organizations; the worldwide revenue loss due to piracy was estimated to exceed $62.7 billion in 2013. Software watermarking discourages piracy by serving as proof of purchase or origin and also helps trace the source of unlawful redistribution of software copies. In this paper, we compare and analyze static and dynamic Java bytecode watermarking algorithms. First, each Java jar file is watermarked using the watermarking algorithms; then distortive attacks are applied to each watermarked program through obfuscation and optimization. After studying the results obtained, we found that dynamic watermarking algorithms are slightly better than static watermarking algorithms.

Krishan Kumar, Prabhpreet Kaur
Software Architecture Evaluation in Agile Environment

The function and significance of mission-critical software-intensive systems have gained substantial recognition, and software architecture has become a field in its own right as system software grows ever more intricate. Agile software development accommodates evolving requirements instead of adhering to a fixed plan. In this paper, an effort has been made to identify parameters for software architecture evaluation and then to evaluate software architecture in an agile environment based on the determined parameters.

Chandni Ahuja, Parminder Kaur, Hardeep Singh
Mutation Testing-Based Test Suite Reduction Inspired from Warshall’s Algorithm

This paper presents an approach that provides a polynomial-time solution to the problem of test suite reduction, or test case selection. The proposed algorithm uses dynamic programming as an optimisation technique with memoization, conceptually similar to the technique used in the Floyd–Warshall algorithm for the all-pairs shortest path problem. The approach shows encouraging results on the TCAS code in C from the Software-artifact Infrastructure Repository (SIR).
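
The paper's algorithm is a dynamic-programming formulation; purely as a rough illustration of the underlying reduction problem (not the authors' method), the sketch below performs a simple greedy coverage-preserving reduction over an assumed coverage table.

```python
def reduce_suite(coverage):
    """Greedy coverage-preserving reduction: keep picking the test case that
    covers the most still-uncovered requirements (illustration only)."""
    remaining = set().union(*coverage.values())
    selected = []
    while remaining:
        best = max(coverage, key=lambda t: len(coverage[t] & remaining))
        if not coverage[best] & remaining:
            break
        selected.append(best)
        remaining -= coverage[best]
    return selected

# Requirements (or paths) covered by each test case -- illustrative data.
coverage = {"t1": {"r1", "r2"}, "t2": {"r2", "r3"}, "t3": {"r1", "r2", "r3"}, "t4": {"r4"}}
print(reduce_suite(coverage))      # e.g. ['t3', 't4'] preserves everything t1..t4 covered
```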

Nishtha Jatana, Bharti Suri, Prateek Kumar
Software Component Retrieval Using Rough Sets

Software reusability is one of the important mechanisms needed to maintain quality and productivity. Even when candidate components are available in a repository for reuse, software engineers often prefer to develop the system from scratch; this fear of reuse arises because developers are not sure whether a candidate component will work. In this paper, the focus is first on retrieving the desired components based on rule generation using rough sets; if a component is not found, it can be developed and stored in the repository. Secondly, the uncertainty in identifying the desired component is addressed, and an approach to model such uncertainty is proposed. The rough set exploration system (RSES) tool is used to simulate the results on certain behaviors of the banking domain.

Salman Abdul Moiz
Search-Based Secure Software Testing: A Survey

In today's era, software developers build large products that address non-functional requirements but often fail to provide security. In search-based secure software testing (SBST), metaheuristic search is used to generate and evaluate test cases with the help of fitness functions; security testing cannot be meaningful without considering the vulnerabilities in the software. The overall objective of this survey is to study various vulnerabilities and various metaheuristic techniques. The results highlight the numerous fitness functions that can lead to security, and tools for various vulnerability scans are also mentioned. Research questions and the corresponding solutions that illuminate the scenario are provided in this survey.

Manju Khari, Vaishali, Manoj Kumar
Limitations of Function Point Analysis in Multimedia Software/Application Estimation

To date, Function Point Analysis (FPA) has been, and remains, the most widely accepted size estimation method in the software sizing community. In developing a software system, project cost plays a very important role before development begins, in terms of both size and effort. Allan J. Albrecht developed FPA in 1979, and, with some variations, it has been well accepted by academicians and practitioners (Gencel and Demirors in ACM Trans Softw Eng Methodol 17(3):15.1–15.36, 2008) [1]. For any software development project, estimation of its size, completion time, required effort, and finally cost is critically important; estimation assists in fixing exact targets for project completion. In the software industry, the main concern for developers is size estimation and its measurement. The old estimation technique, lines of code, cannot meet the size estimation requirements of multi-language programming skills and of applications whose size keeps growing during development; introducing function points resolves these difficulties to some degree. Gencel and Demirors proposed an estimation method to analyze software effort based on function points in order to obtain the effort required to complete a software project. They concluded that the proposed method helps to estimate software effort more precisely without regard to the languages or development environment. Using function points, a project manager can track project progress, control cost, and ensure quality accurately (Zheng et al. in estimation of software projects effort based on function point, IEEE, 2009) [2]. However, multimedia technology has provided a different path for delivering instruction: two-way multimedia training is a process rather than a technology, through which interested users benefit and gain new learning capabilities. Multimedia software developers should use suitable methods for designing packages that are both capable and user-friendly. FPA has its own limitations and may not estimate the size of multimedia software projects: the characteristics and specifications of multimedia software applications do not fall under FPA specifications, and using FPA for multimedia software estimation may lead to wrong estimates and incomplete tasks, which will end up annoying all the stakeholders of the project. This research paper attempts to identify the constraints of function point analysis by highlighting the critical issues (Ferchichi et al. in design system engineering of software products implementation of a software estimation model, IMACS-2006, Beijing, China, 2006) [3].

Sushil Kumar, Ravi Rastogi, Rajiv Nag
Maintainability Analysis of Component-Based Software Architecture

The analysis of the maintainability of a component-based software system (CBSS) architecture is a critical issue, as it contributes substantially to the overall quality, risks, and economics of the software product life cycle. Architectural-style features of CBSS that characterize maintainability are identified and represented as an architecture-style-maintainability digraph: the nodes represent maintainability scenarios, and the edges represent the degree of influence among the scenarios. A detailed procedure for the maintainability analysis of CBSS is suggested through a maintainability function. The scenario maintainability index (SMI) measures the maintainability of a system; a lower SMI value implies better maintainability. The maintainability analysis procedure described in the paper helps the key stakeholders of a CBSS manage, control, and improve the maintainability of a system by appropriately incorporating maintainability scenarios in heterogeneous architectural styles.

Nitin Upadhyay
An Assessment of Vulnerable Detection Source Code Tools

C and C++ are among the most commonly used programming languages for software development and are taught as course content in computer applications programmes at many institutions. As software development proceeds through the phases of the system development life cycle, the design and coding phases have the greatest impact on the remaining phases, so every software development effort should have a good user interface and database design, including well-written source code that makes the user interface functional. This paper assesses tools for detecting vulnerabilities in such source code.

Anoop Kumar Verma, Aman Kumar Sharma
Devising a New Method for Economic Dispatch Solution and Making Use of Soft Computing Techniques to Calculate Loss Function

This paper describes a new method designed for the economic dispatch problem in power systems. The method demonstrates a new technique for calculating losses in the economic dispatch problem and can be used to generate solutions online by applying soft computing methods to determine the loss function. A new way of finding the loss function using two new parameters is described: fuzzy sets and a genetic algorithm are used to find a penalty term based on the values of these two parameters. All the calculations required to accommodate the loss function in the economic dispatch solution are presented, along with the algorithm of the proposed system.

Ravindra Kumar Chahar, Aasha Chuahan
Trusted Operating System-Based Model-Driven Development of Secure Web Applications

This paper brings security engineering into object-oriented, model-driven software development for real-life Web applications, making use of mining patterns in Web applications. It proposes a unified modeling language-based secure software maintenance procedure, which is applied to maintaining a large-scale software product and real-life product-line products. After modeling, the Web application can be implemented and run on SPF-based trusted operating systems. Reverse engineering of old software focuses on understanding legacy program code without proper software documentation; the extracted design information was used to implement a new version of the software written in C++. For secure design of Web applications, this paper proposes a system security performance model for a trusted operating system, and for the re-engineering and re-implementation of Web applications it proposes a model-driven round-trip engineering approach.

Nitish Pathak, Girish Sharma, B. M. Singh
Navigational Complexity Metrics of a Website

Navigation is the ease with which a user traverses a website while searching for information: the smoother the navigation, the better the chances of finding the desired piece of information. Navigation can therefore be considered an important parameter contributing to the usability of a website. Several factors increase the navigational complexity of a website, the important ones being website structural complexity, broken links, path length, and maximum depth. In this study, the navigational complexity of seven websites is evaluated and compared on these parameters.

Divyam Pandey, Renuka Nagpal, Deepti Mehrotra
Evaluation and Comparison of Security Mechanisms In-Place in Various Web Server Systems

This paper presents a novel approach to study, identify, and evaluate the security mechanisms in-place across various Web server platforms. These security mechanisms are collected and compiled from various sources. A set of security checks are framed to identify the implementation of these security mechanisms in diverse Web server platforms. The paper is concluded with a case study which implements this approach.

Syed Mutahar Aaqib, Lalitsen Sharma
Component-Based Quality Prediction via Component Reliability Using Optimal Fuzzy Classifier and Evolutionary Algorithm

To meet continually rising requirements, software systems have become more complex, drawing support from many varied areas. In software reliability engineering, many techniques are available to ensure reliability and quality, and prediction techniques play an important role in design models. For component-based software systems, existing reliability prediction approaches suffer from drawbacks that restrict their applicability and accuracy. Here, we compute the application reliability estimated from the reliability of the individual components and their interconnection mechanisms. In our method, the quality of the software is predicted in terms of reliability metrics: after component-based feature extraction, reliability is calculated by an optimal fuzzy classifier (OFC), whose fuzzy rules are optimized by evolutionary algorithms. The implementation is done in Java, and the performance is analyzed with various metrics.

Kavita Sheoran, Om Prakash Sangwan
Applying Statistical Usage Testing Along with White Box Testing Techniques

The cleanroom software engineering (CSE) reference model is a rigorous incremental model that focuses on defect prevention using sound mathematical principles combined with statistical usage testing (Linger, Trammell, in cleanroom software engineering reference model, 1996, [1]). Similar to the concept of hardware cleanrooms, this model is used for the development of zero-defect and extremely reliable software (Mills, Poore, in Quality Progress, 1988, [2]). Statistical usage testing (SUT) is the testing technique defined as part of the CSE model [1]. The technique works by building usage models and assigning usage probabilities (Runeson, Wohlin in IEEE Trans Softw Eng 20(6): 494–499, 1994, [3]); statistical tests are then carried out on the usage models [3]. CSE relies on SUT for testing, and unit testing is not defined in the CSE process (Hausler et al. in IBM Syst J 33(1): 89, 109, 1994, [4]); however, additional testing can be carried out along with SUT depending on the need (Prowell et al. in cleanroom software engineering technology and process, 1999, [5]). This paper presents the usefulness and advantages of applying SUT along with various white box testing techniques: data flow testing, control flow testing, and mutation testing.

Sunil Kumar Khatri, Kamaldeep Kaur, Rattan Datta
A Review on Application Security Management Using Web Application Security Standards

Software influences almost every aspect of modern society, and developing quality software systems has always been a great challenge for software developers. Non-functional requirements are frequently disregarded while developers concentrate on the functionality of the system, and several systems have failed because of this neglect. Recently, Web application security has become the primary topic of discussion for security specialists, as attacks on applications are constantly rising and posing new threats to organizations, and several patterns have emerged in the attacks launched against Web applications. International security standards are implemented to minimize security failures and to mitigate their consequences, because applications have been vulnerable for as long as they have existed. To study the impact of non-functional requirements on requirements evolution, we propose a very important non-functional requirement, Application Security Management, which defines the requirements for security in all applications that use the Web application security standards (WASS).

A. Rakesh Phanindra, V. B. Narasimha, Ch. V. PhaniKrishna
A Review of Software Testing Approaches in Object-Oriented and Aspect-Oriented Systems

Software testing is considered a very important phase in the development of any software, so it is crucial to apply appropriate testing techniques in every software development life cycle. Object-oriented software development has been in use for a while now; the aspect-oriented approach is comparatively new and builds on the basics of the object-oriented approach, while also aiming to provide modularity, higher cohesion, and separation of concerns. In this paper, we review the various testing techniques that have been developed for both object-oriented and aspect-oriented systems.

Vasundhara Bhatia, Abhishek Singhal, Abhay Bansal, Neha Prabhakar
A Literature Survey of Applications of Meta-heuristic Techniques in Software Testing

Software testing is the process of testing the entire software with the objective of finding defects and judging the quality of the developed system; the performance of the system degrades if bugs are present. Various meta-heuristic techniques are used in software testing for its automation and for optimizing the test data. This survey paper reviews the various studies that have applied meta-heuristic techniques to software testing.

Neha Prabhakar, Abhishek Singhal, Abhay Bansal, Vasundhara Bhatia
A Review of Test Case Prioritization and Optimization Techniques

Software testing is a very important and crucial phase of the software development life cycle: to develop good quality software, its effectiveness has to be tested. Test cases and test suites are prepared for testing, and testing should be completed in minimum time, which is why test case prioritization and optimization techniques are required. The main aim of test case prioritization is to test software in minimum time and with maximum efficiency; to develop a new or better technique, existing techniques should be known. This paper presents a review of test case prioritization and optimization techniques and analyses the literature available on the topic.

Pavi Saraswat, Abhishek Singhal, Abhay Bansal
Software Development Activities Metric to Improve Maintainability of Application Software

Maintenance is a very important activity of the software development life cycle. The largest share of the design and development cost of a software system goes into maintenance to incorporate changes in functional requirements; increased functional requirements lead to system configuration changes, which may further increase the cost of development. Every software company wants to design and develop software that is easy to maintain at lower cost. To meet this objective it is better to design and develop more maintainable software, and this paper proposes a software development activities metric that will help software developers build easy-to-maintain application software.

Adesh Kumar Pandey, C. P. Agrawal
Label Count Algorithm for Web Crawler-Label Count

A Web crawler is a search tool or program that scans the World Wide Web in an automated manner. Through the crawler's GUI, the user specifies a URL, and all the related links are retrieved and added to the crawl frontier, the list of URLs to visit; links are then taken from the crawl frontier, checked, and retrieved in turn. Crawling algorithms are vital when it comes to selecting pages that meet a user's requirements. The present paper analyses the Web crawler and its working and proposes a new algorithm, named the label count algorithm, as a hybrid of existing algorithms. The algorithm labels frequently visited sites and selects the best results according to the highest occurrence of the query keywords in a Web page.
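
A minimal sketch of the label-count idea: keyword occurrences are scored per page and boosted by a visit-count label; the weighting scheme and data are assumptions rather than the paper's exact algorithm.

```python
from collections import Counter
import re

def label_count_score(pages, query_terms, visit_counts):
    """Score pages by keyword occurrence, boosted by how often the page was visited
    (the 'label'); the weighting below is an illustrative assumption."""
    scores = {}
    for url, text in pages.items():
        words = Counter(re.findall(r"\w+", text.lower()))
        keyword_hits = sum(words[t.lower()] for t in query_terms)
        scores[url] = keyword_hits * (1 + visit_counts.get(url, 0))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

pages = {"a.html": "web crawler design and crawler frontier",
         "b.html": "cooking recipes", "c.html": "crawler tutorial"}
print(label_count_score(pages, ["crawler"], {"c.html": 5, "a.html": 1}))
```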

Laxmi Ahuja
Vulnerability Discovery in Open- and Closed-Source Software: A New Paradigm

To assist developers in the software development process, researchers have developed vulnerability discovery models that help in discovering vulnerabilities over time. These models facilitate patch management while assisting in optimal resource allocation and in assessing the associated security risks. Among the existing vulnerability discovery models, the Alhazmi–Malaiya logistic model is considered the best-fitted model on all kinds of datasets owing to its ability to capture the S-shaped nature of the curves; however, it has the limitation of depending on the shape of the dataset. We have proposed a new model that is shape-independent and achieves better goodness of fit than the earlier VDM. The proposed model and the Alhazmi–Malaiya logistic model have been evaluated on three real-life datasets each for open- and closed-source software, and the results are presented toward the end of the paper.
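
For illustration, a generic S-shaped (logistic) vulnerability discovery curve can be fitted to cumulative counts; the parameterization below is a common logistic form in the spirit of the Alhazmi–Malaiya model, not its exact published form, and the data are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_vdm(t, A, B, C):
    # Generic S-shaped cumulative-vulnerabilities curve (B = total vulnerabilities).
    return B / (1.0 + C * np.exp(-A * t))

# Illustrative cumulative vulnerability counts per month (not a real dataset).
t = np.arange(1, 13)
omega = np.array([2, 3, 5, 9, 15, 24, 33, 41, 46, 49, 51, 52])

params, _ = curve_fit(logistic_vdm, t, omega, p0=(0.5, 60, 50), maxfev=10000)
pred = logistic_vdm(t, *params)
sse = np.sum((omega - pred) ** 2)
r2 = 1 - sse / np.sum((omega - omega.mean()) ** 2)       # goodness of fit
print(params, r2)
```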

Ruchi Sharma, R. K. Singh
Complexity Assessment for Autonomic Systems by Using Neuro-Fuzzy Approach

IT companies want to reach the highest level in developing the best product at a balanced cost. With this development, however, system and network complexity are increasing, leading toward unmanageable systems. There is therefore a strong need for self-managed systems that manage their internal activities with little or no human intervention; such systems are called autonomic systems and are enabled with self-* abilities. However, autonomic systems have a downside: implementing autonomic capabilities increases the overall complexity of the system. In the present paper, the authors extend their earlier approach by using a neuro-fuzzy technique to predict the complexity of systems with autonomic features. The results obtained are better than those of the previous work, where the authors applied a fuzzy logic-based approach to the same problem. The proposed work may be used to assess the maintenance level required for autonomic systems, as a higher complexity index due to autonomic features will lead toward low maintenance cost.

Pooja Dehraj, Arun Sharma
Proposal for Measurement of Agent-Based Systems

The software industry is always striving for new technologies to improve the productivity of software and meet the requirement of improving the quality, flexibility, and scalability of systems. In the field of software engineering, the software development paradigm is shifting towards ever-increasing flexibility and quality of software products. A measure of the quality of software is therefore essential. Measurement methods must be changed to accommodate the new paradigm as traditional measurement methods are no longer suitable. This paper discusses the significant measurement factors as they relate to agent-based systems, and proposes some metrics suitable for use in agent-based systems.

Sangeeta Arora, P. Sasikala
Optimal Software Warranty Under Fuzzy Environment

Prolonged testing ensures a higher reliability level of the software, but at the same time it adds to the cost of production. Moreover, due to stiff competition in the market, developers cannot spend too much time on testing, so they offer a warranty with the software to attract customers and gain their faith in the product. Servicing during the warranty period, however, incurs high costs at the developer's end, so determining the optimal warranty period at the time of software release is an imperative concern for a software firm: it is a trade-off between providing maximum warranty at minimum cost. One of the prime assumptions in existing cost models in software reliability is that the cost coefficients are static and deterministic, but in reality these constants depend on various non-deterministic factors, leading to uncertainty in their exact computation. Using a fuzzy approach in the cost model overcomes this uncertainty in obtaining the optimal cost value. In this paper, we address this issue and propose a generalized approach to determine the optimal warranty period of software under a fuzzy environment, where the testing and operational phases are governed by different distribution functions. The proposed model is validated with a numerical example.

A. K. Shrivastava, Ruchi Sharma
Automation Framework for Test Script Generation for Android Mobile

System testing involves activities such as requirement analysis, test case design, test case writing, test script development, test execution, and test report preparation. Automating all these activities involves many challenges, such as understanding scenarios, achieving test coverage, determining pass/fail criteria, scheduling tests, and documenting results. In this paper, a method is proposed to automate both test case and test script generation from sequence diagram-based scenarios. A tool called Virtual Test Engineer has been developed to convert UML sequence diagrams into Android APKs for testing Android mobile applications. A case study illustrates the method, and its effectiveness is studied and compared with other methods through detailed experimentation.

R. Anbunathan, Anirban Basu
Optimizing the Defect Prioritization in Enterprise Application Integration

Defect prioritization is one of the key decisions that impact the quality, cost, and schedule of any software development project. Multiple attributes of a defect drive the prioritization decision, yet in practice defects are generally prioritized subjectively based on only a few attributes, such as severity or business priority. This assignment of defect priority ignores other critical attributes of the defect. A framework is needed that collectively takes the critical attributes of defects into consideration and generates an optimal defect prioritization strategy. In this paper, critical attributes of defects are considered and a new framework based on a genetic algorithm for generating an optimized defect prioritization is proposed. The results from the experimental execution of the algorithm show the effectiveness of the proposed framework and an improvement of 40% in the overall quality of the studied projects.
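A minimal sketch of how a genetic algorithm can search over defect orderings; the defect attributes, weights, and fitness function below are illustrative assumptions, not the authors' exact formulation.

# Minimal GA sketch for defect prioritization (illustrative only).
import random

defects = [  # (severity, business_priority, fix_effort) on a 1-5 scale (assumed attributes)
    (5, 4, 2), (3, 5, 1), (2, 2, 3), (4, 3, 5), (1, 4, 2), (5, 5, 4),
]
WEIGHTS = (0.5, 0.4, -0.1)  # assumed attribute weighting

def fitness(order):
    # Reward orderings that place high-value defects earlier in the queue.
    n = len(order)
    score = 0.0
    for pos, idx in enumerate(order):
        value = sum(w * a for w, a in zip(WEIGHTS, defects[idx]))
        score += value * (n - pos)  # earlier positions weigh more
    return score

def crossover(p1, p2):
    # Order crossover (OX): keep a slice of p1, fill the rest in p2's order.
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [g for g in p2 if g not in child]
    for i in range(len(child)):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

def mutate(order, rate=0.2):
    # Occasionally swap two positions to keep diversity.
    if random.random() < rate:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

population = [random.sample(range(len(defects)), len(defects)) for _ in range(30)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [mutate(crossover(*random.sample(parents, 2))) for _ in range(20)]

print("Suggested defect order:", max(population, key=fitness))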

Viral Gupta, Deepak Kumar, P. K. Kapur
Desktop Virtualization—Desktop as a Service and Formulation of TCO with Return on Investment

Cloud computing provides organizations with multiple deployment options such as Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Desktop as a Service (DaaS) is an upcoming deployment model with a multi-tenancy architecture in which the service is available on a subscription basis. DaaS authorizes users to access their applications anytime through a virtualized desktop, and virtualized desktops hosted on the cloud provide the flexibility to use any device. With no software for IT to maintain, Desktop as a Service is straightforward to buy and easy to manage. This paper describes the need for Desktop as a Service, its benefits, the barriers to widespread DaaS adoption, a comparison of industry DaaS leaders, and its application in practice. A general model and framework have also been developed to calculate the total cost of ownership and return on investment, to help organizations make decisions and hold informed discussions about adopting DaaS.
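A minimal sketch of the kind of TCO/ROI comparison such a framework supports; all cost categories and figures below are hypothetical placeholders, not the paper's model.

# Illustrative TCO / ROI comparison for DaaS vs. traditional desktops.
def tco(capex, annual_opex, years):
    # Total cost of ownership = upfront cost plus operating cost over the period.
    return capex + annual_opex * years

traditional = tco(capex=500.0, annual_opex=200.0, years=3)   # per-desktop figures (assumed)
daas        = tco(capex=0.0,   annual_opex=350.0, years=3)   # subscription-only (assumed)

savings = traditional - daas
roi = savings / daas  # return on investment relative to the DaaS spend
print(f"TCO traditional: {traditional}, TCO DaaS: {daas}, ROI: {roi:.2%}")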

Nitin Chawla, Deepak Kumar
An Assessment of Some Entropy Measures in Predicting Bugs of Open-Source Software

In software, source code changes are expected to occur: to meet the enormous requirements of users, source code is frequently modified. The maintenance task becomes highly complicated if changes due to bug repair, enhancement, and the addition of new features are not recorded carefully. In this paper, a concurrent versions system (CVS) repository (http://bugzilla.mozilla.org) is used for recording bugs, which are collected from several subcomponents of Mozilla open-source software. As entropy is helpful in studying the code change process, various entropies, namely the Shannon, Renyi, and Tsallis entropies, have been evaluated using these observed bugs. By applying the simple linear regression (SLR) technique, future bugs are predicted from the current year's entropy measures and the observed bugs. Performance has been measured using various R2 statistics, and ANOVA and the Tukey test have been applied to statistically validate the entropy measures.
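A minimal sketch, assuming per-file change shares and invented bug counts, of computing the three entropy measures and fitting a simple linear regression for bug prediction; the data and parameter choices are illustrative, not the paper's.

import numpy as np

def shannon(p):
    return float(-np.sum(p * np.log2(p)))

def renyi(p, alpha=0.5):
    return float(np.log2(np.sum(p ** alpha)) / (1.0 - alpha))

def tsallis(p, q=0.5):
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

# Each row: fraction of source-code changes falling in each file of a subcomponent (assumed data)
change_shares = [
    np.array([0.5, 0.3, 0.2]),
    np.array([0.7, 0.2, 0.1]),
    np.array([0.4, 0.4, 0.2]),
    np.array([0.9, 0.05, 0.05]),
]
observed_bugs = np.array([12.0, 9.0, 14.0, 5.0])  # hypothetical bug counts

for name, fn in [("Shannon", shannon), ("Renyi", renyi), ("Tsallis", tsallis)]:
    H = np.array([fn(p) for p in change_shares])
    slope, intercept = np.polyfit(H, observed_bugs, deg=1)   # simple linear regression
    predicted = slope * H + intercept
    ss_res = np.sum((observed_bugs - predicted) ** 2)
    ss_tot = np.sum((observed_bugs - observed_bugs.mean()) ** 2)
    print(f"{name}: bugs ~ {slope:.2f}*H + {intercept:.2f}, R2 = {1 - ss_res / ss_tot:.2f}")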

Vijay Kumar, H. D. Arora, Ramita Sahni
A Path Coverage-Based Reduction of Test Cases and Execution Time Using Parallel Execution

A novel method for improving the effectiveness of software testing is proposed, which focuses on reducing the number of test cases using prevailing techniques. The domain of each input variable is first reduced using the ReduceDomains algorithm. The method then assigns fixed values to variables using algebraic conditions, further reducing the number of test cases. This restricts the values of the variables to a fixed range and consequently leaves fewer potential test cases to process. The proposed method executes all independent paths in parallel instead of sequentially, reducing both the number of test cases and the execution time.
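A minimal sketch of the parallel-execution idea using a Python thread pool; the path functions, inputs, and pass/fail criterion are placeholders, not the authors' implementation.

from concurrent.futures import ThreadPoolExecutor
import time

def run_path(path_id, inputs):
    # Stand-in for executing one independent program path with its reduced inputs.
    time.sleep(0.1)
    return path_id, all(x >= 0 for x in inputs)   # dummy pass/fail criterion

independent_paths = {1: [3, 5], 2: [7, -1], 3: [0, 2]}   # path id -> reduced test inputs (assumed)

start = time.time()
with ThreadPoolExecutor(max_workers=len(independent_paths)) as pool:
    results = list(pool.map(lambda kv: run_path(*kv), independent_paths.items()))
print("Results:", results, "elapsed:", round(time.time() - start, 2), "s")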

Leena Singh, Shailendra Narayan Singh
iCop: A System for Mitigation of Felonious Encroachment Using GCM Push Notification

A mobile application can enable a user to control an infrared (IR) sensor device remotely and to detect and be notified via GPRS of any movement around the sensor. To realize this idea, we interfaced mobile technology with hardware. The focus is on controlling secured zones by monitoring human movement with IR sensors, along with any other movement that might occur on the fringe of the sensor's range. Such a system is especially relevant in highly secure environments where, in addition to manual security, electronic and mobile safety measures are also required, such as banks, cash vaults, prisons, nuclear installations, and confidential or highly secret research zones. The paper implements an intrusion detection system, iCop, and presents testing results comparing expected and actual output. The risk factors and their corresponding mitigation plans are also discussed.

Saurabh Mishra, Madhurima Hooda, Saru Dhir, Alisha Sharma
Clustering the Patent Data Using K-Means Approach

Today, patent databases are growing in size, and companies want to explore this data to gain an edge over their competitors. Retrieving a suitable patent from such a large dataset is a complex task, and the process can be simplified if the dataset is divided into clusters. Clustering is the task of grouping physical or abstract objects into classes of similar objects. K-means is a simple clustering technique that groups similar items in the same cluster and dissimilar items in different clusters. In this study, the metadata associated with the database is used as the attributes for clustering. The dataset is evaluated using the average distance to centroid method, and the performance is validated via the Davies–Bouldin index.
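A minimal sketch of the clustering-and-validation pipeline; the TF-IDF feature construction and the sample patent metadata are assumptions for illustration and may differ from the attributes used in the paper.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

patents = [  # hypothetical metadata snippets (e.g., titles/abstract keywords)
    "wireless sensor network energy routing",
    "battery charging circuit for mobile device",
    "image segmentation using neural network",
    "convolutional network for object detection",
    "routing protocol for ad hoc wireless network",
    "fast charging lithium battery management",
]

X = TfidfVectorizer().fit_transform(patents).toarray()        # metadata -> numeric attributes
km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)  # group similar patents

print("Cluster labels:", km.labels_)
print("Davies-Bouldin index (lower is better):", davies_bouldin_score(X, km.labels_))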

Anuranjana, Nisha Mittas, Deepti Mehrotra
Success and Failure Factors that Impact on Project Implementation Using Agile Software Development Methodology

In agile software development, different factors lie behind the success and failure of projects. This paper presents the success, failure, and mitigation factors in agile development. A case study based on these factors is presented, carried out after the completion of several small projects. Teams of 10 members each developed the projects using different approaches, and each group maintained documentation from the initial user stories through to the factors employed on the projects. The final outcomes are observed based on an analysis of the efficiency, accuracy, time management, risk, and product quality of each project, and are compared across the different approaches.

Saru Dhir, Deepak Kumar, V. B. Singh
Fuzzy Software Release Problem with Learning Functions for Fault Detection and Correction Processes

Present-day software development has become a very challenging task. The major challenges involved are shorter life cycles, cost overruns, and higher expectations of software quality. In view of these challenges, software developers have begun to pay careful attention to the processes of software development, testing, and reliability analysis. One of the most important decisions related to software development is determining the optimal release time of the software, and the software development process involves many uncertainties and ambiguities. We propose a software release time problem that handles such uncertainties and ambiguities using a software reliability growth model under a fuzzy environment, and we discuss the fuzzy optimization technique used to solve the problem. Results are illustrated numerically. Based on this model and technique, the paper specifically addresses the question of when to release software under these conditions.

Deepak Kumar, Pankaj Gupta
Reliability Assessment of Component-Based Software System Using Fuzzy-AHP

Software reliability is one of the most commonly discussed research issues in the field of software engineering; it concerns both the maker and the buyer of the software. Reliability can be defined as a collection of attributes that assess the capability of software to deliver the required performance under given conditions for a particular span of time. Most estimation models proposed to date have focused only on conventional factors internal to the software. In this paper, we analyze the reliability of software using a fuzzy analytic hierarchy process (FAHP) approach. The proposed model considers the factors external to a component-based software system that affect its reliability. The analysis shows that considering these factors yields a more effective model for estimating reliability in CBSE systems.
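As a rough illustration of how fuzzy AHP weights can be derived, here is a Buckley-style geometric-mean sketch with invented pairwise judgments and factor names; the paper's actual external factors and FAHP variant may differ.

import numpy as np

# Triangular fuzzy number (l, m, u)
TFN = lambda l, m, u: np.array([l, m, u], dtype=float)

# Hypothetical pairwise comparison of three external factors:
# deployment environment, third-party component quality, integration complexity.
comparisons = [
    [TFN(1, 1, 1),       TFN(2, 3, 4),     TFN(1, 2, 3)],
    [TFN(1/4, 1/3, 1/2), TFN(1, 1, 1),     TFN(1, 2, 3)],
    [TFN(1/3, 1/2, 1),   TFN(1/3, 1/2, 1), TFN(1, 1, 1)],
]

# Fuzzy geometric mean of each row, then normalize and defuzzify.
geo = [np.prod(row, axis=0) ** (1.0 / len(row)) for row in comparisons]
total = np.sum(geo, axis=0)
fuzzy_weights = [g / total[::-1] for g in geo]   # (l, m, u) divided by (u_tot, m_tot, l_tot)
crisp = np.array([w.mean() for w in fuzzy_weights])
crisp /= crisp.sum()
print("Factor weights:", crisp.round(3))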

Bhat Jasra, Sanjay Kumar Dubey
Ranking Usability Metrics Using Intuitionistic Preference Relations and Group Decision Making

The popularity of a Web site depends on its ease of use: a site is popular among users if it is user-friendly. This means that the quantifiable attributes of usability, i.e., metrics, should be decided through a group decision activity. The present research considers three decision makers or stakeholders, viz. user, developer, and professional, to rank usability metrics and, in turn, the usability of Web sites. In this process, each stakeholder gives his or her intuitionistic preference for each metric. These preferences are aggregated using the intuitionistic fuzzy averaging operator and further aggregated using the intuitionistic fuzzy weighted arithmetic averaging operator. Finally, the eight considered usability metrics are ranked. The method is useful for comparing Web site usability by assigning suitable weights on the basis of the metric ranks. An illustrative example comparing the usability of six operational Web sites is given.
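A minimal sketch of the aggregation-and-ranking step, assuming the standard intuitionistic fuzzy weighted averaging (IFWA) operator and score function; the preference values, metric names, and stakeholder weights are illustrative assumptions.

from math import prod

def ifwa(ifns, weights):
    # ifns: one (membership mu, non-membership nu) pair per stakeholder
    mu = 1.0 - prod((1.0 - m) ** w for (m, _), w in zip(ifns, weights))
    nu = prod(n ** w for (_, n), w in zip(ifns, weights))
    return mu, nu

def score(ifn):
    mu, nu = ifn
    return mu - nu  # standard score function used to rank intuitionistic fuzzy numbers

stakeholder_weights = [0.4, 0.35, 0.25]  # user, developer, professional (assumed)
metric_preferences = {                   # hypothetical preferences per metric
    "learnability":    [(0.8, 0.1), (0.6, 0.3), (0.7, 0.2)],
    "efficiency":      [(0.6, 0.2), (0.7, 0.2), (0.5, 0.4)],
    "error tolerance": [(0.5, 0.4), (0.6, 0.3), (0.6, 0.2)],
}

ranked = sorted(metric_preferences.items(),
                key=lambda kv: score(ifwa(kv[1], stakeholder_weights)),
                reverse=True)
for rank, (metric, prefs) in enumerate(ranked, 1):
    print(rank, metric, ifwa(prefs, stakeholder_weights))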

Ritu Shrivastava
Research Challenges of Web Service Composition

Semantic-based Web service composition driven by functional and non-functional user requests promises to enable the automatic, dynamic assembly of applications. Despite the many advantages of such approaches, an effective automatic, dynamic, semantic-based Web service composition approach is still an open problem. Publishing, discovery, and selection mechanisms, as well as the heterogeneity and limitations of semantic languages, have a major impact on the effectiveness of service composition approaches. This paper explores the major challenges related to semantic languages and to Web service publishing, discovery, and selection techniques that affect service composition. Additionally, it evaluates the effectiveness of the automation, dynamicity, scalability, adaptation, and management strategies of the composition approaches.

Ali A. Alwasouf, Deepak Kumar
Automation Software Testing on Web-Based Application

Agile testing is a software testing practice that follows the principles of agile development and involves the client as a critical part of the testing process. Automated testing is used here to minimize the amount of manpower required. In this paper, a traditional automation testing model is discussed, a model for automated agile testing is proposed, and experimental work on testing a Web application is presented. Finally, the outcomes are evaluated using the agile testing model, and the traditional and agile testing models are compared.

Saru Dhir, Deepak Kumar
Automated Test Data Generation Applying Heuristic Approaches—A Survey

Software testing is a systematic approach to identifying the presence of errors in developed software (Pressman, Software Engineering: A Practitioner's Approach, McGraw-Hill International Edition, 2010 [1]; Beizer, Software Testing Techniques, Van Nostrand Reinhold Co., New York, 1990 [2]; Beizer, Black Box Testing: Techniques for Functional Testing of Software and Systems, Wiley, 1995 [3]). In this paper, we explore and present the challenges in software testing and how software testing techniques have evolved over time. Software testing is tedious, time-consuming, and costly, and it does not guarantee reliability, so its automation has been an active area of research in the field. Test cases play a vital role in achieving effective testing targets, but generating effective test cases is an equally challenging task. Heuristic approaches have gained attention in different fields of computer science. In this paper, we discuss the need for automated test data generation and the heuristic algorithms and techniques used to implement it, and we present an extensive survey of the work done in this field by researchers and their results.

Neetu Jain, Rabins Porwal
Comparison of Optimization Strategies for Numerical Optimization

Various optimization strategies have been conceived and devised over the years to meet different needs: particle swarm optimization (PSO), artificial bee colony (ABC), teaching–learning-based optimization (TLBO), and differential evolution (DE), to name a few. These algorithms have advantages as well as disadvantages relative to one another on numerical optimization problems. To test these optimization strategies, we use various benchmark functions that represent the kinds of situations optimization algorithms have to face during their operation. In this paper, we compare the above-mentioned algorithms on benchmark functions, and the experimental results show that TLBO outperforms the other three algorithms.
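A minimal sketch of one of the compared strategies, differential evolution (DE/rand/1/bin), run on the sphere benchmark; the parameter settings and benchmark choice are illustrative, not the paper's experimental setup.

import numpy as np

def sphere(x):
    # Classic unimodal benchmark: minimum 0 at the origin.
    return float(np.sum(x ** 2))

def de(obj, dim=10, pop_size=30, F=0.5, CR=0.9, gens=200, bounds=(-5.12, 5.12)):
    rng = np.random.default_rng(0)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([obj(ind) for ind in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # Mutation: combine three distinct individuals other than i.
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover with at least one gene from the mutant.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            f_trial = obj(trial)
            if f_trial <= fit[i]:          # greedy selection
                pop[i], fit[i] = trial, f_trial
    return fit.min()

print("Best sphere value found by DE:", de(sphere))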

Gopal Narayanam, Kartikay Ranjan, Sumit Kumar
Sentiment Analysis on Tweets

Social media networks generate an enormous amount of data every day from hundreds of thousands of actors. This data can be used for the analysis and prediction of collective behavior, and the flood of data from social media such as Facebook, Twitter, and YouTube presents an opportunity to study collective behavior on a large scale. In today's world, almost every person updates their status and shares pictures and videos every day, some even every hour. As a result, micro-blogging has become the most popular and common communication tool of our time. Users of micro-blogging Web sites not only share pictures and videos but also share their opinions about products and issues, so these Web sites provide rich sources of data for opinion mining. In this model, our focus is on Twitter, a popular micro-blogging site, for performing the task of opinion mining. The data required for the mining process is collected from Twitter and then analyzed for good and bad tweets, i.e., positive and negative tweets. Based on the number of positive and negative tweets for a particular product, its quality is determined, and the best product is then recommended to the user. Data mining in social media helps predict individual user preferences, and the results can be used in marketing and advertisement strategies to attract consumers. Nowadays, people tweet in English as well as regional languages, and our model aims to analyze tweets that contain both English words and regional-language words written using the English alphabet.
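A minimal lexicon-based sketch of the positive/negative classification step; the word lists and sample tweets are tiny placeholders, and the paper's actual classification approach may differ.

POSITIVE = {"good", "great", "love", "awesome", "nice", "best"}
NEGATIVE = {"bad", "worst", "hate", "poor", "terrible", "slow"}

def polarity(tweet: str) -> str:
    # Count positive and negative lexicon hits and compare.
    words = tweet.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

tweets = [  # hypothetical tweets about a product
    "love the new phone camera, awesome battery",
    "worst update ever, app is slow",
    "phone arrived today",
]
counts = {"positive": 0, "negative": 0, "neutral": 0}
for t in tweets:
    counts[polarity(t)] += 1
print(counts)  # product quality judged by the share of positive vs. negative tweets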

Mehjabin Khatoon, W. Aisha Banu, A. Ayesha Zohra, S. Chinthamani
Metadata
Title
Software Engineering
Edited by
Prof. Dr. M. N. Hoda
Prof. Dr. Naresh Chauhan
Prof. Dr. S. M. K. Quadri
Prof. Dr. Praveen Ranjan Srivastava
Copyright Year
2019
Publisher
Springer Singapore
Electronic ISBN
978-981-10-8848-3
Print ISBN
978-981-10-8847-6
DOI
https://doi.org/10.1007/978-981-10-8848-3