2018 | Book

Quality, IT and Business Operations

Modeling and Optimization

Edited by: Prof. P.K. Kapur, Prof. Dr. Uday Kumar, Prof. Ajit Kumar Verma

Publisher: Springer Singapore

Book series: Springer Proceedings in Business and Economics

About this book

This book discusses action-oriented, concise and easy-to-communicate goals and challenges related to quality, reliability, infocomm technology and business operations. It brings together groundbreaking research in the area of software reliability, e-maintenance and big data analytics, highlighting the importance of maintaining the current growth in information technology (IT) adoption in businesses, while at the same time proposing process innovations to ensure sustainable development in the immediate future. In its thirty-seven chapters, it covers various areas of e-maintenance solutions, software architectures, patching problems in software reliability, preventive maintenance, industrial big data and reliability applications in electric power systems.

The book reviews the ways in which countries currently attempt to resolve the conflicts and opportunities related to quality, reliability, IT and business operations, and proposes that internationally coordinated research plans are essential for effective and sustainable development, with research being most effective when it uses evidence-based decision-making frameworks resulting in clear management objectives, and is organized within adaptive management frameworks. Written by leading experts, the book is of interest to researchers, academicians, practitioners and policy makers alike who are working towards the common goal of making business operations more effective and sustainable.

Table of Contents

Frontmatter
A Conceptual Architectural Design for Intelligent Health Information System: Case Study on India

The Indian health system is becoming very large and complex, and it suffers from a lack of well-developed policy, time-bound and real-time solutions, tracking, and advanced data collection and analysis technologies. The problem becomes more complex when collected data are merged and analysed. The merged data fall into the category of big data with many dimensions, which requires more sophisticated approaches and intelligent systems to extract information useful for policy and decision making. An intelligent health system would result in better health policy making, execution, and faster correction when something is not right. The objective of this chapter is to present a new framework for the health care sector and to stimulate discussion on how government schemes can apply big data analytics to develop the public health system. The chapter first discusses the health sector problem in India and analyses the solution through the integration of machine learning and big data analytics approaches. It also proposes an intelligent machine learning architecture for developing accurate, effective, decentralised, and dynamic health insights for policy decision making to resolve health-related issues.

Sachin Kumar, Saibal K. Pal, Ram Pal Singh
A General Framework for Modeling of Multiple-Version Software with Change-Point

Software has become an integral part of our daily routine. In a technology-driven world, reliable software is needed to maintain the pace of this modern era. Providing reliable software within a short interval of time to fulfil users' requirements has become a tedious task for software developers. To resolve this issue of fast delivery, firms now release software in multiple versions. In multiple upgrades of software, the remaining bugs of the previous release are treated along with the bugs of the new release. During the software development process, the firm may change its testing strategy, resulting in a change in the fault detection rate. The clock time at which the failure detection rate changes is known as the change point in the software reliability literature. A large number of software reliability growth models (SRGMs) have been presented and evaluated over the last 30 years, considering various characteristics of software. Almost all SRGMs have been used extensively in the literature for reliability estimation, evaluation, and appraisal of the reliability growth of software. To the best of our knowledge, the concept of change point has been widely discussed only with respect to the fault detection/removal process of single-release software. In the proposed work, we extend the idea of the change point from a single release to multiple releases by proposing a generalized modeling framework. Furthermore, we use the generalized modified Weibull distribution for defect assessment. A numerical example covering various goodness-of-fit criteria, viz. MSE, bias, variance, RMSPE, and the coefficient of determination, is included to clarify the degree of agreement of the presented model with a real, experimental set of failure data for multiple releases.
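
The chapter's generalized modified Weibull, multi-release framework is not reproduced here, but the basic mechanics of fitting a change-point SRGM and computing goodness-of-fit criteria can be sketched with a simpler exponential mean value function; the failure counts, the assumed change point, and the model form below are illustrative assumptions only.

# Minimal sketch: fitting a change-point SRGM (exponential, Goel-Okumoto-like form)
# to cumulative failure data.  The generalized modified Weibull model used in the
# chapter is replaced by a simpler illustrative form; the data are made up.
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(1, 21, dtype=float)                      # hypothetical test weeks
y = np.array([5, 9, 14, 18, 21, 25, 28, 30, 33, 35,
              40, 44, 48, 51, 53, 55, 56, 57, 58, 58], dtype=float)
tau = 10.0                                             # assumed change point

def mvf(t, a, b1, b2):
    """Expected cumulative faults: detection rate b1 before tau, b2 after."""
    before = a * (1.0 - np.exp(-b1 * t))
    after = a * (1.0 - np.exp(-b1 * tau - b2 * (t - tau)))
    return np.where(t <= tau, before, after)

(a, b1, b2), _ = curve_fit(mvf, t, y, p0=[70.0, 0.05, 0.05], maxfev=10000)
pred = mvf(t, a, b1, b2)

mse = np.mean((y - pred) ** 2)
bias = np.mean(pred - y)
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"a={a:.1f} b1={b1:.3f} b2={b2:.3f}  MSE={mse:.2f} Bias={bias:.2f} R2={r2:.3f}")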

Gaurav Mishra, P. K. Kapur, A. K. Shrivastava
An Improved Scoring System for Software Vulnerability Prioritization

A number of software vulnerabilities are detected during the software life cycle. Some vulnerabilities are critical and require immediate analysis and a plan for fixing them, while those with a low damage potential can be left unattended for some time while the more critical ones are fixed. Prioritization of vulnerabilities helps in determining the order of vulnerability response for increased efficiency and effective utilization of resources. Existing prioritization techniques are static in their approach, and the score, once generated, remains associated with the vulnerability. However, the impact of a vulnerability varies over time. In this paper, we propose a dynamic scoring system for vulnerability prioritization that takes into account two temporal attributes, namely the vulnerability index and the remediation level, which significantly affect the severity of a vulnerability.
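
As an illustration of the idea of a dynamic, temporally adjusted score (not the authors' actual formula), the sketch below combines a static base score with a time-varying vulnerability index and a remediation level; the weights and remediation factors are invented.

# Hypothetical sketch of a dynamic vulnerability score: a static base score is
# adjusted by two temporal attributes (a vulnerability index reflecting current
# exploit activity and a remediation level).  Weights and scales are assumptions,
# not the scoring function proposed in the paper.
REMEDIATION_FACTOR = {          # lower factor once fixes become available
    "unavailable": 1.00,
    "workaround": 0.95,
    "temporary_fix": 0.90,
    "official_fix": 0.85,
}

def dynamic_score(base_score: float, vulnerability_index: float,
                  remediation_level: str) -> float:
    """base_score in [0, 10] (e.g. a CVSS base score), vulnerability_index in [0, 1]."""
    score = base_score * (0.6 + 0.4 * vulnerability_index)
    score *= REMEDIATION_FACTOR[remediation_level]
    return round(min(score, 10.0), 1)

# Re-scoring over time: the same flaw drops in priority once a fix ships
# and exploit activity declines.
print(dynamic_score(9.8, 0.9, "unavailable"))   # shortly after disclosure
print(dynamic_score(9.8, 0.3, "official_fix"))  # months later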

Ruchi Sharma, R. K. Singh
Analysis of Feature Ranking Techniques for Defect Prediction in Software Systems

Software quality is an important parameter, and it plays a crucial role in software development. One of the most important software quality attributes is fault proneness, which reflects the quality of the final product. Fault proneness prediction models must be built in order to enhance software quality. There are various software metrics that help in software modeling, but using all of them is a cumbersome and time-consuming process. So there is always a need to select the set of software metrics that helps in determining fault proneness. Careful selection of software metrics is a major concern, and it becomes crucial when the search space is too large. This study focuses on the ranking of software metrics for building defect prediction models. A hybrid approach is applied in which feature ranking techniques are used to reduce the search space along with feature subset selection methods. Classification algorithms are used for training the defect prediction models. The area under the receiver operating characteristic curve is used to evaluate the performance of the classifiers. The experimental results indicate that most of the feature ranking techniques give similar results and that automatic hybrid search outperforms all other feature subset selection methods. Furthermore, the results help us focus only on the set of metrics that has almost the same impact on the end result as the original set of metrics.
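
A minimal scikit-learn sketch of the kind of pipeline the abstract describes: rank metrics with a univariate filter, keep the top-k, train a classifier, and evaluate by the area under the ROC curve. The synthetic data and the particular ranker and classifier are assumptions for illustration, not those used in the study.

# Minimal sketch of feature ranking for defect prediction: rank software metrics
# with a univariate filter, keep the top-k, train a classifier, and score it by
# the area under the ROC curve.  Data, ranker, and classifier are illustrative.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for a table of module-level metrics (LOC, complexity, coupling, ...)
# with a binary "defective" label.
X, y = make_classification(n_samples=600, n_features=20, n_informative=6,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=8),   # feature ranking / reduction step
    LogisticRegression(max_iter=1000),
)
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"ROC AUC with top-8 ranked metrics: {auc:.3f}")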

Sangeeta Sabharwal, Sushama Nagpal, Nisha Malhotra, Pratyush Singh, Keshav Seth
Analysis of Impediments to Sustainability in the Food Supply Chain: An Interpretive Structural Modeling Approach

Increasing consumer consciousness, together with urbanization, trade globalization, and agro-industrialization over the last few years, has led to a rapid growth of interest in sustainability in food supply chains (FSCs). In order to withstand competition, manufacturing enterprises are implementing proactive strategies to accelerate their sustainability performance. However, there are many barriers to the effective execution of sustainable FSCs in India. Understanding the impact of these barriers will help manufacturers effectively utilize their resources and attain an environmentally and socially sustainable FSC. The main focus of this paper is to identify the dominant barriers that hinder the adoption of sustainability in the Indian food industry. Further, the relationships among the barriers are defined, and the most dominant barriers are classified from the suggested barrier list using interpretive structural modeling (ISM). The outcome of ISM is taken as an input for MICMAC analysis, which classifies the barriers based on their driving and dependence power. The proposed integrated structural model will be helpful in comprehending the mutual relationships and dependencies among the barriers to the diffusion and implementation of sustainability in FSCs. The suggested framework can be used as a tool by decision-makers to systematically overcome the barriers and develop strategies for incorporating sustainability in the Indian food industry.
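
The mechanics of ISM and MICMAC can be sketched briefly: starting from a binary influence matrix over barriers, compute the transitive closure (final reachability matrix), then read off each barrier's driving power (row sum) and dependence power (column sum). The five-barrier matrix below is a made-up example, not the paper's barrier set.

# Sketch of the ISM / MICMAC mechanics: from a binary influence matrix, compute
# the transitive closure (final reachability matrix), then each barrier's
# driving power (row sum) and dependence power (column sum).
import numpy as np

barriers = ["B1", "B2", "B3", "B4", "B5"]
A = np.array([[1, 1, 0, 0, 1],      # A[i, j] = 1 if barrier i influences barrier j
              [0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 0],
              [0, 0, 1, 0, 1]])

def transitive_closure(m):
    r = m.copy()
    n = len(r)
    for k in range(n):                       # Warshall's algorithm
        r = r | (r[:, [k]] & r[[k], :])
    return r

R = transitive_closure(A)
driving = R.sum(axis=1)      # how many barriers each barrier reaches
dependence = R.sum(axis=0)   # how many barriers reach it

for name, d, dep in zip(barriers, driving, dependence):
    # MICMAC quadrants follow from these two numbers: driver / linkage / dependent / autonomous
    print(f"{name}: driving power={d}, dependence power={dep}")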

Jyoti Dhingra Darbari, Vernika Agarwal, Rashi Sharma, P. C. Jha
Analysis of Module-Based Software Reliability Growth Model Incorporating Imperfect Debugging and Fault Reduction Factor

This investigation deals with a module-based software reliability growth model that incorporates both imperfect debugging and the fault reduction factor. An increasing number of faults may decrease the efficiency of testing and yield a poor fault reduction factor during software testing. In order to prevent possible faults, the software developer must verify the software for all possible faults during the testing period. To characterize the environmental factors during the testing process, we incorporate a fault reduction factor into an imperfect debugging environment. In the present study, we assume that a complex software system is divided into modules, wherein each module contains different types of faults with different failure rates and characteristics. For each module, a three-stage process (observation, isolation, and removal) is considered. The main objective of our investigation is to minimize the expected maintenance cost of the software subject to reliability constraints. Analytical expressions for various performance indices for the reliability assessment of the software are derived. Numerical results are provided to validate the analytical results and to examine the effects of system descriptors on the reliability indices.

Madhu Jain, Anuradha Jain, Ritu Gupta
Analytics for Maintenance of Transportation in Smart Cities

Cities typically face a wide range of management and maintenance problems. They are complex environments in which digital technologies are more and more pervasive; this digitization of the urban environment creates scope for data-driven methods. As connections and data exchange increase, data acquisition, processing, and management become an extremely important added value for the community. The inclusion of digitization and predictive analytics provides a basis for a sustainable smart city. This work presents an overview of the challenges of applying different technologies to maintenance within a smart city, with respect to transportation. A conceptual framework is proposed to handle the generated data for decisions on control, monitoring, fault diagnosis, and maintenance of increasingly complex systems.

Adithya Thaduri, Ajit Kumar Verma, Uday Kumar
Business Strategy Prediction System for Market Basket Analysis

In today's scenario, modern technology is required to improve performance with minimum effort, find more valuable items, and efficiently extract precious information for industry from large datasets of sales transactions (e.g., collections of items bought by customers or details of website visits). We propose a novel approach, the Business Strategy Prediction System for Market Basket Analysis. All existing algorithms work by first finding the minimal frequent itemsets; here, with the help of those methods, we find the maximal itemsets. When the algorithm is applied to dense data in which large numbers of long patterns emerge, it gives more accurate and effective results that specify all of the frequent itemsets.
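
A minimal sketch of the maximal frequent itemset idea: enumerate frequent itemsets Apriori-style and keep only those with no frequent superset. The transactions and support threshold are made up, and the pruning is deliberately simple.

# Sketch of the maximal frequent itemset idea: enumerate frequent itemsets
# level by level, then keep only those that have no frequent superset.
from itertools import combinations

transactions = [{"bread", "milk"}, {"bread", "milk", "eggs"},
                {"bread", "eggs"}, {"milk", "eggs"}, {"bread", "milk", "eggs"}]
min_support = 3                      # absolute support count

def support(itemset, db):
    return sum(itemset <= t for t in db)

def frequent_itemsets(db, min_sup):
    items = sorted({i for t in db for i in t})
    frequent, size = [], 1
    while True:
        level = [frozenset(c) for c in combinations(items, size)
                 if support(frozenset(c), db) >= min_sup]
        if not level:
            return frequent
        frequent.extend(level)
        items = sorted({i for s in level for i in s})   # keep only items still frequent
        size += 1

freq = frequent_itemsets(transactions, min_support)
maximal = [s for s in freq if not any(s < t for t in freq)]   # no frequent superset
print("frequent:", [sorted(s) for s in freq])
print("maximal :", [sorted(s) for s in maximal])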

Sumit Jain, Nand Kishore Sharma, Sanket Gupta, Nitika Doohan
Defending the OSN-Based Web Applications from XSS Attacks Using Dynamic JavaScript Code and Content Isolation

Online social networks (OSNs) are continuously suffering from the plague of cross-site scripting (XSS) vulnerabilities. This article presents a contemporary XSS defensive framework for OSN-based web applications that is completely based on the context type qualifier. The proposed framework executes in two key phases: Context-Aware Sanitization Generator (CASG) and Context-Aware Dynamic Parsing (CADP). The former phase performs static analysis of the HTML document to determine the context of the untrusted JavaScript code. In addition, it injects context-sensitive sanitizers at the location of the untrusted JavaScript code. The latter phase performs dynamic parsing of the HTML document generated by the first phase. The main objective of this phase is to determine the context of untrusted malicious script code that is statically ambiguous and could not be identified in the first phase. It also performs sanitization depending on the context identified. The proposed framework was tested and evaluated on a suite of real-world OSN-based web applications (e.g., HumHub and Elgg). The experimental results revealed that the proposed framework is capable of applying automatic context-aware sanitization to untrusted malicious JavaScript code with a low number of false positives and false negatives. The evaluation outcomes also revealed that the technique accomplishes isolation of untrusted malicious JavaScript code in the HTML documents generated by OSN-based web applications, mitigating the effect of XSS worms with low dynamic runtime overhead.

Pooja Chaudhary, B. B. Gupta, Shashank Gupta
Development of QFD Methodology

This chapter presents a QFD analysis of the installation process of the gearbox locking mechanism for “Nissan” cars (“Garant Consul”), taking into account competition factors and the Weber-Fechner law.

Frolova Elena, Albina Gazizulina, Elena Eskina, Maria Ostapenko, Dmitriy Aidarov
Discrete-Time Framework for Determining Optimal Software Release and Patching Time

The business performance of almost every organization throughout the world depends on information technology. Hence, there is a huge requirement for reliable, good-quality software. Rigorous testing improves reliability but costs a lot and delays the release. Good, strong releases lead to a good reputation and profitability, while late or fault-ridden releases increase costs and harm the brand. To avoid delay in software release, software companies now release early and keep fixing the remaining faults in the operational phase by issuing new patches. A patch is a small piece of software code designed to update a computer program, to fix or improve it. A continuous-time model has limited application in real-life problems that work with discrete-time data. Hence, a discrete-time model has been formulated to obtain the software release and patching times. In this paper, an optimization problem is formulated to determine the optimal times of software release and patch release so as to minimize the overall testing cost and maximize reliability. A numerical illustration based on the real-life Tandem Computers data set, using the logistic distribution, is provided to validate the proposed framework.
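
A rough sketch of the kind of discrete-time decision involved (not the paper's formulation): with a simple discrete fault-growth model, grid-search the release period and the patch period to minimize a total cost composed of testing cost, patch-fix cost, field-fix cost, and a delay penalty. The model form and all cost coefficients below are hypothetical.

# Sketch of a discrete-time release/patch decision: grid-search release period T
# and patch period P (> T) to minimize total cost = testing + delay penalty +
# cost of faults fixed via the patch + cost of faults left in the field.
A, B = 120.0, 0.15          # assumed total faults and per-period detection rate
C_TEST, C_PATCH, C_FIELD, C_DELAY = 1.0, 3.0, 8.0, 0.8   # relative unit costs
HORIZON = 60                # planning horizon in periods

def detected(t):            # discrete exponential SRGM: faults found by period t
    return A * (1.0 - (1.0 - B) ** t)

def total_cost(T, P):
    testing = C_TEST * T + C_DELAY * T               # in-house testing + delay penalty
    patched = C_PATCH * (detected(P) - detected(T))  # faults fixed via the patch
    field = C_FIELD * (A - detected(P))              # faults still in the field
    return testing + patched + field

best = min((total_cost(T, P), T, P)
           for T in range(1, HORIZON) for P in range(T + 1, HORIZON + 1))
cost, T_opt, P_opt = best
print(f"release at period {T_opt}, patch at period {P_opt}, cost = {cost:.1f}")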

Anshul Tickoo, P. K. Kapur, A. K. Shrivastava, Sunil K. Khatri
EFA-FTOPSIS-Based Assessment of Service Quality: Case of Shopping Websites

In today’s competitive world, managing the service quality of shopping websites is very important. In view of technological advancement, this paper explores the dimensions affecting the service quality of shopping websites and their relative ranking. Insights from the literature have been used to identify different items of service quality, followed by peer group discussion to select the items appropriate to the present research context. Proven statistical methods, such as exploratory factor analysis (EFA), have been used to explore the dimensions. The fuzzy technique for order preference by similarity to ideal solution (FTOPSIS) has been used to compare the service quality of shopping websites, given the complexity and uncertainty involved in this research. The present research provides a unique approach for ranking different shopping websites on the basis of the service quality they provide. The findings of this paper are highly relevant and provide research directions and guidelines for improvement in the respective dimensions.
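
For illustration, the crisp TOPSIS ranking step that underlies FTOPSIS can be sketched as follows (the fuzzy layer is omitted); the websites, criteria, scores, and weights are made up.

# Compact sketch of the TOPSIS ranking step underlying FTOPSIS: normalize the
# decision matrix, apply criterion weights, locate the ideal and anti-ideal
# solutions, and rank alternatives by relative closeness.
import numpy as np

alternatives = ["Site A", "Site B", "Site C"]
# criteria: reliability, responsiveness, ease of use, security (all benefit-type)
X = np.array([[7.0, 8.0, 6.0, 9.0],
              [8.0, 6.0, 7.0, 7.0],
              [6.0, 7.0, 8.0, 8.0]])
w = np.array([0.3, 0.2, 0.2, 0.3])

R = X / np.linalg.norm(X, axis=0)        # vector normalization per criterion
V = R * w                                # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)

d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)

for name, c in sorted(zip(alternatives, closeness), key=lambda p: -p[1]):
    print(f"{name}: closeness = {c:.3f}")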

Vivek Agrawal, Akash Agrawal, Anand Mohan Agrawal
Finding Efficiency in Data Envelopment Analysis Using Variable Reduction Technique

Data envelopment analysis (DEA) is a multi-criteria technique used for finding the efficiency of different decision-making units (DMUs) based on the values of inputs consumed and outputs produced. The efficiency of the DMU under consideration is determined by optimizing the ratio of the weighted sum of outputs to the weighted sum of inputs. The traditional DEA model differentiates between efficient and inefficient DMUs based on their calculated efficiency value: a DMU is efficient if its efficiency value is one. However, there are cases where this differentiation becomes difficult, namely when the number of inputs and outputs is large in comparison with the number of DMUs. In such scenarios, most DMUs turn out to be efficient, since their calculated efficiency value comes out to be 1. Hence, a variable reduction technique is used in the DEA model to aggregate some of the inputs and outputs so that the rule of thumb is satisfied. In this way, the discriminating power of the DEA model is enhanced and the differentiation becomes evident. A numerical example is also considered to show the utility of the model.
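
A minimal sketch of the classical input-oriented CCR model, solved per DMU as a linear program with scipy (the variable reduction step discussed in the chapter is not shown); the input/output data are made up.

# CCR (input-oriented, multiplier form) DEA sketch: for each DMU, maximize the
# weighted outputs subject to unit weighted input for that DMU and the ratio
# constraint (weighted outputs <= weighted inputs) for every DMU.
import numpy as np
from scipy.optimize import linprog

X = np.array([[4.0, 3.0, 2.0],     # inputs, one row per DMU
              [6.0, 3.0, 4.0],
              [5.0, 5.0, 3.0],
              [8.0, 4.0, 5.0]])
Y = np.array([[60.0, 20.0],        # outputs, one row per DMU
              [70.0, 25.0],
              [65.0, 18.0],
              [80.0, 30.0]])
n, m = X.shape
_, s = Y.shape

for o in range(n):
    c = np.concatenate([-Y[o], np.zeros(m)])          # maximize u . y_o
    A_ub = np.hstack([Y, -X])                         # u.y_j - v.x_j <= 0 for all j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)   # v . x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    print(f"DMU {o + 1}: CCR efficiency = {-res.fun:.3f}")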

Seema Gupta, K. N. Rajeshwari, P. C. Jha
Fire Safety Experimental Investigations of Time to Flashover as a Function of Humidity in Wood

The fire in Laerdalsoyri, Norway, on 18–19 January 2014, developed faster than the fire fighters could handle, and strong winds quickly spread the fire to neighbouring houses and to houses 150 m downwind. Thirty-six modern buildings and four historic buildings of cultural heritage were lost. The cold, low-relative-humidity air in the deep valley dried the structures and resulted in rapid fire growth and fire spread. This has triggered studies to understand how early flashover is reached when the fuel moisture content (FMC) in wood drops to low levels, especially in winter, when heating inhabited structures is a necessity. A study of flashover as a function of the FMC in the wood has been carried out by conducting experiments on an approximate equivalent of ¼ ISO rooms in the laboratory. The relative humidity and temperature of the ambient air were also recorded. It is observed that low FMC is the main factor that leads to early flashover. Temperatures rise very rapidly once flashover is reached, with higher heat release and increased radiated heat. Hence, it can be said that the lower humidity levels typical of winter can lead to fast fire development. This has to be borne in mind, and the necessary precautions taken, to control the development and spread of fire and reduce the risk of major fire accidents.

Ajit Srividya, Torgrim Log, Arjen Kraaijeveld
Fixing of Faults and Vulnerabilities via Single Patch

Users’ demand for reliable software in zero time has made software development more complex. If the software industry fails to fulfil these demands, it may suffer heavy penalties and revenue loss. Developers are under pressure, subject to the resource constraints imposed by management. Although software undergoes various validation (testing) processes before its release, faults and vulnerabilities are still left undetected that later degrade the quality of the product. The only feasible remedy for these shortcomings after the release of the software is patch development. Generally, the patches developed for fixing faults and those for fixing vulnerabilities are separate processes that require extra resources, increasing the total development cost and time. In this paper, we propose a cost framework that solves the problem of optimizing patch release time with two different approaches. The first approach considers the release of a single patch that fixes both faults and vulnerabilities jointly. As the severity of vulnerabilities is much higher than that of faults, the second approach considers the release of two patches, where the first patch fixes both faults and vulnerabilities jointly and the other patch fixes only vulnerabilities. A detailed illustration of the method is presented, and a case study is given at the end for validation.

Yogita Kansal, Uday Kumar, Deepak Kumar, P. K. Kapur
LC-ELM-Based Gray Scale Image Watermarking in Wavelet Domain

The applicability of the local coupled extreme learning machine (LC-ELM) to gray scale image watermarking based on the discrete wavelet transform (DWT) is described in this work. The learning ability and generalization toward noisy datasets of LC-ELM were examined on synthetic datasets by Qu et al. (Neural Comput Appl, doi: 10.1007/s00521-013-1542-4, 2014). Motivated by that work, LC-ELM is successfully applied to image watermarking to test imperceptibility, while resistance against image processing operations verifies robustness. Image datasets formed from selected blocks of the approximation subband, chosen on the basis of fuzzy entropy, are supplied as input to LC-ELM in the training procedure. The binary watermark is embedded into the value predicted by the nonlinear estimation function obtained through the trained LC-ELM. The generalization performance of LC-ELM against noisy datasets in image watermarking is examined by successful extraction of the watermark after a number of image operations, such as median filtering, average filtering, JPEG compression, contrast enhancement, scaling, and cropping, on differently textured images.

Rajesh Mehta, Virendra P. Vishwakarma
Implementing and Evaluating R-Tree Techniques on Concurrency Control and Recovery with Modifications on Nonspatial Domains

A review of present applications that use databases for spatial data shows that the same needs should be incorporated into database management systems for better support of these products. This research discusses one such technique for handling spatial data together with its nonspatial element. In most popular spatial access methods, spatial objects are handled via a minimum bounding box. This kind of generalization and approximation is fast but yields inaccurate query answers. Many researchers have already worked on finding better minimal geometric shapes for spatial objects. This research takes the idea further and implements one such method, the minimum bounding circle (MBC). Apart from the R-link tree, no other research has incorporated a nonspatial element into the spatial object, and in the R-link tree it is done at a very fundamental level, where the inserted element is a logical sequence number used for sequencing the nodes in the tree and has no relevance in the database. This research goes further and introduces the NS-link tree (nonspatial), which uses minimum bounding circles and adds relevant nonspatial data at each point to reduce the number of query results, thereby showing that the database accesses generated by queries are considerably reduced. Concurrency control is maintained through a priority queue, and separate log files are used to handle recovery.

Rucche Sharrma, Amit Gupta
Inventory Decisions for Imperfect Quality Deteriorating Items with Exponential Declining Demand Under Trade Credit and Partially Backlogged Shortages

In the current era of intense competition among enterprises, trade credit policy has proven to be a crucial instrument for monetary development. The main advantage of a delay period is that it yields savings in purchase as well as opportunity cost. Moreover, to overcome extreme rivalry, companies have to construct an optimal strategy that increases their market value and maximizes their ultimate profit. Production systems are built for smooth and continuous operation; however, the possibility of discrepancies in a production system cannot be removed entirely. As a result, each manufactured or procured lot may contain a portion of defective items, which can differ from one process to another. The situation is even more vulnerable when the products are prone to deterioration. Nevertheless, through a vigilant inspection process, the defectives can be separated from the perfect batch. Therefore, including a screening process is a requisite, as the market is entirely slanted toward the customer. The present model is developed with these scenarios in mind. The formulated inventory model for a retailer determines the optimal shortage point and cycle length considering imperfect quality and deterioration under trade credit. Shortages are permitted and are partially backlogged. It is also assumed that product demand declines exponentially and that the backlogging rate is inversely related to the waiting time until the subsequent replenishment. In addition, a numerical example is presented to illustrate the model, and a sensitivity analysis is carried out that provides essential decision-making implications.

Chandra K. Jaggi, Prerna Gautam, Aditi Khanna
Maintenance in the Era of Industry 4.0: Issues and Challenges

The fourth generation of industrial activity, enabled by smart systems and Internet-based solutions, is known as Industry 4.0. Its two most important characteristic features are computerization using cyber-physical systems and the concept of the “Internet of Things”, adopted to produce intelligent factories. As more and more devices are instrumented, interconnected, and automated to meet this vision, the strategic thinking of modern-day industry has focused on the deployment of maintenance technologies to ensure failure-free operation and delivery of services as planned. Maintenance is one of these application areas, referred to as Maintenance 4.0, in the form of a self-learning and smart system that predicts failure, makes diagnoses, and triggers maintenance. The paper addresses the new trends in manufacturing technology based on the capabilities of instrumentation, interconnection, and intelligence, together with the associated maintenance challenges in the era of the collaborative machine community and the big data environment. The paper briefly introduces the concept of Industry 4.0 and presents maintenance solutions aligned with the needs of the next generation of manufacturing technologies and processes being deployed to realize the vision of Industry 4.0. The suggested maintenance approach for dealing with the new challenges arising from the implementation of Industry 4.0 is captured within the framework of eMaintenance solutions developed using maintenance analytics. The paper is exploratory in nature and is based on a literature review and a study of current developments in maintenance practices aligned with Industry 4.0.

Uday Kumar, Diego Galar
Modeling Fault Detection Phenomenon in Multiple Sprints for Agile Software Environment

The information technology industry has gone through a revolutionary change in the last two decades. In today’s fast-changing business environment, IT organizations have to be agile and responsive to cater to the needs of customers. The objective is not just to deliver quickly but also to embrace change without any adverse impact on the project. The requirements of end customers change quickly and evolve over time, as they are directly aligned with market needs. This has led organizations to adopt the “Agile” approach, based on “lean” principles, over the conventional software development life cycle (SDLC) approach. In the “Agile” framework, the customer works in collaboration with the project team in prioritizing the requirements. Implementation is done through the “Scrum” methodology, with multiple “sprints”, and each sprint delivers “working software”. This approach has substantially reduced the “time to market”, as the customer can decide which features of the software they would like delivered on a priority basis. The release of sprints is similar to multiple releases of a software product: the software is tested rigorously to detect the underlying faults at the end of each sprint, and the remaining faults of each sprint are carried forward to the next sprint. Hence, to model the fault detection phenomenon and its trend in each sprint, software reliability growth modeling has been used. In the current work, we use software reliability growth models (SRGMs) to find the trend over the sprints, which ultimately defines the overall quality of the software. A numerical illustration is given at the end of the paper for model validation.

Prabhanjan Mishra, A. K. Shrivastava, P. K. Kapur, Sunil K. Khatri
Optimal Price and Warranty Length for Profit Determination: An Evaluation Based on Preventive Maintenance

Warranty is a two-sided coin: on one hand, it imposes an additional cost on the producer, and on the other hand, it acts as a protective tool for buyers. To deal with this additional cost (the cost of repairing faulty items), preventive maintenance during the warranty period often helps the manufacturer. The role of preventive maintenance is therefore important, as it slows down the rate of system degradation. In view of this, the current study presents an analytical approach to determine the optimal profit for the firm, where sales price and warranty length act as key decision variables under the impact of the preventive maintenance provided by the firm. A two-dimensional innovation diffusion model is utilized to estimate sales, and the Weibull distribution is used to represent the lifetime distribution of the product. To validate the accuracy of the proposed framework, a numerical illustration is provided based on a real-life sales data set.

Adarsh Anand, Shakshi Singhal, Saurabh Panwar, Ompal Singh
Six Sigma Implementation in Cutting Process of Apparel Industry

The present competitive market focuses industrial efforts on producing high-quality products at the lowest possible cost. In every real-life system, a number of factors cause disturbances in process performance and output. Process improvement through minimizing or removing such factors provides advantages such as reduced wastage or re-machining and improved market share. To help accomplish these objectives, various quality improvement philosophies have been put forward in recent years that maximize quality characteristics to ensure the enhancement of products and processes. Six Sigma is an emerging data-driven approach that uses methodologies and tools leading to improved quality levels and fact-based decision-making. This paper presents the application of the Six Sigma methodology to reduce defects in the cutting process of a garment manufacturing company in India, concluding with an action plan for improving the product quality level. The define–measure–analyze–improve–control (DMAIC) approach has been followed to solve the underlying problem of reducing defects and improving the sigma level through a continuous improvement process. The process helps establish specific inspection methods adapted to the defect types that cause maximum rejection and to prevent their appearance in the product.
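
A small worked example of the measurement arithmetic behind such a project: converting inspected units and observed defects into DPMO and the conventional 1.5-sigma-shifted sigma level. The cutting-section counts are hypothetical.

# Measurement step of a Six Sigma project: convert inspected units and observed
# defects into DPMO and the conventional 1.5-sigma-shifted sigma level.
from scipy.stats import norm

units_inspected = 5_000
opportunities_per_unit = 4        # e.g. fabric flaw, mis-cut, shade, notch defects
defects_found = 320

dpmo = defects_found / (units_inspected * opportunities_per_unit) * 1_000_000
sigma_level = norm.ppf(1 - dpmo / 1_000_000) + 1.5   # long-term 1.5 sigma shift

print(f"DPMO = {dpmo:.0f}, sigma level = {sigma_level:.2f}")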

Reena Nupur, Kanika Gandhi, Anjana Solanki, P. C. Jha
Forecasting of Major World Stock Exchanges Using Rule-Based Forward and Backward Chaining Expert Systems

Nowadays, share price forecasting is considered a vital financial problem and has received a lot of attention from financial analysts, researchers, brokers, stock users, etc. Over the last couple of years, research in the stock market field has grown to a large extent, and artificial intelligence (AI) has become a well-known and popular approach in this field. Within AI, the expert system (ES) in particular is a thought-provoking technique that seeks to mimic human abilities to solve particular problems. This study employs forward chaining- and backward chaining-based expert system inference approaches to forecast the behavior of major stock exchanges such as those of India, China, the USA, Japan, etc. Various financial indicators, such as the inflation rate, foreign direct investment (FDI), and gross domestic product (GDP), have been considered to build the expert knowledge base. Moreover, a Common LISP 3.0-based editor is used for expert system testing. Finally, the experimental results show that backward chaining performs well compared with the forward chaining approach when the number of rules and facts is large.
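
A toy forward-chaining loop of the kind the study describes: rules fire whenever all their antecedent facts hold, adding their conclusions until the fact base stops growing. The indicator facts and rules below are invented for illustration and are not the paper's knowledge base.

# Toy forward-chaining inference: rules fire when all antecedents are present,
# adding conclusions until the fact base reaches a fixed point.
RULES = [
    ({"inflation_low", "gdp_growth_high"}, "economy_strong"),
    ({"fdi_inflow_high", "economy_strong"}, "market_bullish"),
    ({"inflation_high"}, "market_bearish"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:                       # keep firing until nothing new is derived
        changed = False
        for antecedents, conclusion in RULES:
            if antecedents <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"inflation_low", "gdp_growth_high", "fdi_inflow_high"})
print(derived)   # includes "economy_strong" and "market_bullish"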

Sachin Kamley, Shailesh Jaloree, R. S. Thakur
Performance Enhancement of AODV Routing Protocol Using ANFIS Technique

The moving nodes of mobile ad hoc networks (MANETs) can wander arbitrarily and therefore form dynamic topologies. This dynamic nature of MANET characteristics affects the communication process. In a MANET’s communication process, packets may also be delayed for various reasons such as congestion, link failure, and power failure. This constant change of variables has led to the development of routing algorithms that consider delay and hop count in routing decisions. Therefore, an improvement of the AODV routing algorithm using an adaptive fuzzy logic system is proposed in this paper. In the presented algorithm, each node calculates its cost value based on the input parameters, i.e., hop count and delay. Further, in the routing decision, the optimal route is selected based on the minimum cost value. This leads to better utilization of the network in terms of packet delivery ratio and end-to-end delay. NS2.35 is used for the simulation process, and the results show that the proposed AAODV algorithm performs better than standard AODV.

Vivek Sharma, Bashir Alam, M. N. Doja
Preservation of QoS and Energy Consumption-Based Performance Metrics Routing Protocols in Wireless Sensor Networks

The selection of routing protocols plays a major role in the design and proper utilization of wireless sensor networks (WSNs) and can help provide better interconnection networks. Preserving the key performance metrics related to routing protocols is also an important and challenging task in WSNs. The lifetime of sensor nodes can be improved efficiently by using routing protocols that preserve energy consumption and quality of service (QoS). Routing protocols are designed in three categories: data-centric, hierarchical, and location-based approaches. Energy consumption can be improved by reducing the level of redundancy and switching passive motes to sleep mode. Data transmission time also affects energy conservation and QoS in WSNs. Surveillance and monitoring applications are a demanding research field in WSNs that must resolve the issue of QoS routing in terms of reliable data delivery. Energy consumption and QoS constraints are two complementary factors in routing protocols, but the time delay factor can be incorporated in such a manner as to satisfy both performance metrics in WSN routing protocols.

Ram Bhushan Agnihotri, Nitin Pandey, Shekhar Verma
Reliability Analysis for Upgraded Software with Updates

In today’s continuously fluctuating market scenario, no software comes in a single version. Competition and the requirement to survive have led firms to come up with upgraded versions of the parent software as soon as possible. Testing such software for reliability has been a cumbersome task for developers, and the task is all the more tedious when dealing with successive versions. Highly reliable software requires thorough debugging throughout the testing as well as the operational phase, and as a consequence, the role of updating (patching) implicitly comes into the picture. With patching, the overall testing period definitely increases, but it also results in enhanced usability and overall system performance. Consequently, a large number of firms employ updating strategies to gain a competitive advantage over their rivals. These updates help firms attend to any ambiguity (if present) and overcome functional issues of the software. In this paper, making use of a convolution methodology, we propose a mathematical approach for keeping a check on the reliability of upgraded software, incorporating the concept of updates. The proposed model incorporates this aspect in fault removal under multiple releases, and a procedural approach based on differing performance in the testing and operational environments is the unique aspect of the article. To supplement the results, numerical analysis has been carried out on real software failure data.

Adarsh Anand, Subhrata Das, Deepti Aggrawal, P. K. Kapur
Quantitative Software Process Improvement Program Using Lean Methodology

Organizations often observe that a process improvement journey is quite long (1–3 years) and find it difficult to demonstrate quantitative benefits until the journey is complete. At times, demonstrating quantitative benefits even after completion of a process improvement program is challenging. Practitioners also start experiencing improvements and realizing the benefits much later in the journey. Seldom are the program benefits determined and communicated in the organization, owing to a lack of methods to quantify them. These characteristics of program design inhibit process improvement programs in the organization, due to their inability to garner management and practitioner support. In this context, this paper presents our experience from quantitative process improvement programs using lean methodology. The significance of a quantitative approach while designing the program, and the integration of a quantitative approach into the process improvement methodology, are discussed. The design of such programs has resulted in demonstrated benefits, support from management, and acceptance from practitioners.

Mahesh Kuruba, Prasad Chitimalla
Selection of Optimal Software Reliability Growth Models: A Fuzzy DEA Ranking Approach

Over the last 40 years, many software reliability growth models (SRGMs) have been proposed to estimate reliability measures such as the remaining number of faults, the software failure rate, software reliability, and the release time of software. Selection of an optimal SRGM for a specific case has been an area of interest for researchers. Techniques available in the software reliability literature cannot be used with high confidence, as they do not provide a complete picture of the suitability of an SRGM for a given real data set. In this paper, we develop a ranking method for SRGMs based on a fuzzy data envelopment analysis (DEA) approach and then apply it to rank SRGMs. The first step is to convert the given set of model parameters into a linear programming problem by extending the CCR model to a fuzzy DEA model based on a credibility measure level. Since the ranking method involves a fuzzy function, a fuzzy simulation is designed and embedded into a genetic algorithm (GA) to establish the algorithm. Finally, a numerical example is given to demonstrate the applicability of the proposed fuzzy DEA ranking method on a real data set.

Vijay Kumar, V. B. Singh, Ashish Garg, Gaurav Kumar
Significance of Parallel Computation over Serial Computation Using OpenMP, MPI, and CUDA

The need for fast computers that can perform multiple tasks simultaneously in less time is increasing day by day. In serial computation, tasks are performed one by one, which takes more time; in parallel computing, various processors work simultaneously to solve a problem. Parallel computing (Segel HJ, Jamieson LH, Guest editors' introduction: parallel processing. IEEE Trans Comput C-33(11):949–951, 1984; Adams NM, Kirby SPJ, Harris P, Clegg DB, A review of parallel processing for statistical computation. Stat Comput 6(1), 1996) is the concurrent use of multiple processors to solve a single problem. A sequential problem can easily be converted into a parallel one if it contains independent sets of instructions that can be executed on different processors at the same time. For example, if a problem consists of n mutually independent steps and there are n processors, it will take O(1) time in parallel, whereas a single serial processor will take O(n) time. There are factors involved in parallel computation, such as load balancing, synchronization, and communication overhead, that can affect the overall time, so choosing the number of processors is a prominent issue. In this paper, three programming models for parallel computation are introduced, namely OpenMP, MPI, and CUDA. The paper also describes how parallel programming differs from serial programming and the necessity of parallel computation.
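
The chapter's examples use OpenMP, MPI, and CUDA; purely as an illustration of the same serial-versus-parallel contrast, the sketch below times n independent CPU-bound tasks run one by one and then across a pool of worker processes in Python. The task and worker count are arbitrary, and the achievable speedup is bounded by the core count and the overheads mentioned above.

# Serial vs. parallel illustration using a process pool instead of OpenMP/MPI/CUDA.
# Speedup is limited by the number of cores and by process startup and
# communication overhead.
import time
from multiprocessing import Pool, cpu_count

def busy_task(n: int) -> int:
    return sum(i * i for i in range(n))          # independent CPU-bound work

if __name__ == "__main__":
    jobs = [200_000] * 16                        # 16 mutually independent steps

    t0 = time.perf_counter()
    serial = [busy_task(n) for n in jobs]        # one processor, O(n) steps
    t_serial = time.perf_counter() - t0

    t0 = time.perf_counter()
    with Pool(processes=cpu_count()) as pool:    # several processors at once
        parallel = pool.map(busy_task, jobs)
    t_parallel = time.perf_counter() - t0

    assert serial == parallel
    print(f"serial {t_serial:.2f}s, parallel {t_parallel:.2f}s, "
          f"speedup x{t_serial / t_parallel:.1f}")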

Shubhangi Rastogi, Hira Zaheer
Software Release and Patching Time with Warranty Using Change Point

Software reliability growth models (SRGMs) are supportive tools for software developers and are well acknowledged by industry experts. Researchers have examined the reliability growth of software during the testing and operational phases and proposed many mathematical models for forming reliability measures of software. SRGMs have been proposed in the literature to measure the quality of software and used to find the optimal release time of software at minimum cost. It is a common observation that, as testing progresses, the fault detection and/or removal rate changes owing to many factors. The point at which the change in the fault detection rate takes place is known as the change point. Changes in the testing environment and testing strategy are some of the major reasons behind this change. Early software release increases the chance of shipping a software product with a significant number of defects, due to which the manufacturer has to bear the post-release cost of fixing the faults. On the other hand, a late release may result in loss of market opportunity and dissatisfied customers. Nowadays, releasing early and updating by providing patches in the operational phase is the trend in the software industry. Also, to provide assurance of software reliability, organizations provide a warranty on their products. During the warranty phase, organizations promise their customers either repair or replacement if a defect is encountered. Keeping all the above issues in mind, in this paper we formulate a generalized cost model to determine the optimal software release and patch times that minimize overall cost based on the change point under warranty. A numerical illustration is provided to validate the proposed cost model.

Chetna Choudhary, P. K. Kapur, A. K. Shrivastava, Sunil K. Khatri
Two-Dimensional Framework to Optimize Release Time and Warranty

Today’s customer expectations of well-developed, complex, enterprise-level software delivered in no time have left developers with a dilemma: how to attain the desired level of software reliability with a shortened testing time? In this paper, we examine a two-dimensional, testing time and effort-based model using the Cobb-Douglas production function, with a strategy of separate release and testing stop times for the software, in order to optimize the overall testing and market opportunity cost. We propose a generalized framework for software developers to achieve the multiple objectives of minimizing overall testing and market opportunity cost, optimizing the warranty length, and optimizing the release and testing stop times. We make use of software reliability growth models (SRGMs) to model the average number of bugs detected by testers (users) during the pre-release (post-release) phase of the software. A numerical illustration based on the real-life Tandem Computers data set, using the exponential distribution and including a sensitivity analysis of important parameters, is provided to validate the proposed two-dimensional cost modeling framework.

Nitin Sachdeva, P. K. Kapur, Ompal Singh
Technological Capabilities Impacting Business Intelligence Success in Organisations

Business intelligence (BI)-driven businesses have shown high performance, and BI is considered a high priority for many companies across the globe. Unfortunately, the rate of success of BI projects is very low, as per a Gartner report. This suggests that existing models fail to serve the purpose and, as reviewed from the latest available literature, there is a lack of consensus on a BI success model. The BI success model has been studied from the perspectives of technological and organisational capability. The purpose of this research is to develop an updated instrument to assess how technological capability impacts the success of business intelligence in an organisation. Having the right technological capabilities is important for an organisation to realise the maximum benefits from its BI investment. The study presents findings drawn from the latest secondary data, subsequently supported by a primary survey conducted across industries to seek the views and opinions of individuals in managerial positions who make use of BI solutions. The survey helped develop a valid and reliable instrument for further research. The findings also help clarify the definition of BI success as per current business requirements across the globe. For BI researchers, it provides a knowledge base from which they can take up further empirical research, and for project managers and BI solution developers, it serves as a guide for formulating an effective solution development strategy.

Rekha Mishra, A. K. Saini
Testing Time and Effort-Based Successive Release Modeling of a Software in the Presence of Imperfect Debugging

The role of software is expanding rapidly in every aspect of modern life. As the life of software is short, software developers adopt the strategy of releasing software in successive releases to survive in the competitive market. Thus, software upgrades and technological advancement have become a source of real value to the customer. However, upgrading software is a tedious process and makes the software complex. This complexity introduces a risk of an increase in the faults in the software. At times, the testing team may not be able to remove a fault perfectly on observing a failure, and the original fault may remain, resulting in the phenomenon of imperfect debugging. This situation arises due to improper understanding and the complex nature of the software. In this paper, we incorporate the effect of imperfect debugging to develop a testing time and effort-based software reliability growth model for successive releases of a software product. We use the well-known Cobb-Douglas production function to describe the behavior of testing time and effort consumed in the successive release problem. The faults detected in the operational phase, or left undetected during the testing of the previous release, are also incorporated in the next release. The proposed models have been validated on a real data set of four releases. The estimated parameters and comparison criteria are also given.

Vijay Kumar, P. K. Kapur, Ramita Sahni, A. K. Shrivastava
The Role of Educational ERP in the Isomorphic Development of the Newly Started Higher Education Institutions

Due to the positive trend in the demand for higher education opportunities, new higher education institutions (universities, university colleges, or colleges), whether private or public, physical or virtual, are entering the “higher education market” and promising new edges in quality higher education. The newly started higher education institutions (HEIs) face the challenge of competing with well-established legacy HEIs. In addition to the creation of innovative study programs, the key factor in gaining credibility is showing isomorphic features of those legacy institutions. The institutional development of new organizations has been thoroughly studied through institutional theory, and coercive, mimetic, and normative processes have been studied in HEIs. As most newly started HEIs use one of the well-established educational enterprise resource planning (ERP) systems available in the market, the processes embedded in the ERPs define common operational features in the HEIs. Such operational similarities lay the ground for newly started HEIs to gain recognition and acceptance and to look similar to the legacy HEIs. This paper highlights the role of educational management information systems (the educational ERP) in the institutional isomorphism processes of newly started HEIs.

Said Eid Younes
When to Start Remanufacturing Using Adopter Categorization

Remanufacturing of goods involves taking products back from customers so as to use them as feedstock for manufacturing new products. It provides the double benefit of cost saving and environmental saving, to the manufacturer and society, respectively. One of the major challenges faced by an original equipment manufacturer (OEM) here is forecasting the quantity returned by customers and deciding when to start remanufacturing. In this paper, we formulate a strategy for OEMs to optimize the time to start remanufacturing using the adopter categorization approach proposed by Rogers. We propose an optimization model based on the famous Bass diffusion model to depict not only the quantity returned in the remanufacturing process but also the cost savings a manufacturer achieves with efficient remanufacturing. Using data on 11 consumer durable products, we compare product returns at various adopter categorization time points. An application examining the diffusion of these products under a remanufacturing scenario is documented to illustrate the usefulness of the proposed strategy.
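
One way to operationalize the idea (illustrative only, not the authors' model) is to compute the Bass cumulative adoption curve and find the calendar times at which Rogers' adopter-category boundaries (2.5%, 16%, 50%, 84% of eventual adopters) are crossed; these give candidate time points for starting remanufacturing. The p, q, and m values below are made up.

# Bass diffusion curve with Rogers' adopter-category boundaries: find when each
# boundary is crossed and how many units are then in the field (potential returns).
import numpy as np

p, q, m = 0.03, 0.38, 1_000_000        # innovation, imitation, market potential
t = np.linspace(0, 20, 2001)           # years

F = (1 - np.exp(-(p + q) * t)) / (1 + (q / p) * np.exp(-(p + q) * t))

cutoffs = {"innovators": 0.025, "early adopters": 0.16,
           "early majority": 0.50, "late majority": 0.84}
for label, frac in cutoffs.items():
    t_hit = t[np.searchsorted(F, frac)]        # first time the boundary is crossed
    print(f"{label} boundary ({frac:.1%}) reached at t = {t_hit:.1f} years; "
          f"~{frac * m:,.0f} units in the field")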

Nitin Sachdeva, P. K. Kapur, Ompal Singh
Metadata
Title
Quality, IT and Business Operations
Edited by
Prof. P.K. Kapur
Prof. Dr. Uday Kumar
Prof. Ajit Kumar Verma
Copyright year
2018
Publisher
Springer Singapore
Electronic ISBN
978-981-10-5577-5
Print ISBN
978-981-10-5576-8
DOI
https://doi.org/10.1007/978-981-10-5577-5