
About this Book

This book shares key insights into system performance and management analytics, demonstrating how the field of analytics is currently changing and how it is used to monitor companies’ efforts to drive performance. Managing business performance facilitates the effective accomplishment of strategic and operational goals, and there is a clear and direct correlation between using performance management applications and improved business and organizational results. As such, performance and management analytics can yield a range of direct and indirect benefits, boost operational efficiency and unlock employees’ latent potential, while at the same time aligning services with overarching goals.

The book addresses a range of topics, including software reliability assessment, testing, quality management, system-performance management, analysis using soft-computing techniques, and management analytics. It presents a balanced, holistic approach to viewing the world from both a technical and managerial perspective by considering performance and management analytics. Accordingly, it offers a comprehensive guide to one of the most pressing issues in today’s technology-dominated world, namely, that most companies and organizations find themselves awash in a sea of data, but lack the human capital, appropriate tools and knowledge to use it to help them create a competitive edge.

Table of Contents

Frontmatter

Use of Bayesian Networks for System Reliability Assessment

Probabilistic Safety Assessment (PSA) is a technique to quantify the risk associated with complex systems such as Nuclear Power Plants (NPPs), chemical industries, the aerospace industry, etc. PSA aims at identifying the possible undesirable scenarios that could occur in a plant, along with the likelihood of their occurrence and the consequences associated with them. PSA of NPPs is generally performed through the Fault Tree (FT) and Event Tree (ET) approach. FTs are used to evaluate the unavailability or frequency of failure of various systems in the plant, especially those that are safety critical. One limitation of FTs and ETs is the assumption of constant failure/repair data for components. In addition, the dependency between component failures is handled in a very conservative manner using beta factors, alpha factors, etc. Recently, the trend has been shifting toward developing Bayesian Network (BN) models of FTs. BNs are directed acyclic graphs and work on the principles of probability theory. The paper highlights how to develop a BN from an FT, demonstrates this on the FT of the Isolation Condenser (IC) of an advanced reactor, and incorporates the system component indicator status into the BN. The indicator status acts as evidence for the basic events, thus updating their probabilities.

Vipul Garg, M. Hari Prasad, Gopika Vinod, A. RamaRao
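To make the fault-tree-to-BN idea above concrete, the sketch below (plain Python; the failure probabilities and indicator error rates are assumed, and this is not the chapter's IC model) maps a two-event OR gate to a tiny network and shows how an indicator reading acts as evidence that updates a basic-event probability, and with it the top-event probability, via Bayes' rule.

```python
# Minimal illustrative sketch (not the chapter's actual model): a fault-tree
# OR gate with two basic events A and B is viewed as a tiny Bayesian network
# A -> TOP <- B, with an indicator I attached to A. The indicator reading is
# used as evidence to update P(A); all numbers below are made up.

p_a, p_b = 0.02, 0.05          # prior failure probabilities of basic events
p_i_given_a = 0.95             # P(indicator trips | A failed)   -- assumed
p_i_given_not_a = 0.01         # P(indicator trips | A healthy)  -- assumed

def top_or(pa: float, pb: float) -> float:
    """Unavailability of an OR gate: TOP fails if A or B fails (independent)."""
    return 1.0 - (1.0 - pa) * (1.0 - pb)

# Prior top-event probability from the fault-tree structure
print("P(TOP) prior   :", round(top_or(p_a, p_b), 5))

# Evidence: the indicator on A has tripped. Update P(A) by Bayes' rule.
p_i = p_i_given_a * p_a + p_i_given_not_a * (1.0 - p_a)
p_a_post = p_i_given_a * p_a / p_i

print("P(A | I=trip)  :", round(p_a_post, 5))
print("P(TOP) updated :", round(top_or(p_a_post, p_b), 5))
```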

Predicting Code Merge Conflicts and Selecting Optimal Code Branching Strategy for Quality Improvement in Banking Sector

Code branching and merging play a critical role in enterprise software development. Branching enables parallel development by allowing several development teams to work in isolation on multiple pieces of code without impacting each other. Merging is the process of integrating the code of the different teams, which is achieved by moving code around the branches. Merging can be troublesome, as it may produce numerous merge or integration defects, also known as code merge conflicts. One of the major problems faced by practitioners is predicting the number of code merge conflicts and planning for their resolution. Another problem faced in an enterprise is selecting an appropriate code branching strategy. Selecting a suitable code branching strategy is a multi-criteria decision-making problem involving multiple criteria and alternatives. This paper proposes a hybrid approach for predicting code merge conflicts and selecting the most suitable code branching strategy. An artificial neural network (ANN) is applied in a large enterprise to predict code merge conflicts; thereafter, the analytic hierarchy process (AHP) is applied to select the most suitable code branching strategy. In total, four code branching strategies are considered in this paper. The proposed approach successfully predicts the number of code conflicts and selects Branching Set-A as the most suitable code branching strategy, with the highest priority weight of 0.287. The proposed methodology proved to be a very useful instrument for enterprises to quantitatively predict code merge conflicts and select the most suitable code branching strategy.

Viral Gupta, P. K. Kapur, Deepak Kumar, S. P. Singh
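As a rough illustration of the AHP step described above, the sketch below derives priority weights for four hypothetical branching strategies from an invented pairwise comparison matrix using the normalized-column-average approximation of the principal eigenvector; the judgments and resulting weights are illustrative only and do not reproduce the paper's 0.287 result.

```python
import numpy as np

# Illustrative AHP step only (not the paper's actual judgments): priority
# weights for four hypothetical branching strategies from a pairwise
# comparison matrix; all comparison values are invented for demonstration.
A = np.array([
    [1.0, 2.0, 3.0, 2.0],    # Set-A compared with Set-A, Set-B, Set-C, Set-D
    [1/2, 1.0, 2.0, 1.0],
    [1/3, 1/2, 1.0, 1/2],
    [1/2, 1.0, 2.0, 1.0],
])

col_sums = A.sum(axis=0)
weights = (A / col_sums).mean(axis=1)        # normalize columns, average rows

# Consistency check: lambda_max and consistency index CI = (lambda_max - n)/(n - 1)
n = A.shape[0]
lambda_max = (A @ weights / weights).mean()
ci = (lambda_max - n) / (n - 1)

for name, w in zip(["Set-A", "Set-B", "Set-C", "Set-D"], weights):
    print(f"{name}: {w:.3f}")
print("CI:", round(ci, 4))
```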

Testing the Effects of Agile and Flexible Supply Chain on the Firm Performance Through SEM

High competition and continuous, rapid changes in consumer demand push companies to find ways of differentiating themselves and gaining competitive advantage. Supply chain and logistics practices have come to be seen as core strategic tools for companies' survival. In this research, the impacts of agile and flexible supply chain practices on customer satisfaction, service quality, sales performance, and profitability are examined. The fast fashion industry was chosen as the research area. To this end, a theoretical model was developed and tested through structural equation modeling (SEM). The results reveal that companies operating agile and flexible supply chains can reap benefits in terms of service quality and customer satisfaction, and ultimately the resulting financial benefits in terms of increased sales and profits.

Mohammad Hossein Zavvar Sabegh, Aylin Caliskan, Yucel Ozturkoglu, Burak Cetiner

Analysis and Countermeasures for Security and Privacy Issues in Cloud Computing

Cloud computing has the capacity to remove the need for setting up high-cost computing infrastructure and promises a flexible architecture that is accessible from anywhere. Data in the cloud resides on an arrangement of networked resources, eliminating the requirement for costly data center infrastructure; the information is accessed via virtual machines, and these servers may be located in any part of the world. The cloud computing environment has been adopted by a large number of organizations, and this rapid transition to the cloud has fueled concerns about security. A number of risks and challenges have emerged from the use of cloud computing. The aim of this paper is to identify security issues in cloud computing, which will help both cloud service providers and users to resolve those issues. To this end, the paper assesses cloud security by recognizing security requirements and attempts to present feasible solutions that can reduce these potential threats.

Abdul Raoof Wani, Q. P. Rana, Nitin Pandey

An Efficient Approach for Web Usage Mining Using ANN Technique

Web mining covers a huge variety of applications whose objective is to find and extract concealed information from web user data. It provides an efficient and prompt mechanism for data access and enables us to extract beneficial information from users' web accesses. Earlier studies on the subject are based on a concurrent clustering approach, in which the clustering of requests affected the performance results. In this paper, we introduce the Enhanced Multilayer Perceptron (MLP) algorithm, a specialized Artificial Neural Network (ANN) technique, to detect usage patterns. The enhanced MLP technique outperforms the K-means algorithm on web log data in terms of time efficiency. The aim of the enhanced MLP technique is to improve the quality of e-commerce platforms, customize websites, and improve web structure.

Ruchi Agarwal, Supriya Saxena
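The snippet below is a minimal sketch of the classification idea, not the authors' enhanced MLP: it trains a standard scikit-learn multilayer perceptron on synthetic web-log session features (the feature names and the labelling rule are assumptions made purely for demonstration).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Sketch only: classify synthetic web-log sessions (pages per session, mean
# time per page, repeat-visit flag) into two usage-pattern classes with a
# standard multilayer perceptron; data and labels are invented.
rng = np.random.default_rng(0)
n = 600
X = np.column_stack([
    rng.poisson(6, n),            # pages viewed in session
    rng.exponential(40.0, n),     # mean seconds per page
    rng.integers(0, 2, n),        # returning-visitor flag
]).astype(float)
# Hypothetical labelling rule just to create two separable usage patterns
y = ((X[:, 0] > 5) & (X[:, 1] < 50)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)

clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
clf.fit(scaler.transform(X_train), y_train)
print("test accuracy:", clf.score(scaler.transform(X_test), y_test))
```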

Natural Language Processing Approach to Identify Analogous Data in Offline Data Repository

Huge volumes of content are contributed to online communities and social media websites through posts, comments, and blogs day in and day out. Some of this contribution is unstructured and unclassified. It is difficult to find similarities in the textual data of the posts, as they comprise a mix of structured and unstructured data. The overall objective of this paper is to help identify similar text through natural language processing techniques. The approach is demonstrated through linguistic features that point to similarity, and those features are used for the automatic identification of analogous data in an offline data repository. To demonstrate the approach, we use a collection of documents with similar text as an offline repository and a text corpus as a resource to identify analogous data. The proposed methodology processes a document against the repository, with document preprocessing performed through lexical analysis, stop word elimination, and a synonym substitution check. Java data structures are used to hold and process the data parsed from the files; syntactic analysis is carried out with the help of the WordNet™ database configured within the process. The part-of-speech (POS) and synonym check capabilities of the WordNet API are used in the process.

Nidhi Chandra, Sunil Kumar Khatri, Subhranil Som
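The chapter's implementation works in Java against the WordNet API; purely as an analogous illustration, the Python/NLTK sketch below shows the same preprocessing idea, tokenization, stop word elimination, and a WordNet synonym check used to score textual similarity (the example sentences and the simple overlap score are assumptions, not the authors' measure).

```python
# Illustrative sketch only: tokenize, drop stop words, expand with WordNet
# synonyms, and score overlap. Requires one-time NLTK downloads:
# nltk.download('punkt'); nltk.download('stopwords'); nltk.download('wordnet')
import nltk
from nltk.corpus import stopwords, wordnet

def content_words(text: str) -> set:
    """Lexical analysis + stop word elimination."""
    stop = set(stopwords.words('english'))
    return {t.lower() for t in nltk.word_tokenize(text)
            if t.isalpha() and t.lower() not in stop}

def synonyms(word: str) -> set:
    """All lemma names of all WordNet synsets of the word (synonym check)."""
    return {lemma.lower() for s in wordnet.synsets(word) for lemma in s.lemma_names()}

def similar(doc: str, query: str) -> float:
    """Fraction of query content words matching a document word or one of its synonyms."""
    d, q = content_words(doc), content_words(query)
    expanded = d | {syn for w in d for syn in synonyms(w)}
    return len(q & expanded) / max(len(q), 1)

print(similar("The car was repaired quickly", "automobile fixed fast"))
```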

Multi Release Reliability Growth Modeling for Open Source Software Under Imperfect Debugging

In recent years, Open Source Software (OSS) has gained popularity in the field of information technology. Key features such as source code availability, cost benefits, external support, reliability, and maturity have increased its use in all areas. It has been observed that interest is shifting from closed source to open source software due to the size and complexity of real-life applications. It has become impractical to develop a reliable and completely satisfactory open source software product in a single development life cycle; therefore, successive improved versions or releases are developed. These successive versions are designed to meet technological advancements, satisfy dynamic customer needs, and penetrate further into the market. However, this also gives rise to new challenges in terms of deterioration in code quality due to modifications and additions to the source code. New faults generated by add-ons, as well as faults left undetected in the previous release, can make updating the software difficult. In this paper, an NHPP-based software reliability growth model is proposed for multi-release open source software under the effect of imperfect debugging. The model assumes that the total number of faults depends on the faults generated by add-ons in the existing release and on the faults left undetected during the testing of the previous release. Data from three releases of Apache, an OSS system, have been used to estimate the parameters of the proposed model. The estimation results have been compared with a recently reported multi-release software reliability model, and the goodness-of-fit results show that the proposed model fits the data more accurately and is hence a more suitable reliability model for OSS reliability growth modeling.

Diwakar, Anu G. Aggarwal
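For readers unfamiliar with NHPP-based SRGMs, the sketch below fits the basic Goel-Okumoto mean value function m(t) = a(1 - e^(-bt)) to made-up cumulative fault counts with SciPy; it is a generic illustration of how the parameters of such a model are estimated, not the chapter's multi-release imperfect-debugging model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Generic NHPP illustration (not the chapter's model): fit the Goel-Okumoto
# mean value function to hypothetical cumulative fault counts of one release.
def mean_value(t, a, b):
    return a * (1.0 - np.exp(-b * t))

weeks  = np.arange(1, 13, dtype=float)                     # testing weeks
faults = np.array([12, 22, 30, 37, 42, 46, 50, 52, 54, 56, 57, 58],
                  dtype=float)                             # made-up data

(a_hat, b_hat), _ = curve_fit(mean_value, weeks, faults, p0=[60.0, 0.2])
print(f"estimated total faults a = {a_hat:.1f}, detection rate b = {b_hat:.3f}")
print("predicted cumulative faults at week 16:",
      round(mean_value(16.0, a_hat, b_hat), 1))
```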

Barriers to Agile Adoption: A Developer Perspective

Agile methods are among the most widely adopted methodologies for developing software. They refer to a family of lightweight methods that tend to favor working code over documentation, individuals over tools, and collaboration over negotiation. Agile methodology proves beneficial over conventional software engineering methods in terms of time and cost. However, apprehensions within the developer community toward adopting agile are an area of concern and result in barriers to complete agile adoption. In this work, we report the barriers identified through a literature survey and the results of investigating the relationships that exist between the observed barriers. This paper focuses on structural equation modeling, which utilizes different classes of modular approaches and establishes connections among identified variables, with the fundamental objective of providing a confirmatory test of a hypothetical model. Our work demonstrates a path model through analysis of the identified barriers faced by developers during agile adoption.

Noshiba Nazir, Nitasha Hasteer, Rana Majumdar

Toward Analysis of Requirement Prioritization Based Regression Testing Techniques

Regression testing aims to validate the quality of successive software versions along with the existing functionality. New functionality, change requests, and the implementation of delayed requirements lead to changes in the source code, and it is possible that existing functionality may malfunction as a result of such changes. Various regression testing approaches have been proposed in the literature, and this paper analyzes the state of the art of requirement-priority-based regression testing approaches. A few requirement-based approaches are identified from the literature and analyzed for their differences in functionality and other parameters that determine their applicability for regression testing. The results indicate that the existing techniques employ different parameters (with requirement priority as one of them), need validation on large datasets, and that the applicability of a particular technique to given circumstances is still uncertain. There is a lack of consensus to help the software tester decide which technique is better for a given scenario.

Varun Gupta, Yatin Vij, Chetna Gupta

Formulation of Error Generation-Based SRGMs Under the Influence of Irregular Fluctuations

Reliability growth models for software have been widely studied in the literature. Many schemes (such as the hazard rate function, queuing theory, and random lag functions) have been proposed and utilized for modeling the fault removal phenomenon. Among these, the hazard rate function technique has gained significant attention and has been used extensively for modeling the debugging process. An essential aspect of modeling pertains to reliability estimation in an environment of irregular fluctuations. Another major domain highlighted in Software Reliability Engineering (SRE) is that of error generation, which remains an important area of research. This article shows how, using the hazard rate function approach, the error generation concept can be studied in a fluctuating environment. The utility of the proposed framework is emphasized through several models pertaining to different conditions. The applicability of our proposed models, together with comparisons in terms of goodness-of-fit and predictive validity, is presented using known software failure data sets.

Adarsh Anand, Deepika, Ompal Singh
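For orientation, the standard hazard-rate-based fault removal relations underlying this class of models are sketched below in generic textbook form; the chapter's specific irregular-fluctuation and error-generation formulations are not reproduced here.

```latex
% Generic hazard-rate SRGM relations (standard forms, not the chapter's
% fluctuation-specific models). The fault removal intensity equals the hazard
% rate of the detection-time distribution times the remaining fault content:
\frac{dm(t)}{dt} = \frac{f(t)}{1 - F(t)}\,\bigl[a(t) - m(t)\bigr]
% With a constant hazard rate b and error generation a(t) = a + \alpha\, m(t):
m(t) = \frac{a}{1-\alpha}\Bigl(1 - e^{-b(1-\alpha)t}\Bigr), \qquad 0 \le \alpha < 1
```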

Decision Aspect Prioritization Technique for Incremental Software

In incremental software development, software is delivered incrementally, with each increment implementing some agreed high-priority requirements. The priority of a requirement is decided by considering different aspects. A new technique is proposed for the prioritization of these decision aspects. The proposed technique prioritizes the decision aspects using historical data, thereby reducing the time taken in prioritization; the prioritization of decision aspects is carried out by the stakeholders. The technique aims to enhance the software success rate through optimal selection of decision aspects for prioritizing software requirements in an efficient, time-saving manner.

Varun Gupta, Siddharth Sethi, Chetna Gupta

Reliability Growth Analysis for Multi-release Open Source Software Systems with Change Point

Open source software (OSS) has now become an essential part of business for a huge segment of developers seeking to enhance their public visibility. Many open source communities continuously upgrade their software through a series of releases to improve its quality and efficiency. In this paper, a general framework is presented to model the fault removal process (FRP) for multiple releases of OSS using the concept of a change point on a discrete probability distribution. To validate the model, we have chosen two successful open source projects, Mozilla and Apache, and their multi-release failure data sets. Graphs representing the goodness of fit of the proposed model have been drawn. The parameter estimates and goodness-of-fit measures suggest that the proposed SRGM for multi-release OSS fits the actual data sets very well.

Anu G. Aggarwal, Vikas Dhaka, Nidhi Nijhawan, Abhishek Tandon

Improvisation of Reusability Oriented Software Testing

The study examines the factors that are responsible for software testing and determines the extent of reusability on the basis of test outcomes. It deals with improving and promoting reusability practices and provides a method to improve such practices. A case study was conducted in some leading organizations on the reusability practices involved in developing new software, taking into consideration the test cases generated. According to the results, the factors that most strongly shape the software testing process are cost and time, which play a decisive role in the development of software. It is also necessary to focus on test process definition and testing automation, along with the testing tools.

Varun Gupta, Akshay Mehta, Chetna Gupta

Water Treatment System Performance Evaluation Under Maintenance Policies

The main aim of this work is to analyze the performance of a water treatment plant (WTP) and to find which subpart or subparts of the plant affect it. The problems that generally occur in a WTP stem from poor maintenance and from the materials used in manufacturing its subparts. Such problems could be prevented if safety measures and maintenance techniques were followed properly and regularly. Within the WTP, the pump plays a very important role in supplying water to the different components so that the other machines can perform their functions as well. Along with pumps, valves also need regular maintenance for better performance, and the other components have their own significance. Beyond this, the WTP has various components that need maintenance and replacement over different spans of time.

Anshumaan Saxena, Sourabh Kumar Singh, Amit Kumar, Mangey Ram

Prediction of El-Nino Year and Performance Analysis on the Calculated Correlation Coefficients

El-Nino is a meteorological/oceanographic phenomenon that occurs at irregular intervals of time (every few years) at low latitudes. El-Nino is related to an annually recurring weak warm ocean current that runs southward along the coast of Peru and Ecuador around Christmastime. It is characterized by unusually large warming that occurs every few years and changes the local and regional ecology. El-Nino has been linked to climate change anomalies such as global warming. The data for this work have been taken from websites, mainly for India (Becker in Impacts of El-Nino and La Niña on the hurricane season, 2014 [1]; Hansen et al. in GISS surface temperature analysis (GISTEMP), NASA Goddard Institute for Space Studies, 2017 [2]; Cook in Pacific Marine Environmental Laboratory, National Oceanic and Atmospheric Administration, 1999 [3]; Climate Prediction Center—Monitoring & Data [4]; Romm in Climate Deniers’ favorite temperature dataset just confirmed global warming, 2016 [5]; World Bank Group, 2017 [6]; National Center for Atmospheric Research Staff (Eds) in The climate data guide: global temperature data sets: overview & comparison table, 2014 [7]; Global Climate Change Data, 1750–2015 [8]). The data have been preprocessed using imputation, F-measure, and maximum likelihood missing value methods. Finally, a prediction of the time of occurrence of the next El-Nino year has been made using a multiple linear regression algorithm, and a comparative analysis has been performed on the three approaches used. The work also calculates Karl Pearson’s correlation coefficient between global warming and temperature change, between temperature change and El-Nino, and finally between global warming and El-Nino. Performance analysis has been carried out on the calculated correlation coefficients.

Malsa Nitima, Gautam Jyoti, Bairagee Nisha
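As a small worked illustration of the statistical tools named above, the sketch below computes Karl Pearson's correlation coefficient and fits a multiple linear regression by ordinary least squares on invented climate-like series; the numbers are not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative sketch with made-up series (not the study's data): Pearson's r
# between two indicators, then a small multiple linear regression by OLS.
rng = np.random.default_rng(1)
years        = np.arange(1980, 2016)
temp_anomaly = 0.02 * (years - 1980) + rng.normal(0, 0.05, years.size)   # hypothetical
nino_index   = 0.8 * temp_anomaly + rng.normal(0, 0.10, years.size)      # hypothetical

r, p_value = pearsonr(temp_anomaly, nino_index)
print(f"Pearson r = {r:.3f} (p = {p_value:.3g})")

# Multiple linear regression: nino_index ~ intercept + temp_anomaly + year
X = np.column_stack([np.ones(years.size), temp_anomaly, years - 1980])
coef, *_ = np.linalg.lstsq(X, nino_index, rcond=None)
print("intercept, temp coefficient, year coefficient:", np.round(coef, 3))
```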

Performance of Static Spatial Topologies in Fine-Grained QEA on a P-PEAKS Problem Instance

Population-based meta-heuristics can adopt different population models and neighborhood topologies, which have a significant influence on their performance. Quantum-inspired evolutionary algorithms (QEA) often use a coarse-grained population model and have been successful in solving difficult search and optimization problems. However, it was recently shown that the performance of QEA can be improved by changing its population model and neighborhood topologies. This paper investigates the effect of static spatial topologies on the performance of QEA with a fine-grained population model on a well-known benchmark problem generator known as P-PEAKS.

Nija Mani, Gur Saran, Ashish Mani

Android Malware Detection Using Code Graphs

The amount of Android malware is increasing every year along with the growing popularity of the Android platform. Hence, the detection and analysis of Android malware have become a critical topic in the area of computer security. This paper proposes a novel method of detecting Android malware that uses the semantics of the code in the form of code graphs extracted from Android apps. These code graphs are then used to classify Android apps as benign or malicious by using the Jaccard index of the code graphs as a similarity metric. We have also evaluated the code graphs of real-world Android apps by using the k-NN classifier with Jaccard distance as the distance metric for classification. The results of our experiments show that code graphs of Android apps can be used effectively to detect Android malware with the k-NN classifier, giving a high accuracy of 98%.

Shikha Badhani, Sunil Kumar Muttoo
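A minimal sketch of the classification step, assuming the code graphs have already been reduced to edge sets (the toy API-call edges and labels below are invented): Jaccard distance between edge sets drives a simple k-NN vote.

```python
from collections import Counter

# Sketch of the idea only (not the chapter's code-graph extraction): each app's
# code graph is represented by its set of edges; a new app is classified by
# k-NN with Jaccard distance. The tiny "graphs" below are invented.
def jaccard_distance(a: set, b: set) -> float:
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

training = [
    ({("onCreate", "sendTextMessage"), ("onCreate", "getDeviceId")}, "malicious"),
    ({("onCreate", "getDeviceId"), ("getDeviceId", "openConnection")}, "malicious"),
    ({("onCreate", "setContentView"), ("onClick", "startActivity")}, "benign"),
    ({("onCreate", "setContentView"), ("onResume", "registerListener")}, "benign"),
]

def knn_classify(graph: set, k: int = 3) -> str:
    nearest = sorted(training, key=lambda item: jaccard_distance(graph, item[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

sample = {("onCreate", "getDeviceId"), ("onCreate", "sendTextMessage")}
print(knn_classify(sample))   # "malicious" for this toy data
```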

Security in ZigBee Using Steganography for IoT Communications

ZigBee is an IEEE 802.15.4-based specification for a suite of high-level communication protocols used to create personal area networks. It is a low-cost, low-complexity, and low-power wireless personal area network (WPAN) standard that targets the extensive deployment of instruments and devices with prolonged battery life employed in wireless control or monitoring applications. It is used extensively in industry and in physically conducted operations; hence, ZigBee is closely associated with IoT and M2M. Security in these WPANs has therefore become a major concern and has recently gained considerable attention. The security methods used in these networks have usually been cryptographic in nature, yet the practices recommended to date leave considerable room for improvement in order to arrive at more assured and protected data communication. This chapter proposes a technique to enhance security in ZigBee by applying steganography to the secret data exchanged between communicating parties. In cryptographic practice, even a strongly protected message can arouse suspicion and may be enough to signal to a third party spying on the system that something of significant value is being exchanged. Hence, to keep the security features of these networks consistent, this chapter proposes protecting the data to be communicated by means of steganography, which allows two parties to transmit a covert message over a shared channel in such a way that no adversary can even discover that covert communication is taking place. Cryptographic practices protect only the contents of the messages, whereas steganographic practices can protect both the message contents and the very fact that a covert message has been transferred. The ultimate goal is to devise a steganographic technique that is resistant to any form of steganalysis.

Iqra Hussain, Mukesh Chandra Negi, Nitin Pandey
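The abstract does not spell out the chapter's embedding scheme; purely as a generic illustration of steganography over payload data, the sketch below hides and recovers a short secret in the least-significant bits of a byte buffer standing in for a ZigBee payload.

```python
# Generic least-significant-bit (LSB) embedding sketch to illustrate hiding a
# covert message inside carrier data; this is NOT the chapter's ZigBee scheme,
# and the carrier is just a byte buffer standing in for payload data.
def embed(carrier: bytearray, secret: bytes) -> bytearray:
    bits = [(byte >> i) & 1 for byte in secret for i in range(8)]
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for secret")
    out = bytearray(carrier)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit      # overwrite the LSB only
    return out

def extract(carrier: bytes, n_bytes: int) -> bytes:
    bits = [b & 1 for b in carrier[: n_bytes * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(n_bytes)
    )

cover = bytearray(range(64))                    # stand-in for a sensor payload
stego = embed(cover, b"key")
print(extract(stego, 3))                        # b'key'
```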

Implementation of Six Sigma Methodology in Syrian Pharmaceutical Companies

The increased competition in the global pharmaceutical market and the necessity of reaching higher levels of quality in pharmaceutical products force manufacturers to seek and adopt more effective and reliable quality management methods and techniques that allow them to introduce products of the highest possible quality at reduced quality costs, while maintaining conformance to the pharmaceutical GMPs and to technical and legislative requirements. One popular modern quality management methodology is Six Sigma, which has proved its ability to increase business profits and competitiveness over more than 30 years of implementation in the manufacturing and service sectors. Recently, the Six Sigma methodology has been adopted by global pharmaceutical companies such as Baxter, Eli Lilly, Johnson & Johnson, and Novartis, which have obtained considerable benefits from it. This research aims to investigate the possibility of implementing the Six Sigma methodology in Syrian pharmaceutical companies and to find out what benefits a pharmaceutical company can gain through its implementation. We conducted a case study in a pharmaceutical company in Syria (Orient Pharma) in order to examine the effectiveness and advantages of the Six Sigma methodology. For this purpose, a quality improvement project was conducted using the DMAIC roadmap to enhance the quality of one of the main products of the company. The results of the DMAIC project showed enhanced process capability, an improved process Sigma level, and decreased variability in the process outputs. The main difficulties observed during the study were resistance to change, lack of training, lack of necessary resources, and the attitude toward quality in the company. In conclusion, considerable benefits can be obtained by implementing the Six Sigma methodology in Syrian pharmaceutical companies.

Yury Klochkov, Bacel Mikhael Alasas, Adarsh Anand, Ljubisa Papic

Developing Plans for QFD-Based Quality Enhancement

Quality Function Deployment (QFD) is a methodology for transforming customers’ wishes into quality requirements for a product, service, or process. The QFD methodology was originally developed by Japanese researchers, who designed the approach for transforming customers’ wishes (real or supposed) into detailed product characteristics using special matrices. QFD provides a better understanding of customers’ expectations in the process of designing and developing products, services, and processes and helps to take real or supposed customer requirements into account. The House of Quality is used to show the relationship between customers’ requirements and product characteristics. Product characteristics are realized using appropriate technological operations and equipment. If we know the methods for quality assessment of a separate operation (Cp indices, control charts, etc.), we can complete the House of Quality with the results of technological equipment analysis. Such data integration allows a comprehensive solution to the problem of improving product competitiveness. Using quality assessment methods for technological equipment, we acquire knowledge about the defect probability at each separate production stage. Quality Function Deployment and the integration of the mentioned results (number of defects, process stability) make it possible to assess each product characteristic with regard to both its importance for the customer and the technical possibility of implementing it.

Dmitriy Aydarov, Yury Klochkov, Natalia Ushanova, Elena Frolova, Maria Ostapenko
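As a small example of one quality-assessment input mentioned above, the snippet below computes the process capability index Cp = (USL − LSL) / (6σ) for a single operation from hypothetical measurements and assumed specification limits.

```python
import numpy as np

# Illustration only: process capability index Cp for one operation, computed
# from hypothetical measurements and assumed specification limits.
usl, lsl = 10.5, 9.5                                   # assumed spec limits
measurements = np.array([9.9, 10.1, 10.0, 9.95, 10.05, 10.02, 9.98, 10.03])

sigma = measurements.std(ddof=1)                       # sample standard deviation
cp = (usl - lsl) / (6 * sigma)
print(f"Cp = {cp:.2f}")                                # Cp >= 1.33 is a common target
```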

Modified Counting Sort

There are various sorting methods in the literature that are sequential in nature and have linear time complexity, but these methods are not preferred in certain cases due to their large memory requirements. Counting sort is one method that lies in this domain. In this chapter, we suggest an improvement on counting sort that reduces its memory requirement significantly. We have tested the modified counting sort on numerous data sets, and the results obtained from these experiments are very satisfactory. The results show that the memory requirement is reduced by at least 50% compared to traditional counting sort. This opens up the opportunity to use the modified version in many sorting applications.

Ravin Kumar
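The abstract does not spell out the modification, so the sketch below shows one common memory-saving variant purely as an illustration of the idea (the chapter's exact technique may differ): sizing the count array to the occupied value range (max − min + 1) rather than 0..max.

```python
# One common memory-saving variant of counting sort, shown as an illustration
# (the chapter's exact modification may differ): size the count array to the
# value range max-min+1, so keys like [1000, 1003, 1001] need a 4-slot count
# array rather than a 1004-slot one.
def counting_sort_compact(data):
    if not data:
        return []
    lo, hi = min(data), max(data)
    counts = [0] * (hi - lo + 1)          # only the occupied value range
    for x in data:
        counts[x - lo] += 1
    out = []
    for offset, c in enumerate(counts):
        out.extend([lo + offset] * c)
    return out

print(counting_sort_compact([1003, 1000, 1001, 1003, 1002, 1000]))
# [1000, 1000, 1001, 1002, 1003, 1003]
```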

Analysis of Existing Clustering Algorithms for Wireless Sensor Networks

With recent advancements in MEMS technology, researchers in academia as well as in industry have shown immense interest in Wireless Sensor Networks (WSNs) over the past decade. WSNs are networks composed of uniformly or randomly distributed autonomous low-cost nodes used for reliable monitoring of environmental parameters. These resource-constrained sensor nodes work in a synergetic manner to perform a sensing process. Wireless Sensor Networks play a significant role in areas such as habitat monitoring, health monitoring, intelligent and adaptive traffic management, military surveillance, target tracking, aircraft control, forest fire detection, air pollution monitoring, etc. These networks face some critical energy challenges during data aggregation, node deployment, localization, and clustering. This chapter presents an analysis of the different clustering algorithms proposed so far to lengthen the network lifetime and increase network scalability.

Richa Sharma, Vasudha Vashisht, Ajay Vikram Singh, Sushil Kumar

Process Mining for Maintenance Decision Support

In carrying out maintenance actions, several processes run simultaneously among different assets, stakeholders, and resources. Due to the complexity of the maintenance process in general, there will be several bottlenecks in carrying out actions, leading to reduced maintenance efficiency, unnecessary costs, and hindrance to operations. One emerging approach to solving these issues is the use of process mining tools and models. Process mining is gaining significance for solving process-related problems such as classification, clustering, process discovery, bottleneck prediction, process workflow development, etc. The main objective of this paper is to utilize the concept of process mining to map and comprehend a set of maintenance reports, mainly repairs and replacements, from some lines of the Swedish railway network. To attain this objective, the reports were processed to extract time-related maintenance parameters such as administrative, logistic, and repair times. Bottlenecks are identified in the maintenance process, and this information will be useful for maintenance service providers, infrastructure managers, asset owners, and other stakeholders in improving maintenance effectiveness.

Adithya Thaduri, Stephen Mayowa Famurewa, Ajit Kumar Verma, Uday Kumar

Software Release Time Problem Revisited

With technological advancements in the Information Technology (IT) world, Software Reliability Growth Models (SRGMs) have been used extensively by both researchers and practitioners. To withstand the challenges posed by this exponential growth in the IT sector, researchers have emphasized the need to obtain the optimal software release time by optimizing the overall testing cost. In this chapter, the authors suggest a novel approach to optimizing release time that considers the costs of fault detection and fault correction as distinct costs and treats them separately in the cost modeling framework. We develop testing-effort-dependent SRGMs in a unified framework and validate the proposed cost model on real-life data.

Nitin Sachdeva, P. K. Kapur, A. K. Shrivastava
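For orientation, a classical release-time cost structure from the SRGM literature is shown below in generic form; the chapter refines this kind of model by treating detection and correction costs as distinct, which is not reproduced here.

```latex
% Classical release-time cost structure (a baseline form, not the chapter's
% refined model, which separates detection and correction costs):
E[C(T)] = C_1\, m(T) + C_2\,\bigl[m(T_{lc}) - m(T)\bigr] + C_3\, T
% C_1: cost per fault removed during testing, C_2: cost per fault removed in
% the field (C_2 > C_1), C_3: testing cost per unit time, m(T): expected
% faults removed by time T, T_{lc}: software life-cycle length.
% The optimal release time T^* minimizes E[C(T)].
```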

Diffusion Modeling Framework for Adoption of Competitive Brands

In order to survive in today’s competitive market, every brand/company is altering and refining its offerings at a fast pace. The market thus sees a variety of products available almost at the same time. Among all the major aspects, firms need to look at how customers respond to products that look similar, are equally priced, and even have similar features. To capture this understanding, the present proposal deals with the concept of brand preference. The objective of our modeling framework is to observe the shifting behavior of customers and to predict the sales level in the presence of various brands available together. Today’s market provides customers with multiple options to choose from; taking this into account, the current study identifies the possible variations that might impact the overall sales of a particular product due to inter- and intra-brand shifting of customers among the various brands available at the time of purchase. Validation of the model has been carried out on real-life car sales data from the automobile industry.

Adarsh Anand, Gunjan Bansal, Arushi Singh Rawat, P. K. Kapur

Two-Dimensional Vulnerability Patching Model

In this paper, we develop vulnerability patching models based on the nonhomogeneous Poisson process (NHPP) with different dimensions. First, we assume that the patching of discovered vulnerabilities can also cause the patching of some additional vulnerabilities without causing any patch failure. This model is known as the one-dimensional vulnerability patching model (1D-VPM), as it depends only on the time at which the vulnerabilities are patched. We then extend it by introducing the number of software users as a new dimension of the software patching resources. In this two-dimensional model, we assume that the effort spent by users in installing the patches plays a major role in remediating software vulnerabilities: it does not matter how quickly the vendor releases a patch unless the users install it correctly. Hence, we develop a two-dimensional vulnerability patching model (2D-VPM) with patching time and software users as its two dimensions. The Cobb–Douglas production function is used to construct the two-dimensional patching model. An empirical study is performed on vulnerability patching data (for Windows 8.1) to validate and compare the proposed models.

Yogita Kansal, P. K. Kapur
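The Cobb–Douglas combination mentioned above typically takes the following generic form in two-dimensional SRGM-type models; the chapter's exact parameterization is not reproduced here.

```latex
% Generic Cobb-Douglas combination used in two-dimensional SRGM-type models
% (illustrative form only): patching time t and number of software users u
% are merged into a single effort variable
\tau = t^{\alpha}\, u^{\,1-\alpha}, \qquad 0 \le \alpha \le 1
% and the expected number of patched vulnerabilities is modeled as a mean
% value function of the combined effort, e.g. m(\tau) = a\bigl(1 - e^{-b\tau}\bigr).
```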

A Hybrid Intuitionistic Fuzzy and Entropy Weight Based Multi-Criteria Decision Model with TOPSIS

Decision-makers are constantly faced with the challenge of selecting the right technology for their IT needs, given the availability of multiple advanced technologies in the market and the consequences of a wrong selection. Intuitionistic Fuzzy Sets (IFSs) have demonstrated effectiveness in dealing with the vagueness and hesitancy inherent in such decision-making. In this paper, we propose a hybrid IFS and entropy weight based Multi-Criteria Decision Model (MCDM) with the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) method. The model helps measure the exactness and vagueness of each alternative over several criteria. An Intuitionistic Fuzzy Weighted Approach (IFWA) operator is employed for aggregating individual decision-makers’ opinions regarding each alternative over every criterion. Additionally, Shannon’s entropy method is used to measure the criteria weights separately. We apply the proposed model to the selection of a cloud solution for managing big data projects.

Nitin Sachdeva, P. K. Kapur
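The entropy-weighting step can be sketched as below on an invented decision matrix of cloud alternatives and criteria; the scores and criterion names are assumptions, and the intuitionistic fuzzy aggregation itself is not shown.

```python
import numpy as np

# Sketch of the Shannon entropy-weighting step only (scores are invented):
# criteria whose scores vary more across alternatives receive more weight.
scores = np.array([          # rows: cloud alternatives, columns: criteria
    [7.0, 5.0, 9.0],         # e.g. cost, scalability, security (hypothetical)
    [6.0, 8.0, 7.0],
    [9.0, 6.0, 6.0],
    [5.0, 7.0, 8.0],
])

p = scores / scores.sum(axis=0)                       # normalize each criterion column
k = 1.0 / np.log(scores.shape[0])                     # 1 / ln(number of alternatives)
entropy = -k * (p * np.log(p)).sum(axis=0)
divergence = 1.0 - entropy                            # degree of diversification
weights = divergence / divergence.sum()

print("criterion weights:", np.round(weights, 3))
```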

The Role of Website Personality and Website User Engagement on Individual’s Purchase Intention

Purpose: The paper aims to understand the effect of website personality and website user engagement on an individual’s purchase intention in an online purchase environment. Design/Approach/Research Methodology: A total of 221 valid online questionnaires were utilized to empirically test the research model using a multiple regression approach. The study sample comprises online shoppers who shop via the Internet. Findings: The study deduced that website personality and website user engagement do affect an individual’s purchase intention, although the interaction differs for each of the following personality-trait dimensions: solidity, enthusiasm, genuineness, sophistication, and unpleasantness. Originality/Value: The study was conducted in the National Capital Region of India. Very little research has been conducted in the region in which website personality has been treated as an important dimension for understanding its role in purchase probability. Furthermore, other studies have not included the interaction of website personality and user engagement on purchase intention.

Kokil Jain, Devnika Yadav

Impact of Social Media on Society—Analysis and Interpretation

Over the ages, forms of media and technology have undergone drastic modification, reflecting changes in time, necessities, technological upgrades, affordability, availability, etc. Media aids in disseminating information, sensitizing people, and instructing them. Social media is considered the next groundbreaking upheaval in the field of human communication. This research paper studies the influence of social media on the habits and conduct of the community/public. The study examines the significance of social media in the lives of various sections of the populace, the causes of advancements in social media, the professional prospects made accessible by this progression, etc. Social media has transformed the standards of understanding, learning, interaction, and media habits and usage for adults as well as teenagers. The utilization of social media by businesses and working professionals for advertising, communicating, and networking is also emphasized in this paper. The prospects of the latest communication trends have sown their seeds in the form of the current social media disruption. With this paper, we aim to understand and decode the shifting forms of communication brought about by the usage of social media.

Gurinder Singh, Loveleen Gaur, Kumari Anshu

Conceptual Framework of How Rewards Facilitate Business Operations in Knowledge-Intensive Organizations

Knowledge workers contribute to the operations of their employers through their skills and expertise in solving complex business problems. This makes them indispensable and crucial to the success of such firms. Following the framework developed by Tsui et al. (Academy of Management 40(5):1089–1121, [12]), this paper addresses the debate regarding the effectiveness of intrinsic and extrinsic rewards in providing a climate of satisfaction in knowledge-intensive organizations. It develops propositions about how the perception of knowledge workers changes in response to changes in the intrinsic and extrinsic components of rewards. Keeping employment relationships as the backdrop and taking the inherent work-related needs of knowledge workers into consideration, the paper makes propositions regarding the possible impact of changes in intrinsic and extrinsic rewards on ‘α’, the perceived value of employment in unbalanced relationships. Knowledge workers are known to be driven more by the characteristics of their work than by the extrinsic aspects of rewards. Therefore, the paper suggests that the share of intrinsic rewards must be greater than or at least equal to that of extrinsic rewards. This paper adds to the long-drawn debate over the effectiveness of intrinsic and extrinsic rewards. It draws attention to the need for organizations to focus on the existing employment relationship and the work-related needs of knowledge workers while taking reward decisions.

Shweta Shrivastava, Shikha Kapoor

Tunnel QRA: Present and Future Perspectives

With the vision of faster inland transportation of people and goods, long tunnels with increasing engineering complexity are being designed, constructed, and operated. Such complexity arises from the terrain (networks of small tunnels) and the requirement of multiple entries and exits (traffic networks leading to non-homogeneous behaviour). The increased complexity of such tunnels poses unique challenges for performing QRA, compounded by the fact that only a handful of experiments have been performed in real tunnels, as they are costly and dangerous. A combined approach based on CFD modelling of scaled-down tunnels could be a relatively less resource-intensive solution, albeit one associated with increased uncertainties due to the introduction of scaling multiplication factors. Further, with the advent of smart system designs and cheap computation, a smart tunnel that manages its own traffic of both dangerous-goods carriers and other passenger vehicles based on a continuously updated dynamic risk estimate is not far from reality.

Jajati K. Jena, Ajit Kumar Verma, Uday Kumar, Srividya Ajit

Software Vulnerability Prioritization: A Comparative Study Using TOPSIS and VIKOR Techniques

The ever-mounting presence of security vulnerabilities in software is an inevitable challenge for organizations. Additionally, developers have to operate within limited budgets while meeting deadlines, so they need to prioritize their vulnerability responses. In this paper, we propose an approach for vulnerability response prioritization based on “closeness to the ideal”. We use the TOPSIS and VIKOR methods in this study; both employ an aggregating function to rank the desired alternatives. The VIKOR method determines a compromise solution on the basis of a measure of closeness to a single ideal solution, while the TOPSIS method determines a feasible solution by taking into account the shortest distance from the positive ideal solution and the greatest distance from the negative ideal solution. The two methods share some significant similarities and differences. A comparative analysis of the two methods is performed by applying them to real-life software vulnerability datasets to achieve vulnerability prioritization.

Ruchi Sharma, Ritu Sibal, Sangeeta Sabharwal
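To illustrate the "closeness to the ideal" mechanics, the sketch below runs plain TOPSIS on an invented, all-benefit vulnerability decision matrix with assumed weights; the chapter's actual criteria, data, and the VIKOR counterpart are not reproduced.

```python
import numpy as np

# Minimal TOPSIS sketch on an invented decision matrix (rows: vulnerabilities,
# columns: benefit-type criteria, e.g. severity, exploitability, asset impact);
# weights and scores are assumptions made for illustration.
X = np.array([
    [9.0, 7.0, 8.0],
    [6.0, 9.0, 5.0],
    [8.0, 6.0, 9.0],
    [5.0, 5.0, 6.0],
])
w = np.array([0.5, 0.3, 0.2])                 # assumed criteria weights

R = X / np.sqrt((X ** 2).sum(axis=0))         # vector normalization
V = R * w                                     # weighted normalized matrix
v_pos, v_neg = V.max(axis=0), V.min(axis=0)   # ideal and anti-ideal solutions

d_pos = np.sqrt(((V - v_pos) ** 2).sum(axis=1))
d_neg = np.sqrt(((V - v_neg) ** 2).sum(axis=1))
closeness = d_neg / (d_pos + d_neg)           # relative closeness to the ideal

ranking = np.argsort(-closeness) + 1          # 1-based vulnerability indices
print("closeness:", np.round(closeness, 3))
print("priority order (most urgent first):", ranking)
```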