
About this Book

This book considers all aspects of performability engineering, providing a holistic view of the activities associated with a product throughout its entire life cycle, as well as the cost of minimizing the environmental impact at each stage while maximizing performance. Building on the editor's previous Handbook of Performability Engineering, it explains how performability engineering provides a framework for considering both dependability and sustainability in the optimal design of products, systems and services, and explores the role of performability in energy and waste minimization, raw material selection, increased production volume, and many other areas of engineering and production.

The book discusses a range of new ideas, concepts, disciplines, and applications in performability, including smart manufacturing and Industry 4.0; cyber-physical systems and artificial intelligence; digital transformation of railways; and asset management.

Given its broad scope, it will appeal to researchers, academics, industrial practitioners and postgraduate students involved in manufacturing, engineering, and system and product development.

Table of Contents

Chapter 1. Assessment of Sustainability is Essential for Performability Evaluation

Performability of a product, system or service has been defined by this author (Misra in Inaugural Editorial of International Journal of Performability Engineering 1:1–3, 2005 [1] and Misra in Handbook of Performability Engineering, Springer, London, 2008 [2]) as an attribute of holistic performance reckoned over its entire life cycle, ensuring not only high dependability (quality, reliability, maintainability and safety) but also sustainability. Sustainability is a characteristic specific to a product, system or service, and a dependable product, system or service may not be sustainable. It must also be pointed out that without dependability, sustainability is meaningless. Therefore, both the dependability and sustainability attributes must be evaluated in order to evaluate the performability of a product. All attributes in the definition of performability have been defined and can be computed except sustainability; to evaluate performability, it is therefore essential to define and compute sustainability. For developing sustainable products, systems and services in the twenty-first century, we must be able to define and quantify sustainability precisely, since one cannot improve what cannot be measured or assessed. The objective of the present chapter is to understand the implications of sustainability in order to facilitate the computation of sustainability and thereby of performability. The purpose of the 13 chapters in the Handbook (Misra in Handbook of Performability Engineering, Springer, London, 2008 [2]) by the author was to provide a detailed introduction to each constituent element of the definition of performability, namely quality, reliability, maintainability, safety and sustainability, and these chapters were very well received by the international academic community, as is evident from Table 1.1.
This was done with the intent of evoking interest among researchers across the world in the concept of performability, leading to a way to compute or assess it. However, this has not happened in the 12 years since the Handbook's publication in 2008. The main impediment to this effort is the lack of a procedure for evaluating sustainability.

Krishna B. Misra

Chapter 2. Performability Considerations for Next-Generation Manufacturing Systems

Globally, the manufacturing industry is gearing up for the next industrial revolution, called smart manufacturing or Industry 4.0. This chapter discusses various aspects of performability for next-generation manufacturing systems. “Intelligence” is identified as an essential dimension of performability for such systems. Various elements of this new dimension are discussed, and the associated technologies are mapped. New business models that utilize the performability of next-generation manufacturing systems are presented. Finally, a new philosophy, “Manufacturing by Mass,” is introduced to capitalize on the full potential of intelligent factories.

Chapter 3. Functional Safety and Cybersecurity Analysis and Management in Smart Manufacturing Systems

This chapter addresses some of the issues of integrated functional safety and cybersecurity analysis and management with regard to selected references and the functional safety standards IEC 61508, IEC 61511, ISO 13849-1 and IEC 62061, and the cybersecurity standard IEC 62443, which concerns industrial automation and control systems. The objective is to mitigate the vulnerability of industrial systems that include information technology (IT) and operational technology (OT) in order to reduce relevant risks. An approach is proposed for verifying the performance level (PL) or the safety integrity level (SIL) of a defined safety function, and then checking the level obtained while taking into account the security assurance level (SAL) of the particular domain, for example, a safety-related control system (SRCS), in which the given safety function is to be implemented. The SAL is determined based on a vector of fundamental requirements (FRs). The method uses defined risk graphs for the individual and/or the societal risk, and relevant risk criteria, for determining the required performance level PLr or the claimed safety integrity level SIL CL, and probabilistic models to verify the PL/SIL achievable for the architecture of the SRCS considered.

Kazimierz T. Kosmowski

Chapter 4. Extending the Conceptualization of Performability with Cultural Sustainability: The Case of Social Robotics

A more comprehensive conceptualization of performability, beyond pure economic, technological, and environmental performance, is needed. Adopting and using a technological innovation in its socio-cultural context is likely to have performative impacts well beyond techno-economic and environmental conditions. Examples discussed in this chapter include changes in human and social behavior following the adoption of social robotics. Reviewing recent developments in social robotics and the adoption of this technology in professional activities, this chapter argues that the contemporary conceptualization of performability is incapable of capturing all important conditions and therefore needs to be extended to include cultural sustainability. Borrowing from theory on technology and innovation development, impact, responsibility, and living labs allows us to lay some preliminary stepping stones toward an extended conceptualization of performability and of how such technology can be tested in the right context. Before closing, the chapter briefly sketches out avenues for future research.

John P. Ulhøi, Sladjana Nørskov

Chapter 5. Design for Performability Under Arctic Complex Operational Conditions

Complex operational conditions such as those in the Arctic regions can affect performability and its integrated elements in various ways. Historical performability data, such as failure and repair data, play an important role in performability assessment. Such data should reflect the real conditions that equipment and humans experience during operations. In practice, however, little effort is made in some applications to collect, report, and analyze performability data together with all the associated influencing factors, i.e., the parameters of the complex operational conditions affecting the performability of a system. A case in point is the Arctic offshore, where, compared to normal-climate regions, performability data and the associated influencing parameters (e.g., environmental conditions) are scarce. Hence, operations in such a complex environment are associated with a great deal of uncertainty. Such uncertainty can lead to unforeseen failures or, in some cases, to expensive over-designed concepts. One of the main reasons for the lack of performability data is that most available databases were not originally prepared for performability analysis of systems in complex operational conditions. For example, the OREDA database, which collects failure and repair data for different components of oil and gas facilities on the Norwegian Continental Shelf, focuses only on reliability and maintainability, two pillars of the performability concept, and thus the data required for the other performability elements, namely quality, safety, and sustainability, are not addressed accordingly. This chapter discusses the effects of the complex operational conditions of the Arctic on the performability of offshore facilities. It also discusses the challenges of available methods for performability data collection, and thereafter introduces a methodology based on expert judgment for the performability assessment of systems operating in the Arctic.

Abbas Barabadi, Masoud Naseri

Chapter 6. Dynamic Multi-state System Performability Concepts, Measures, Lz-Transform Evaluation Method

In this chapter, a performability concept for dynamic multi-state systems is considered as an extension of multi-state system reliability. Steady-state and instantaneous (dynamic) indices for performability estimation in real-world multi-state systems are presented. The main obstacle in the assessment of these indices is the “curse of dimensionality”: a huge number of system states even for a relatively simple multi-state system. To overcome this difficulty, a modern mathematical method, the Lz-transform, is considered for the evaluation of dynamic performability indices (measures) for multi-state systems. A numerical example is presented to illustrate the approach.

Anatoly Lisnianski, Lina Teper
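As an illustration of the machinery involved, the sketch below composes steady-state u-functions (universal generating functions), the construct that the Lz-transform generalizes with time-dependent state probabilities. The element names, capacities, probabilities, and demand level are hypothetical, not taken from the chapter.

```python
# Steady-state sketch of u-function (UGF) composition; the Lz-transform
# generalizes this with time-dependent state probabilities p_i(t).
# Each multi-state element is a dict {performance_level: probability}.

def combine(u1, u2, op):
    """Compose two u-functions through a structure function `op`."""
    out = {}
    for g1, p1 in u1.items():
        for g2, p2 in u2.items():
            g = op(g1, g2)
            out[g] = out.get(g, 0.0) + p1 * p2
    return out

# Two hypothetical multi-state pumps: capacity -> probability.
pump_a = {0: 0.1, 50: 0.3, 100: 0.6}
pump_b = {0: 0.2, 100: 0.8}

# Parallel capacities add; a series bottleneck takes the minimum.
parallel = combine(pump_a, pump_b, lambda a, b: a + b)
valve = {0: 0.05, 150: 0.95}
system = combine(parallel, valve, min)

demand = 100
availability = sum(p for g, p in system.items() if g >= demand)
expected_perf = sum(g * p for g, p in system.items())
print(round(availability, 4), round(expected_perf, 2))
```

With these hypothetical numbers the demand-availability works out to 0.874; in the dynamic (Lz-transform) setting the same composition rules apply, but each probability becomes a function of time obtained from the element's Markov model.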

Chapter 7. On Modeling and Performability Evaluation of Time Varying Communication Networks

Time varying communication networks (TVCNs) are networks in which attributes such as topology and mobility vary with time. In these networks, the concepts used to evaluate the performance of conventional networks, viz., minimal path/cut sets and spanning trees/arborescences, are inapplicable in their original form. Consequently, these concepts need to be redefined and extended to address dynamically changing topology as well as to take into account the effects of time ordering on causality. In this regard, this chapter first discusses some models developed to represent various features of TVCNs and then reviews recently developed techniques for analyzing the performability of TVCNs. Next, it extends the notion of spanning arborescences to two types of timestamped spanning arborescences of TVCNs for network convergecasting, viz., timestamped valid spanning arborescences and timestamped invalid spanning arborescences. More specifically, a timestamped spanning arborescence is a spanning arborescence in which each constituent edge is accompanied by a contact representing its active time. A timestamped valid spanning arborescence, aka time-ordered spanning arborescence, is thus a timestamped spanning arborescence in which traversal over the edges is possible while only moving forward in time, that is, each edge is time-ordered; otherwise, it is a timestamped invalid spanning arborescence. Later, the chapter presents an approach that first generates all timestamped spanning arborescences and then uses them to enumerate all time-ordered spanning arborescences for convergecasting in predictable TVCNs. The chapter also shows an application of the generated timestamped spanning arborescences in the enumeration of all time-ordered minimal path sets. Finally, it discusses how all time-ordered spanning arborescences and minimal path sets can be utilized for assessing the reliability of TVCNs.

Sanjay K. Chaturvedi, Sieteng Soh, Gaurav Khanna
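The time-ordering condition on a timestamped spanning arborescence reduces to a simple validity check: along every node-to-sink path, contact times must never decrease. The following minimal sketch assumes a hypothetical four-node TVCN with invented node names and contact times; it is not the chapter's enumeration algorithm.

```python
# Hedged sketch: is a timestamped spanning arborescence time-ordered?
# Edges point toward the sink/root and carry their contact (active) time;
# convergecast traversal must only move forward in time.

def is_time_ordered(parent, root):
    """parent maps node -> (next_hop, contact_time); root has no entry."""
    for node in parent:
        u, t_prev = node, None
        while u != root:
            u, t = parent[u]
            if t_prev is not None and t < t_prev:
                return False      # would require travelling back in time
            t_prev = t
    return True

# Hypothetical 4-node network with sink 'r'.
valid = {'a': ('b', 1), 'b': ('r', 3), 'c': ('r', 2)}      # times rise
invalid = {'a': ('b', 5), 'b': ('r', 3), 'c': ('r', 2)}    # 5 then 3
print(is_time_ordered(valid, 'r'), is_time_ordered(invalid, 'r'))
```

In the invalid example, node `a`'s data would reach `b` at time 5, after `b`'s only contact with the sink (time 3) has passed, so the arborescence cannot support convergecasting.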

Chapter 8. Characteristics and Key Aspects of Complex Systems in Multistage Interconnection Networks

Multistage Interconnection Networks (MINs) have been used extensively to provide reliable and fast communication at an effective cost. In this chapter, four types of systems, together with the characteristics and key aspects of complex systems, are discussed in the context of MINs. The Shuffle-Exchange Network (SEN), a common network topology in MINs, is analysed as a complex system. Different perspectives on how MINs possess all the characteristics of complex systems are discussed, along with how they can accordingly be managed as complex systems.

Indra Gunawan

Chapter 9. Evaluation and Design of Performable Distributed Systems

Performability measures system performance, including quality, reliability, maintainability and availability over time, even in the presence of faults. This is challenging for distributed systems, since the Internet was designed as a best-effort network that does not guarantee that data delivery meets a certain level of quality of service. In this chapter, we explain the design, testing and performability evaluation of distributed systems that utilize adversarial components. In our approach, the system design uses adversarial logic to make the system robust. In system testing, we leverage existing, powerful attacks to verify our design, using existing denial-of-service (DoS) attacks to stress the system.

Naazira B. Bhat, Dulip Madurasinghe, Ilker Ozcelik, Richard R. Brooks, Ganesh Kumar Venayagamoorthy, Anthony Skjellum

Chapter 10. Network Invariants and Their Use in Performability Analysis

Network-type systems with binary components have important structural parameters known in the literature as the Signature, Internal Distribution, D-spectra and BIM-spectra. Knowledge of these parameters allows one to obtain a probabilistic description of network behaviour during component failures, as well as of such network parameters as resilience, component importance, system failure probability as a function of the component failure probability q, and an approximation to reliability as q tends to 0. When the network has many components, the exact calculation of Signatures or D-spectra becomes very complicated. We suggest using efficient Monte Carlo procedures. All relevant calculations are illustrated by network examples, including flow in random networks and structural comparison of networks in the process of gradual network destruction.

Ilya Gertsbakh, Yoseph Shpungin
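The Monte Carlo idea can be sketched on a toy example. The three-edge s-t network below is hypothetical; for it, enumerating all six failure permutations gives the exact destruction spectrum (0, 2/3, 1/3), which the simulation should approach.

```python
import random

# Monte Carlo estimate of the destruction spectrum (D-spectrum) of a toy
# s-t network: the fraction of random failure permutations in which the
# i-th component failure is the one that disconnects s from t.

EDGES = [('s', 't'), ('s', 'm'), ('m', 't')]   # hypothetical 3-edge network

def connected(up_edges):
    """Depth-first check that 's' can still reach 't' over surviving edges."""
    adj = {}
    for u, v in up_edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    seen, stack = {'s'}, ['s']
    while stack:
        for w in adj.get(stack.pop(), []):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return 't' in seen

def d_spectrum(edges, trials, rng):
    counts = [0] * len(edges)
    for _ in range(trials):
        perm = edges[:]
        rng.shuffle(perm)                       # random failure order
        for i in range(1, len(edges) + 1):
            if not connected(perm[i:]):         # first i edges have failed
                counts[i - 1] += 1
                break
    return [c / trials for c in counts]

rng = random.Random(1)
spec = d_spectrum(EDGES, 20000, rng)
print([round(f, 3) for f in spec])   # exact spectrum is (0, 2/3, 1/3)
```

The same permutation-sampling scheme scales to networks far too large for exact Signature calculation, which is the efficiency argument made in the chapter.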

Chapter 11. The Circular Industrial Economy of the Anthropocene and Its Benefits to Society

The circular economy has always been about maintaining the value of stocks, be it natural, human, cultural, financial or manufactured capital, with a long-term perspective. The circular economy has evolved through three distinct phases, which today co-exist in parallel: a bioeconomy of natural materials ruled by Nature's circularity; an anthropogenic phase (Anthropocene: to define a new geological epoch, a signal must be found that occurs globally and will be incorporated into deposits in the future geological record. The 35 scientists of the Working Group on the Anthropocene (WGA) decided at the beginning of 2020 that the Anthropocene started with the nuclear bomb dropped on Hiroshima on 6 August 1945. The radioactive elements from nuclear bomb tests, which were blown into the stratosphere before settling down to Earth, provided this ‘golden spike’ signal. https://quaternary.stratigraphy.org/working-groups/anthropocene/ ) characterised by synthetic (man-made) materials and objects; and a phase of ‘invisible’ resources and immaterial constraints. This chapter focuses on how the anthropogenic phase and the ‘invisible’ resources and immaterial constraints can be integrated into a mature circular industrial economy.

Walter R. Stahel

Chapter 12. Sustainment Strategies for System Performance Enhancement

“Sustainment” (as commonly defined by industry and government) comprises the maintenance, support, and upgrade practices that maintain or improve the performance of a system and maximize the availability of goods and services while minimizing their cost and footprint; more simply, it is the capacity of a system to endure. System sustainment is a multitrillion-dollar enterprise in government (infrastructure and defense) and industry (transportation, industrial controls, data centers, and others). Systems associated with human safety, the delivery of critical services, important humanitarian and military missions, and global economic stability are often compromised by the failure to develop, resource, and implement effective long-term sustainment strategies. System sustainment is, unfortunately, an area that has traditionally been dominated by transactional processes with little strategic planning, policy, or methodological support. This chapter discusses the definition of sustainment and its relationship to system resilience, the economics of sustainment (i.e., making business cases to strategically sustain systems), policies that impact the ability to sustain systems, and the emergence of outcome-based contracting for system sustainment.

Peter Sandborn, William Lucyshyn

Chapter 13. Four Fundamental Factors for Increasing the Host Country Attractiveness of Foreign Direct Investment: An Empirical Study of India

Protectionist policies and the recent coronavirus outbreak have made it more difficult for host countries to attract Foreign Direct Investment (FDI) and require governments to enhance their country's attractiveness in adapting to this changing environment. In this respect, this study introduces four fundamental factors that improve the inflow of FDI, comparing them with the conventional elements commonly considered positive for such inflows. Unlike traditional factors, which particularly stress what resources host countries must possess in order to attract FDI, the fundamental factors suggested by this study emphasize the “how” aspects: the effective way to utilize and mobilize a country's available resources. Furthermore, to better convey the importance of these factors, the study uses India as an illustrative example. The Modi government introduced its “Make in India” policy to enhance the manufacturing sector by attracting FDI, yet such inflows to the manufacturing industries have remained very low; thus, India requires more systematic measures for improving its business environment. By comparing India's FDI attractiveness on the four factors against that of nine other Asian economies, this study identifies India's strengths and weaknesses. It then suggests a series of strategic guidelines for enhancing India's FDI attractiveness.

Hwy-Chang Moon, Wenyan Yin

Chapter 14. Structured Approach to Build-in Design Robustness to Improve Product Reliability

Robustness is defined as the degree to which a system or component can function correctly in the presence of invalid inputs or stressful environmental conditions. The objective of robustness is to deliver high reliability to customers. Robustness ensures that the product design is immune to, and can gracefully handle, invalid inputs and stressful environmental conditions without any disruption or degradation of service to the end user. Robustness can be systematically built into any system, hardware or software, by following an end-to-end approach that encompasses product requirements, design, development, and testing. This chapter provides a structured approach to designing in robustness by mapping baseline use case scenarios as “sunny day” scenarios, identifying potential failures using P-diagrams and Design Failure Modes & Effects Analysis (“rainy day” scenarios), and proactively embedding design controls to strengthen product robustness and minimize field failures. The authors describe an innovative way to prioritize design improvements not just by the traditional Risk Priority Number (RPN) of design failures but by considering the actual magnitude of risk reduction, as well as by factoring the cost of design improvements into prioritization decisions. Robustness, once built into the product design, must be validated through rigorous robustness testing to provide objective evidence of design robustness and support decision-making regarding product readiness for release. A comprehensive approach to robustness testing is described, along with guidance on how to design a comprehensive suite of robust test cases for high reliability.

Vic Nanda, Eric Maass
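The prioritization idea can be sketched numerically: rank candidate design controls not by a failure mode's RPN alone, but by the magnitude of RPN reduction per unit improvement cost. The candidate names, severity/occurrence/detection ratings, and costs below are hypothetical, and this simple ratio is only one way to fold cost into the decision.

```python
# Hedged sketch: prioritize design improvements by risk reduction per
# unit cost rather than by the raw RPN of the failure mode.

def rpn(severity, occurrence, detection):
    return severity * occurrence * detection

# Candidates: (name, S, O_before, D_before, O_after, D_after, cost).
# Severity is assumed unchanged by the design control.
candidates = [
    ("input-validation guard", 9, 6, 5, 2, 3, 10.0),
    ("redundant sensor",       8, 4, 4, 3, 4, 25.0),
    ("watchdog reset",         7, 5, 6, 4, 5, 5.0),
]

ranked = sorted(
    candidates,
    key=lambda c: (rpn(c[1], c[2], c[3]) - rpn(c[1], c[4], c[5])) / c[6],
    reverse=True,
)
for name, s, o1, d1, o2, d2, cost in ranked:
    reduction = rpn(s, o1, d1) - rpn(s, o2, d2)
    print(f"{name}: dRPN={reduction}, dRPN/cost={reduction / cost:.2f}")
```

Note how the ranking differs from a pure RPN ordering: the cheap watchdog outranks the costlier sensor even though the sensor addresses a higher-severity mode, which is precisely the effect of factoring cost into prioritization.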

Chapter 15. Time Series Modelling of Non-stationary Vibration Signals for Gearbox Fault Diagnosis

Gearboxes often operate under variable operating conditions, which lead to non-stationary vibration. Vibration signal analysis is a widely used condition monitoring technique. Time series model-based methods have been developed for the study of non-stationary vibration signals and, subsequently, for the fault diagnosis of gearboxes under variable operating conditions. This chapter presents the latest methodologies for gearbox fault diagnosis using time series model-based methods. The main contents include widely used time-variant models, parameter estimation and model structure selection methods, model validation criteria, and fault diagnostic schemes based on either model residual signals or model parameters. Illustrative examples show the application of model residual-based fault diagnosis methods to an experimental dataset collected from a laboratory gearbox test rig. Future research topics are pointed out at the end.

Yuejian Chen, Xihui Liang, Ming J. Zuo

Chapter 16. Risk-Informed Design Verification and Validation Planning Methods for Optimal Product Reliability Improvement

This chapter proposes four types of mathematical optimization modeling approaches to optimize product design Verification and Validation (V&V) planning during the New Product Development (NPD) process. These four optimization models provide four risk mitigation strategies, from the perspective of cost efficiency, for the optimal selection of a set of V&V activities that maximizes the overall improvement in system reliability. The proposed approaches not only incorporate the critical product development constraints in V&V planning, such as cost, time, reliability improvement, and the sequencing and effectiveness of V&V activities, but also consider the decay of improvement effectiveness when tackling the challenges of selecting and sequencing V&V activities. In addition, the concepts of set covering, set partitioning, and set packing are applied to ensure that different levels of critical failure modes can be covered in different ways, according to different risk mitigation requirements, by the end of V&V execution. The application of the proposed optimization models, and comparisons with existing methods for product V&V planning, are illustrated through the development of a power generation system within a diesel engine.

Zhaojun Steven Li, Gongyu Wu
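The set-covering idea behind V&V selection can be illustrated with a greedy heuristic: repeatedly pick the activity covering the most not-yet-covered critical failure modes per unit cost. The activity names, costs, and failure-mode sets are hypothetical, and the chapter's models solve the problem exactly with further NPD constraints; this sketch only shows the covering structure.

```python
# Hedged sketch of set covering for V&V planning: choose activities so
# every critical failure mode is covered, greedily by coverage per cost.

def greedy_cover(activities, modes):
    """activities: {name: (cost, set_of_modes_covered)}; returns plan, cost."""
    activities = dict(activities)            # avoid mutating the caller's dict
    uncovered, plan, total = set(modes), [], 0.0
    while uncovered:
        name, (cost, cov) = max(
            activities.items(),
            key=lambda kv: len(kv[1][1] & uncovered) / kv[1][0],
        )
        gained = cov & uncovered
        if not gained:
            raise ValueError("remaining failure modes cannot be covered")
        plan.append(name)
        total += cost
        uncovered -= gained
        del activities[name]
    return plan, total

activities = {                      # hypothetical V&V activities
    "HALT test":      (4.0, {"FM1", "FM2"}),
    "FEA simulation": (2.0, {"FM2", "FM3"}),
    "field trial":    (9.0, {"FM1", "FM2", "FM3", "FM4"}),
    "vibration rig":  (3.0, {"FM4"}),
}
plan, total = greedy_cover(activities, {"FM1", "FM2", "FM3", "FM4"})
print(plan, total)
```

Greedy selection is only an approximation; the chapter's integer-programming formulations also handle sequencing, effectiveness decay, and the partition/packing variants of the coverage requirement.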

Chapter 17. Efficient Use of Meta-Models for Reliability-Based Design Optimization of Systems Under Stochastic Excitations and Stochastic Deterioration

The main difficulty in the application of reliability-based design optimization (RBDO) to time-dependent systems is the continual interplay between calculating the time-variant reliability (to ensure reliability policies are met) and moving the design point to minimize some objective function, such as cost, weight or size. In many cases, the reliability can be obtained readily using, for example, first-order reliability methods (FORM). However, this option is not available when certain stochastic processes are invoked to model, for example, gradual damage or deterioration. In this case, inefficient Monte Carlo simulation (MCS) must be used. The work herein provides a novel way to obviate this inefficiency. First, a meta-model is built to relate the system cumulative distribution function of time to failure (cdf) to the design space. A design-of-experiments paradigm helps determine a few training sets, and then the mechanistic model and the uncertain characteristics of the variables, with MCS, help produce the corresponding cdf curves. The meta-model (using matrix methods) directly links an arbitrary sample from the design space to its cdf. The optimization process accesses the meta-model to quickly evaluate both objectives and failure constraints. A case study uses an electromechanical servo system. The meta-model approach is compared to the traditional MCS approach and found to be simple, accurate and very fast, suggesting an attractive means for the RBDO of time-dependent systems under stochastic excitations and stochastic deterioration.

Gordon J. Savage, Young Kap Son

Chapter 18. Dynamic Asset Performance Management

Managing asset performance in the prevailing dynamic business and industrial scenario is becoming critical and complex, owing to technological advancements and changes such as artificial intelligence, Industry 4.0, and advanced condition monitoring tools with predictive and prescriptive analytics. In the dynamic asset management landscape, asset performance is an integral part of an industrial process, ensures performance assurance, and acts as a key game changer. Managing asset performance and data analytics throughout the asset life cycle is therefore critical and complex for long-term industrial and business viability, as it involves multiple stakeholders with dynamic inputs and outputs and conflicting expectations. The lack of linkage and integration between the various stakeholders along the hierarchical levels of an organization, with their changing requirements, is still a major issue for industries. For integration within an organization, each asset needs predictive and prescriptive analytics, and it needs to be linked and integrated to achieve the business goals. In this chapter, the management of the various issues and challenges of dynamic asset performance is discussed.

Aditya Parida, Christer Stenström

Chapter 19. Asset Management Journey for Realising Value from Assets

Assets, in line with the ISO 55000 standard for asset management, are items, things and entities that have value or potential value to an organisation. Asset management concerns what we do with those assets. The journey begins with understanding the needs of the organisation, in line with its business objectives, to deliver goods and services in a reliable, safe, timely and cost-effective manner. Realising value from assets is a holistic approach that addresses the complexities of stakeholder expectations and provides competitive advantage to the business. It starts with the concept of the asset and continues through the design, manufacturing/construction, operations, maintenance and disposal of the asset, known as the asset life cycle. The focus is on reduced risks, enhanced performance, including the safety of the operation, the environment and the wider communities, and reduced life cycle costs. A systematic approach to asset management helps improve reliability, availability, maintainability, safety and security. Leadership, a good organisational culture, alignment with other systems and assurance that assets will perform when needed contribute significantly to the success of any organisation. This chapter covers how to balance cost, risk and performance in informed decision-making for maintaining the value of, and realising value from, assets.

Chapter 20. Reliability-Based Performance Evaluation of Nonlinear Dynamic Systems Excited in Time Domain

Achintya Haldar, Francisco J. Villegas-Mercado

Chapter 21. Probabilistic Physics-of-Failure Approach in Reliability Engineering

This chapter provides an overview of the probabilistic physics-of-failure (PoF) approach for applications to reliability engineering problems. As reliability engineering experts face situations where system and component reliability failure data are lacking or of poor quality, a powerful modeling approach is to rely on the underlying processes and phenomena that lead to failures. Originally derived from chemistry, mechanics, and metallurgy, the processes that lead to failures are called failure mechanisms and include phenomena such as fatigue, creep, and corrosion. Physics-of-failure is an empirically based mathematical and analytical approach to modeling these underlying processes of failure. Owing to the limitations of the information and test data available for understanding these processes, PoF-based reliability should include a formal accounting of the uncertainties. Physics-of-failure methods in reliability engineering that consider uncertainties lead us to the probabilistic physics-of-failure. This chapter covers some important analytical and practical aspects of probabilistic physics-of-failure modeling, including examples.

Chapter 22. Reliability and Availability Analysis in Practice

Reliability and availability are key attributes of technical systems. Methods of quantifying these attributes are thus essential during all phases of the system life cycle. Data (measurement)-driven methods are suitable for components or subsystems, but for the system as a whole, model-driven methods are more desirable. Simulative solution and analytic–numeric solution of the models are the two major alternatives in the model-driven approach. In this chapter, we explore model-driven methods with analytic–numeric solution. Non-state-space, state-space, hierarchical, and fixed-point iterative methods are explored using real-world examples. Challenges faced by such modeling endeavors, and potential solutions, are described. The software package SHARPE is used for the modeling exercises.

Kishor Trivedi, Andrea Bobbio
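A minimal non-state-space calculation of the kind such tools automate can be sketched directly: steady-state availability from component MTTF/MTTR figures in a reliability-block view, where series blocks multiply availabilities and a two-unit parallel block is down only when both units are down. The component names and MTTF/MTTR numbers are hypothetical, and independent failure and repair are assumed.

```python
# Hedged sketch: steady-state availability of a series system with one
# 1-out-of-2 parallel block, assuming independent failures and repairs.

def avail(mttf, mttr):
    """Steady-state availability of a single repairable component."""
    return mttf / (mttf + mttr)

# Hypothetical components (hours): CPU and PSU in series, mirrored disks.
a_cpu = avail(50_000, 8)
a_psu = avail(30_000, 4)
a_disk = avail(20_000, 12)
a_disk_pair = 1 - (1 - a_disk) ** 2      # both disks must be down to fail

a_system = a_cpu * a_psu * a_disk_pair
downtime_min_per_year = (1 - a_system) * 365 * 24 * 60
print(f"A = {a_system:.6f}, downtime = {downtime_min_per_year:.1f} min/yr")
```

Non-state-space models like this are fast but assume independence; the state-space and hierarchical methods explored in the chapter are needed when repair dependencies or shared resources break that assumption.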

Chapter 23. WIB (Which-Is-Better) Problems in Maintenance Reliability Policies

There have been many studies of maintenance policies in reliability theory, such as age replacement, periodic replacement, replacement first and last, replacement overtime, and standby or parallel systems, so we have to select the policies best suited to the target systems in actual fields. This chapter systematically compares maintenance policies and shows how to select one theoretically from the standpoint of cost. The expected cost rates of the maintenance policies and the optimal solutions minimizing them are given, and optimal policies such as the replacement time $T^{*}$, the number $N^{*}$ of working cycles, and the number $K^{*}$ of failures are obtained. Furthermore, we compare the optimal policies to show which is better, analytically and numerically. The techniques and tools used in this chapter should be useful for reliability engineers who are wondering how to adopt better maintenance policies.

Satoshi Mizutani, Xufeng Zhao, Toshio Nakagawa
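For the age replacement policy mentioned above, the classical expected cost rate is $C(T) = [c_f F(T) + c_p (1 - F(T))] / \int_0^T (1 - F(t))\,dt$, with failure-replacement cost $c_f$ greater than preventive cost $c_p$. The sketch below finds $T^{*}$ by a simple grid search; the Weibull lifetime and the cost figures are hypothetical, and the chapter derives such optima analytically rather than numerically.

```python
import math

# Hedged sketch: age replacement cost rate
#   C(T) = (c_f * F(T) + c_p * (1 - F(T))) / integral_0^T (1 - F(t)) dt,
# minimized by grid search over the preventive replacement age T.

def weibull_cdf(t, shape=2.0, scale=100.0):
    return 1.0 - math.exp(-((t / scale) ** shape))

def cost_rate(T, c_f=50.0, c_p=10.0, n=1000):
    # Trapezoidal estimate of the expected uptime per renewal cycle.
    h = T / n
    surv = [1.0 - weibull_cdf(i * h) for i in range(n + 1)]
    uptime = h * (sum(surv) - 0.5 * (surv[0] + surv[-1]))
    F = weibull_cdf(T)
    return (c_f * F + c_p * (1.0 - F)) / uptime

grid = [t / 2 for t in range(10, 601)]       # candidate ages 5.0 .. 300.0
T_star = min(grid, key=cost_rate)
print(round(T_star, 1), round(cost_rate(T_star), 4))
```

With an increasing failure rate (Weibull shape 2) and $c_f \gg c_p$, the optimum falls well below the characteristic life, reflecting the usual trade-off between frequent preventive replacements and costly failures.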

Chapter 24. A Simple and Accurate Approximation to Renewal Function of Gamma Distribution

The renewal function (RF) of a life distribution has many applications, including reliability and maintenance-related decision optimization. Such optimization problems need a simple and accurate approximation of the RF so as to facilitate the solution process. Several such approximations have been developed for the Weibull distribution, but no such approximation seems to be available for the gamma distribution. This may result from the fact that the convolutions of the gamma distribution are known, so that its RF can be evaluated by a partial sum of a gamma distribution series. However, the partial sum usually involves a large number of terms and hence is not simple. Thus, a simple and accurate RF approximation for the gamma distribution is still desired. This chapter proposes such an approximation, which uses a weight function to smoothly link two known asymptotic relations. The parameters of the weight function are given in the form of empirical functions of the shape parameter. The maximum relative error of the proposed approximation is smaller than 1% over most of the typical range of the shape parameter. The approximation is particularly useful for solving optimization problems that need to evaluate a gamma RF iteratively.

R. Jiang
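The partial-sum evaluation this abstract mentions follows from the closure of the gamma family under convolution: the n-fold convolution of a gamma(k, θ) law is gamma(nk, θ), so $$M(t)=\sum_{n\ge 1}F^{(n)}(t)$$ . A minimal Python sketch of this truncated series, useful as a reference against which an approximation can be checked:

```python
from scipy.stats import gamma

def gamma_renewal_function(t, shape, scale=1.0, n_max=200):
    """Truncated series M(t) ~= sum_{n=1..n_max} F^(n)(t), where the n-fold
    convolution of gamma(shape, scale) is gamma(n*shape, scale)."""
    return sum(gamma.cdf(t, a=n * shape, scale=scale) for n in range(1, n_max + 1))
```

For shape 2 and scale 1 (an Erlang-2 law) the RF is known in closed form, $$M(t)=t/2-1/4+e^{-2t}/4$$ , which the series reproduces; the many terms required illustrate why a simple approximation is desirable.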

Chapter 25. Transformative Maintenance Technologies and Business Solutions for the Railway Assets

In the past, railway systems were overdesigned and underutilized, making the need for effective, coordinated, and optimized maintenance planning non-existent. With the passing years, these assets have aged while their utilization has increased manifold, mainly due to societal consciousness about climate and cost. With steeply increasing utilization of railway systems, the major challenge is to find time slots to perform maintenance on the infrastructure and rolling stock so as to maintain their functionality and ensure safe train operation. This has led the sector to look for new and emerging technologies that facilitate effective and efficient railway maintenance and ensure reliable, punctual, and safe train operation. This chapter presents the current status and state of the art of maintenance in the railway sector, covering transformative maintenance technologies and business solutions for railway assets. It discusses the digital transformation of railway maintenance and the application of artificial intelligence (AI), machine learning, big data analytics, digital twins, robots, and drones as part of digital railway maintenance solutions. The chapter presents a conceptual road map for developing transformative maintenance solutions for railways using new and enabling technologies founded on data-driven decisions.

Uday Kumar, Diego Galar

Chapter 26. AI-Supported Image Analysis for the Inspection of Railway Infrastructure

This chapter focuses on the use of object detection and image segmentation for railway maintenance using complex, real-world image-based data. Image-based data offer the ability to collect data across large spatial areas in a user-friendly manner, as stakeholders often have a basic and intuitive understanding of image-based content. By using already existing videos shot from track measurement vehicles traversing the railway network, it was possible to inspect the reindeer fence lining the railway in northern Sweden. The chapter suggests a framework for assessing the costs and benefits of this type of analysis and outlines some other possible applications of image analysis of these videos.

Joel Forsmoo, Peder Lundkvist, Birre Nyström, Peter Rosendahl

Chapter 27. User Personalized Performance Improvements of Compute Devices

Over the last decade, personalization has been used widely in products and services such as web search, product recommendations, education, and medicine. However, the use of personalization for managing performance on personal computing devices such as notebooks, tablets, and workstations is rare. Performance optimization on computing devices covers all aspects of device capability, such as power and battery management, application performance, audio and video, network management, and system updates. In each case, personalization first involves learning how the user uses the system, along with the context of that experience, and then tuning the hardware and software settings to improve that experience by providing individualized performance gains. This chapter discusses the need for, complexities of, and methods used for performance personalization. A method and a case study of improving application performance using utilization data and a deep neural network are presented.

Nikhil Vichare

Chapter 28. The Neglected Pillar of Science: Risk and Uncertainty Analysis

Science, in general, is built on two pillars: on the one hand, confidence, obtained through research and development, analysis, argumentation, testing, data and information; on the other, humbleness, acknowledging that the knowledge—the justified beliefs—generated can be more or less strong and even erroneous. The main thesis of the present conceptual work is that the latter pillar—humbleness—has not been given the scientific attention it deserves. This pillar is founded on risk and uncertainty analysis, but the fields conducting this type of analysis are weak and lack authority. The volume of research on risk and uncertainty analysis is small, and the quality of current approaches and methods is not satisfactory. A strengthening of the fields of risk and uncertainty analysis is urgently and strongly needed. Several suggestions for meeting these challenges are presented, including measures to stimulate further research on the fundamentals of these fields, crossing established study borders, and initiatives to be taken by relevant societies to increase awareness of the issue and to derive suitable strategies for developing risk and uncertainty analysis as a distinct science.

Terje Aven

Chapter 29. Simplified Analysis of Incomplete Data on Risk

A framework for the simplified analysis of incomplete data on risk is presented and illustrated in five feasibility case studies on road traffic and occupational safety. Bayes' theorem is utilized both for structuring cases of incomplete input data and for providing options for dealing with incompleteness. The application of the framework requires the availability of an interval scale of an index of prevention in a situation exposed to failure. A key parameter of the framework is bounded in the range from 0 to 1 and represents the average degree of prevention (v) in failure exposure situations for a given type of risk. The Bayesian structure of the framework allows an expert judgement for v to be verified by a quantitative evaluation of failure events only, meaning that a quantitative evaluation of the variety of failure exposure situations is not necessary. Moreover, non-trivial comparisons between different types of risks are possible. The loss of accuracy identified from the case studies is assessed as an unsatisfactory result. It is an open issue whether such inaccuracy is inherent when applying a common and simple model for data analysis addressing various risk environments, or whether it can be reduced by a refined version of a common model or by improved scaling.

Bernhard Reer
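As a toy illustration of verifying an expert judgement for v against data — not the chapter's framework, since this sketch assumes the number of exposure situations is also known, which the chapter's approach is designed to avoid — one can place a Beta prior on v and update it with hypothetical counts:

```python
from scipy.stats import beta

# Hypothetical counts: an expert claims v ~= 0.9; we observe n_fail failures
# out of n_exp failure exposure situations.
n_exp, n_fail = 1000, 100
a0, b0 = 1.0, 1.0  # uniform Beta prior on the degree of prevention v

# Posterior on v: each prevented exposure counts as a "success"
post = beta(a0 + (n_exp - n_fail), b0 + n_fail)
lo, hi = post.ppf(0.025), post.ppf(0.975)
consistent = lo <= 0.9 <= hi  # does the 95% interval cover the expert value?
```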

Chapter 30. Combining Domain-Independent Methods and Domain-Specific Knowledge to Achieve Effective Risk and Uncertainty Reduction

The common domain-specific approach to reliability improvement and risk reduction has created the false perception that effective risk reduction can be delivered solely by using methods offered by the specific domain. In standard textbooks on mechanical engineering and the design of machine components, for example, there is no mention of general methods for improving reliability and reducing the risk of failure of engineering products. Accordingly, the chapter demonstrates the benefits of combining domain-independent methods and domain-specific knowledge to achieve effective risk and uncertainty reduction. In this respect, the chapter focuses on domain-independent methods for reducing risk based on segmentation and algebraic inequalities, and demonstrates that combining these methods with domain-specific knowledge helps to identify new, simple, and effective solutions in such mature fields as the strength of components, the kinematic analysis of mechanisms, and electrical engineering. The meaningful interpretation of algebraic inequalities has led to the discovery of new physical properties of electrical circuits and mechanical assemblies. These properties have never been suggested in standard textbooks or research literature covering the mature fields of electrical and mechanical engineering, which demonstrates that a lack of knowledge of domain-independent methods for reducing risk and uncertainty has made these properties invisible to domain experts.

Michael Todinov

Chapter 31. Stochastic Effort Optimization Analysis for OSS Projects

It is very important to produce and maintain a reliable system structured from several open-source software (OSS) components, because OSS has been introduced into various software systems. In the OSS development paradigm, bug tracking systems are used for software quality management in many OSS projects. It would be helpful for OSS project managers in assessing the reliability and effort management of OSS if the many fault data recorded in bug tracking systems were analyzed for software quality improvement. In this chapter, we focus on a method of stochastic effort optimization analysis for OSS projects using OSS fault big data. We discuss a method of effort estimation based on stochastic differential equations and a jump-diffusion process, using OSS development effort data obtained from fault big data, and apply deep learning to the parameter estimation of the jump-diffusion process model. We also discuss the optimal maintenance problem based on our methods. Moreover, several numerical examples of the proposed methods are shown using effort data from actual OSS projects, and the results of these numerical examples are discussed.
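A jump-diffusion process of the kind this abstract refers to can be sketched by an Euler scheme. The chapter estimates its parameters from OSS fault big data via deep learning; the parameters below are simply assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_effort_path(x0, mu, sigma, lam, jump_sd, T=1.0, n=1000):
    """Euler discretization of a jump-diffusion dX = mu*X dt + sigma*X dW + X dJ,
    where J is a compound Poisson process with rate lam and N(0, jump_sd) marks."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))  # Brownian increment
        n_jumps = rng.poisson(lam * dt)    # number of jumps in this step
        dj = rng.normal(0.0, jump_sd, size=n_jumps).sum()
        x[i + 1] = x[i] * (1.0 + mu * dt + sigma * dw + dj)
    return x

path = simulate_effort_path(1.0, 0.05, 0.1, 2.0, 0.05)
```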

Chapter 32. Should Software Testing Continue After Release of a Software: A New Perspective

Software reliability is a highly active and thriving field of research. In the past, various software reliability growth models have been suggested to analyze the reliability and security of software systems. The present chapter focuses on analyzing the software release policy under different modeling frameworks. This study discusses both the conventional policy, where the testing stop time and the software release time coincide, and the modern release policy, wherein software time-to-market and testing termination time are treated as two distinct time-points. The modern release policy represents the situation in which software developers release the software early to capture maximum market share and continue the testing process for an added period to maximize the reliability of the software product. In addition, the concept of change-point with two different schemes is addressed in the present study. A change-point is the time-point at which the model parameters experience a discontinuity. In one scenario, the change-point is considered to occur before the release of the software; in the second, the release time is treated as a change-point for the tester's fault detection process. The study further provides numerical illustrations to test the different release time policies and to analyze the practical applicability of the optimization problem of minimizing the cost function and maximizing the reliability attribute.

P. K. Kapur, Saurabh Panwar, Vivek Kumar
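The conventional trade-off, where release and testing stop coincide, can be sketched with a classic cost model under an exponential software reliability growth model $$m(t)=a(1-e^{-bt})$$ ; all parameters below are illustrative assumptions, not the chapter's:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative parameters: a = expected total faults, b = detection rate,
# per-fault fixing costs in testing vs. in the field, testing cost per unit time
a, b = 100.0, 0.1
c_test, c_field, c_time = 1.0, 5.0, 0.5
t_lc = 100.0  # software life-cycle length

def m(t):
    """Expected number of faults detected by time t (exponential SRGM)."""
    return a * (1.0 - np.exp(-b * t))

def total_cost(t_release):
    """Testing cost + cost of fixing faults found in testing and in the field."""
    return (c_test * m(t_release)
            + c_field * (m(t_lc) - m(t_release))
            + c_time * t_release)

res = minimize_scalar(total_cost, bounds=(0.0, t_lc), method="bounded")
t_star = res.x  # cost-optimal release (= testing stop) time
```

Setting the derivative to zero gives $$m'(t^{*})=c_{time}/(c_{field}-c_{test})$$ ; the modern policy analyzed in the chapter instead treats release time and testing termination time as two separate decision variables.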

Chapter 33. Data Resilience Under Co-residence Attacks in Cloud Environment

Virtualization technology, particularly the virtual machines (VMs) used in cloud computing systems, has raised unique security and reliability risks for cloud users. This chapter focuses on resilience to one such risk, co-residence attacks, in which a user's information in one VM can be accessed, stolen, or corrupted through side channels by a malicious attacker's VM co-residing on the same physical server. Both users' and attackers' VMs are distributed among cloud servers at random. We consider different user data protection policies aimed at making the data resilient to co-residence attacks, including data partition with and without replication of the parts, and attack detection through an early warning mechanism. Probabilistic models are suggested to derive the overall probabilities of an attacker's success in data theft and data corruption. Based on the suggested probabilistic evaluation models, optimization problems of obtaining the data partition/replication policy that balances data security, data reliability, and the user's overheads are formulated and solved, leading to the optimal data protection policy to achieve data resilience. The user's possible uncertainty about the number of the attacker's VMs is taken into account. Numerical examples demonstrating the influence of different constraints on the optimal policy are presented.

Gregory Levitin, Liudong Xing
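Under the simplest reading of random VM placement — each attacker VM lands on a uniformly chosen server, independently — the benefit of data partition for theft resilience can be sketched as follows. This is a simplification with assumed independence across parts, not the chapter's full model:

```python
def p_coresidence(n_servers, n_attacker_vms):
    """P(at least one attacker VM lands on a given server) under independent
    uniform placement of each attacker VM."""
    return 1.0 - (1.0 - 1.0 / n_servers) ** n_attacker_vms

def p_theft(n_servers, n_attacker_vms, n_parts):
    """Data theft requires co-residence with every server holding one of the
    n_parts data parts (parts on distinct servers, treated as independent)."""
    return p_coresidence(n_servers, n_attacker_vms) ** n_parts
```

Partitioning into more parts drives the theft probability down geometrically, but it can also raise the user's overheads and the exposure to data corruption, hence the optimization problems the chapter formulates.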

Chapter 34. Climate Change Causes and Amplification Effects with a Focus on Urban Heat Islands

Global warming has man-made root causes and amplification effects. As reliability engineers, we know that without understanding the real root causes, you may not be addressing the main part of a problem. This is a key issue in this chapter, as the global warming emphasis has been on CO2 reduction. Our focus in this chapter is therefore on a key root cause that is not currently being addressed enough, often termed the Urban Heat Island (UHI) effect: the heat created by cities and their area coverage. We focus on this primarily because, at present, the Intergovernmental Panel on Climate Change (IPCC), the world's governing body on the subject, is not providing any guidance on "albedo" goals similar to the suggestions it has made for CO2 reduction. This is important because most countries look to the IPCC for guidance; it is the author's opinion that UHIs make a considerable contribution to global warming, and it is known that they cause health-related problems through their excess heat.

Alec Feinberg

Chapter 35. On the Interplay Between Ecology and Reliability

This chapter attempts to enhance the interplay between the ecology and reliability fields by employing Boolean-based reliability language and techniques to quantify ecological metrics related to connectivity and redundancy. We emphasize the question of connectivity in models of probabilistic networks as a common area of interest for both fields. The chapter borrows techniques from mainstream reliability theory to treat a prominent problem of ecology, namely that of survivability (of a species), defined here as the probability of successful migration of a certain organism escaping from critical source habitat patches and seeking refuge in specific destination habitat patches via heterogeneous deletable ecological corridors, possibly with uninhabitable stepping stones en route. This problem might be reformulated in contexts other than that of migration, including those of (a) dynamics of metapopulations, colonization, or invasion, (b) gene flow, (c) spread of infectious diseases, epidemics, or pandemics, and (d) energy transfer within food webs. Indicators of network connectivity in classical reliability theory are probabilities that might be designated according to the set of source nodes and the set of destination nodes as one to one, one to many, many to many, or all to all. Our present notion of survivability (of a species) is also a probability of connectivity, now measured from any node (among many nodes) to any node (among many nodes). We explore methods for computing the survivability (of a species) by adapting switching-algebraic techniques that are usually employed in the reliability field. In addition to this survivability metric, we comment on some other connectivity indicators that are currently used in ecology. We stress two recent contributions to the ecology literature, one employing an analogy with electric circuit theory, and another concerning the most reliable (or minimum-lag) dispersal paths.

Ali Muhammad Ali Rushdi, Ahmad Kamal Hassan
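The survivability metric this abstract describes is a source-to-terminal connectivity probability. For small habitat networks it can be computed exactly by enumerating corridor states, as in this Python sketch (habitat patches as nodes, corridors as independently failing edges; the chapter's switching-algebraic techniques are designed to scale better than brute force):

```python
from itertools import product

def survivability(edges, probs, sources, targets):
    """P(some source patch is connected to some target patch) when each
    corridor (edge) survives independently with its given probability.
    Exhaustive state enumeration: fine for small networks only."""
    total = 0.0
    for states in product((0, 1), repeat=len(edges)):
        # probability of this particular pattern of surviving/deleted corridors
        p_state = 1.0
        for alive, p_edge in zip(states, probs):
            p_state *= p_edge if alive else 1.0 - p_edge
        surviving = [e for e, alive in zip(edges, states) if alive]
        # reachability fixpoint over surviving corridors
        reached = set(sources)
        changed = True
        while changed:
            changed = False
            for u, v in surviving:
                if u in reached and v not in reached:
                    reached.add(v); changed = True
                elif v in reached and u not in reached:
                    reached.add(u); changed = True
        if reached & set(targets):
            total += p_state
    return total
```

For a chain a–b–c of two corridors each surviving with probability 0.9, survivability is 0.81; two parallel a–b corridors of probability 0.5 give 0.75, illustrating the redundancy effect the chapter quantifies.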