About this book

Model Validation and Uncertainty Quantification, Volume 3: Proceedings of the 36th IMAC, A Conference and Exposition on Structural Dynamics, 2018, the third volume of nine from the Conference, brings together contributions to this important area of research and engineering. The collection presents early findings and case studies on fundamental and applied aspects of Model Validation and Uncertainty Quantification, including papers on:

Uncertainty Quantification in Material Models

Uncertainty Propagation in Structural Dynamics

Practical Applications of MVUQ

Advances in Model Validation & Uncertainty Quantification: Model Updating

Model Validation & Uncertainty Quantification: Industrial Applications

Controlling Uncertainty

Uncertainty in Early Stage Design

Modeling of Musical Instruments

Overview of Model Validation and Uncertainty

Table of Contents

Frontmatter

Chapter 1. Sparse Deconvolution for the Inverse Problem of Multiple-Impact Force Identification

Traditional regularization methods for impact force identification, such as Tikhonov regularization and truncated singular value decomposition, minimize the l2-norm of the desired force, which commonly leads to a solution of low accuracy. In this paper, considering the inherent sparse nature of multiple impact forces, the idea of sparse deconvolution from signal/image processing is introduced to solve the ill-posed inverse problem of impact force identification. The primal-dual interior point method is applied to solve the convex optimization problem of impact force deconvolution, where minimizing the l2-norm is replaced by minimizing the l1-norm. Experiments on a two-input-two-output system are conducted on a shell structure to illustrate the advantage of the sparse deconvolution method. Owing to the sparse regularization term, the elements of the sparse solution are nearly zero in the unloading stage of the impact force, where the small noise from the observed response is greatly suppressed. Compared with the traditional Tikhonov regularization method, the proposed sparse deconvolution method greatly improves the identification accuracy of multiple-impact forces.

Baijie Qiao, Zhu Mao, Jinxin Liu, Xuefeng Chen
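
To make the l1-versus-l2 distinction in Chapter 1 concrete, the sketch below deconvolves a synthetic two-impact force with an l1-regularized solver. It is only an illustration under made-up values: the impulse response, noise level and regularization weight are invented, and a simple iterative shrinkage-thresholding (ISTA) loop stands in for the primal-dual interior point method used by the authors.

```python
import numpy as np

# Toy impulse response of the structure (hypothetical values, for illustration only)
n = 200
t = np.arange(n) * 1e-3                                    # 1 kHz sampling
h = np.exp(-30 * t) * np.sin(2 * np.pi * 40 * t)           # decaying oscillation
H = np.array([[h[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)])

# True multiple-impact force: sparse, two short impacts
f_true = np.zeros(n)
f_true[20], f_true[90] = 1.0, 0.6
y = H @ f_true + 1e-3 * np.random.randn(n)                 # noisy measured response

# l1-regularized deconvolution via ISTA (a stand-in for the primal-dual solver)
lam = 1e-2                                                 # regularization weight (tuning choice)
L = np.linalg.norm(H, 2) ** 2                              # Lipschitz constant of the gradient
f = np.zeros(n)
for _ in range(2000):
    z = f - H.T @ (H @ f - y) / L                          # gradient step on the data misfit
    f = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold promotes sparsity

print("two largest entries of the recovered force at samples:", np.sort(np.argsort(f)[-2:]))
```

Replacing the soft-threshold step with a ridge (l2) update smears the solution over the unloading stage, which is essentially the contrast the chapter exploits.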

Chapter 2. Validation of Container System Component Models for Drops

The validation of computational models that are used for simulating drops, shocks and other severely nonlinear environments presents very different challenges from the validation of lower-level vibration models. This paper details the tactics used for validating computational models for drop environments as applied to components of a container system, including the use of multi-code approaches, the use of testing, and the techniques used to compare the data from the various sources, giving examples. When validating a model for a highly nonlinear environment, such as a drop, it is important to consider the question that the model is to answer. If the question is about the damage that occurs in terms of plastic deformation or fracture, a qualitative approach will be taken to validation, whereas if the question is about something measurable, such as acceleration or strain, the approach to validation will be quantitative.

Thomas M. Hall, Philip R. Ind, Thomas J. Anthistle

Chapter 3. Validation of Container System Finite Element Models for IAEA Compliance

The International Atomic Energy Agency (IAEA) places requirements on container systems used to transport nuclear materials. To demonstrate compliance with these requirements it is necessary to use validated models alongside full system qualification tests. This paper discusses the approach currently being taken to demonstrate regulatory compliance for a container system. In this example the container must be subjected to the worst-case combination of a 9 m drop, a 1 m spigot intrusion test and a 30 min Liquid Fuel Fire (LFF). To aid in the verification and validation of the finite element models, two independent teams are conducting the modelling in two separate FE codes, in addition to using modal, sub-system and full system destructive tests. Ultimately it is intended that a capability will exist to allow the full sequence (including the LFF) to be modelled and simulated; this will allow multiple different scenarios and damage levels to be considered.

Philip R. Ind, Thomas M. Hall, Thomas J. Anthistle, Steve Nicholls

Chapter 4. Nonlinear Squeezing Wavelet Transform for Rotor Rub-Impact Fault Detection

Classical time-frequency analysis methods can depict the time-frequency structure of non-stationary signals. However, they may fall short in extracting weak components with small amplitude hidden in complex signals, because their time-frequency representation coefficients are proportional to the amplitude or energy of a signal. In this paper, we present a new data analysis method, called the nonlinear squeezing wavelet transform, to extract the weak feature of highly oscillating frequency modulation for rotor rub-impact faults. The time-frequency representation of the proposed method is independent of the signal amplitude and depends only on the signal phase, so it can be used to characterize the time-frequency pattern of non-stationary multi-component signals, especially for the detection of weak components. Experiments on simulated signals verify the effectiveness of the proposed method in weak signal detection. Finally, the validity of the method is demonstrated by extracting the feature of a weak rub-impact fault on a real rotor system.

Chaowei Tong, Xuefeng Chen, Shibin Wang

Chapter 5. Experimental Credibility and Its Role in Model Validation and Decision Making

Experiments are a critical part of the model validation process, and the credibility of the resulting simulations is itself dependent on the credibility of the experiments. The impact of experimental credibility on model validation occurs at several points throughout the model validation and uncertainty quantification (MVUQ) process. Many aspects of experiments involved in the development and verification and validation (V&V) of computational simulations will impact the overall simulation credibility. In this document, we define experimental credibility in the context of model validation and decision making. We summarize possible elements for evaluating experimental credibility, sometimes drawing from existing and preliminary frameworks developed for the evaluation of computational simulation credibility. The proposed framework is an expert elicitation tool for planning, assessing, and communicating the completeness and correctness of an experiment (“test”) in the context of its intended use, namely validation. The goals of the assessment are (1) to encourage early communication and planning between the experimentalist, computational analyst, and customer, and (2) to communicate experimental credibility. This assessment tool could also be used to decide between potential existing data sets to be used for validation. The evidence and story of experimental credibility will support the communication of overall simulation credibility.

Sarah L. Kieweg, Walt R. Witkowski

Chapter 6. An Experimental Case Study for Nonlinear Model Validation: Effect of Nonlinearities in an Aero-Engine Structure

Linear FE models are commonly validated with measured data obtained from experimental tests conducted under boundary conditions similar to those of the FE simulation. However, measured data at higher or operational amplitudes of vibration often exhibit evidence of nonlinear characteristics. Research has shown that the majority of the causes and sources of these nonlinearities are frequently local in nature, while a large proportion of the structure can be represented using linear theory. This paper presents the experimental investigations conducted on an aircraft structure ranging from the linear to the nonlinear regime; the aim of the investigation was to understand the influence of connecting accessories or components to the proposed aircraft structure. Broadband, sine-sweep and stepped-sine excitations were used to detect and characterise the nature of the nonlinear behaviour in the assembly.

Samson B. Cooper, Dario DiMaio, Ibrahim A. Sever, Sophoclis Patsias

Chapter 7. Finite Element Model Updating of a Connecting Structure Based on Strain

Model simplification and equivalence are inevitable during the modeling process. In this paper, model updating of a connecting structure under static conditions was conducted. The process used to obtain theoretical static responses is briefly introduced, and the correlation of static responses is defined. The fundamentals of model updating and of thin-layer elements are then described. To verify the effectiveness of the method, a connecting structure was taken as the object and a series of activities including structure modeling, model updating and response prediction were carried out. The connecting structure was divided into several substructures, each of which was meshed with solid elements. The interfaces between substructures were represented by thin-layer elements, and the boundary conditions were represented by bush elements. After a sensitivity analysis, the parameters to be calibrated were selected. Based on experimental tests under multiple load cases, the model updating of the connecting structure was accomplished. The results show that the updated model can reproduce the responses used in the updating process and, in addition, accurately predicts the responses that were not used in model updating.

Ming Zhan, Qintao Guo, Lin Yue, Baoqiang Zhang

Chapter 8. Nonlinearities of an Aircraft Piccolo Tube: Identification and Modeling

Piccolo tubes are part of the anti-icing system of aircraft wings and consist of titanium pipes inserted into the internal structure of the slat. Due to differential thermal expansion, clearances between the tube and its support are unavoidable and cause the overall system to exhibit highly nonlinear behavior resulting from impacts and friction. This paper addresses the identification and modeling of the nonlinearities present in the slat-Piccolo tube connection. The complete identification procedure, from nonlinearity detection and characterization to parameter estimation, is carried out based upon sine-sweep measurements. The use of several techniques, such as the acceleration surface method, makes it possible to understand the complex dynamics of the Piccolo tube and to build a reliable model of its nonlinearities. In particular, the parameters of nonsmooth nonlinear stiffness and damping mechanisms are estimated. The nonlinear model is finally validated on standard qualification tests for airborne equipment.

T. Dossogne, P. Trillet, M. Schoukens, B. Bernay, J. P. Noël, G. Kerschen

Chapter 9. Reliability Analysis of Existing Bridge Foundations for Reuse

Reuse of bridge foundations often requires determining the capacity of in-situ driven piles and drilled shafts. These piles have a proven history of load-carrying capacity, but often lack test data from which the capacity can be obtained. It is possible to estimate the pile capacity with empirical calculations based on pile geometry and soil properties, but these produce highly uncertain results due to the variable nature of soil and the impacts of the installation method. As a result, the LRFD code requires that lower resistance factors be used for these calculations than would be required for piles with test data. Estimation of pile capacity using empirical calculations therefore often produces overly conservative results due to these low resistance factors. Since the piles have been in service for the lifespan of the original bridge, they have proven that, as a system, they have more capacity than the total dead and live loads previously applied. The amount of load applied to each pile, however, is uncertain due to variability of the loading and uncertainty in how the load is distributed to the individual piles. This research proposes a reliability-based methodology to determine the capacity of driven piles or drilled shafts accounting for these uncertainties.

Nathan Davis, Masoud Sanayei
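
As a toy illustration of the reliability-based idea (not the chapter's calibrated methodology), the following Monte Carlo sketch estimates the probability that an uncertain pile resistance falls below an uncertain per-pile load; the lognormal distributions and their parameters are placeholders.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical lognormal models for pile resistance R and per-pile load effect S
R = rng.lognormal(mean=np.log(1500.0), sigma=0.30, size=n)   # resistance [kN]
S = rng.lognormal(mean=np.log(900.0), sigma=0.20, size=n)    # load effect [kN]

pf = np.mean(R < S)          # Monte Carlo estimate of the probability of failure
beta = -norm.ppf(pf)         # corresponding reliability index
print(f"pf ~ {pf:.2e}, reliability index beta ~ {beta:.2f}")
```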

Chapter 10. Recent Developments in Hardware-in-the-Loop Testing

Future applications of mechatronic systems will be characterized by a high degree of digitization enabling the integration of numerous innovative functions. The validation and reliability analysis of such complex systems often requires the realization of cost-intensive full system prototypes and the evaluation of field tests. Innovative technologies are therefore integrated slowly in industrial sectors that focus on system reliability. Hence, there is a strong interest in a reliability-oriented development and test process for complex mechatronic systems. The integration of real-time simulations in test environments allows efficient development and verification of the individual components of a mechatronic system in many cases. Currently, this especially applies to the test-driven development of embedded control units and their corresponding software. A reduced number of field tests, the automated running of test procedures and the application of error injection methods can be achieved by the widely used Hardware-in-the-Loop (HIL) technique. In signal-level HIL tests, an existing control unit is connected to a virtual real-time simulation of the residual system. If, however, the device under test includes a mechanical or power electrical interface, the coupling of the test object to a virtual residual system requires the application of a mechanical or power electrical HIL interface. Current activities aim at this extension of In-the-Loop technologies for the validation of mechanical and power electronic subsystems. This paper highlights the potential of combined signal-level, mechanical-level and power electrical HIL tests for the validation of complex mechatronic systems in an early phase of design. The paper also points out the key topics of test-driven development, real-time simulation and the realization of hybrid test environments by means of mechanical and power electrical HIL interfaces.

Jonathan Millitzer, Dirk Mayer, Christian Henke, Torben Jersch, Christoph Tamm, Jan Michael, Christopher Ranisch

Chapter 11. Assessing Structural Reliability at the Component Test Stage Using Real-Time Hybrid Substructuring

The propagation of uncertainties through complex systems is a challenging endeavor. While numerical simulations can be used to accurately predict the dynamic performance of structural systems, there are some instances where the dynamics and uncertainties of specific components may be less understood or difficult to accurately model. This paper will implement a structural reliability assessment employing the cyber-physical real-time hybrid substructuring (RTHS) method to combine a numerical model of a larger structural system, incorporating uncertainty in specific parameters, with a physical test specimen of a component of the system while fully incorporating the system-level dynamic interactions and uncertainty propagation. This RTHS approach will allow for uncertainty and reliability to be addressed in the early stage of the design process as components become available and the remainder of the system remains numerically modeled. A small-scale RTHS experiment will be used to demonstrate the probability of failure of a spring-mass-damper system with a relatively small number of component tests by employing the previously proposed Adaptive Kriging-Hybrid Simulation (AK-HS) reliability method.

Connor Ligeikis, Alex Freeman, Richard Christenson

Chapter 12. Modal Identification Using a Roving Actuator and a Fixed Sensor

Experimental modal analysis typically uses an actuator to apply an input force on a structure and multiple fixed sensors to record the responses of the structure. An appealing alternative to using many stationary sensors is to use a roving actuator and a single fixed sensor. In this study, a method to identify the modal properties of a beam from input-output data obtained using a roving actuator and a fixed accelerometer is presented. In multiple test setups, the roving actuator applies input forces at different points on the beam, while the fixed sensor measures the response at a particular location of the beam during all tests. Using the accumulated input-output measurements, the modal identification of the beam is performed in two distinct steps: identification of natural frequencies and modal damping ratios, followed by an estimation of mode shape components at all excited degrees of freedom. The variability in the estimated modal parameters due to the presence of measurement noise is studied using Monte Carlo simulations.

Rajdip Nayek, Suparno Mukhopadhyay, Sriram Narasimhan
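
A minimal sketch of how the two identification steps described in Chapter 12 might look on synthetic data: natural frequencies are picked from the averaged FRF magnitude, and mode shape components at the excited points are read from the imaginary part of each FRF at resonance. The three-mode modal model, frequencies and shapes are hypothetical, and this peak-picking/quadrature scheme is a generic stand-in for the authors' estimator.

```python
import numpy as np

# Synthetic 3-mode FRF data standing in for the roving-actuator / fixed-sensor measurements
freqs = np.linspace(1, 100, 2000)                  # Hz
wn = 2 * np.pi * np.array([12.0, 38.0, 71.0])      # hypothetical natural frequencies [rad/s]
zeta = np.array([0.01, 0.015, 0.02])               # hypothetical damping ratios
phi = np.array([[0.5, 1.0, 0.7],                   # hypothetical mode shapes (rows = points)
                [1.0, 0.1, -1.0],
                [0.7, -1.0, 0.8]])
sensor = 1                                         # index of the fixed accelerometer location

w = 2 * np.pi * freqs
frf = np.zeros((3, freqs.size), dtype=complex)     # receptance between sensor and each hit point
for p in range(3):                                 # roving excitation point p
    for r in range(3):                             # modal summation
        frf[p] += phi[sensor, r] * phi[p, r] / (wn[r]**2 - w**2 + 2j * zeta[r] * wn[r] * w)

# Step 1: natural frequencies from peaks of the averaged FRF magnitude
mag = np.abs(frf).sum(axis=0)
bands = [(5, 25), (25, 55), (55, 95)]
peak_idx = [np.argmax(mag[(freqs > lo) & (freqs < hi)]) + np.argmax(freqs > lo)
            for lo, hi in bands]
print("identified natural frequencies [Hz]:", freqs[peak_idx])

# Step 2: mode shape components at the excited points from Im(FRF) at each resonance
shapes = np.imag(frf[:, peak_idx])
shapes /= np.abs(shapes).max(axis=0)               # normalize each mode (column)
print("estimated mode shapes (columns = modes):\n", np.round(shapes, 2))
```

By reciprocity, the FRF between a roving excitation point and the fixed sensor equals the FRF a fixed exciter and roving sensor would measure, which is why the accumulated set of FRFs yields shape components at every excited degree of freedom.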

Chapter 13. Validation of Lightweight Antenna Reflector Model for Environmental Acoustic Testing Operating Conditions

Environmental testing is required in the space industry to evaluate the survivability of space hardware in the launch environment. Such hardware is designed to meet high demands in terms of performance and low weight (e.g. aiming to maximise the payload weight and increase fuel efficiency). Solar panels and antenna reflectors, typically made of carbon fiber reinforced polymers and honeycomb, are examples of sub-systems presenting large surfaces of lightweight materials which are particularly sensitive to acoustic loads. Environmental acoustic testing consists of reproducing the acoustic field of a Launch Vehicle (LV) with an acoustic power distribution comparable to the operating conditions. The standard way to reproduce the acoustic loading is the so-called Reverberant Field Acoustic eXcitation (RFAX) test, which is a rather costly and time-consuming test method. Therefore, at sub-system level, dynamic tests other than acoustic (especially those for model validation) are performed only if strictly necessary. An alternative to RFAX is Direct Field Acoustic eXcitation (DFAX) testing. This test method has emerged as a more cost-efficient qualification technique which, in addition, presents features that can potentially improve the reproducibility of the launch environment (e.g. explicit setting of the spatial correlation properties of the acoustic field). In this paper, Operational Modal Analysis (OMA) is applied to determine the dynamic characteristics of a parabolic antenna reflector under DFAX operating conditions. This approach explores the possibility of exploiting data collected during qualification tests for modal model validation purposes as well. The objective of this research is the validation of the lightweight antenna reflector model by correlating numerical modal analysis results against OMA results. Modal-based correlation techniques, followed by sensitivity analysis, help with error localisation and with the selection of proper model updating parameters. The output of this correlation study then allows the model to be updated, bringing the numerical modal model into better agreement with the experimental data acquired during the environmental acoustic test.

M. Alvarez Blanco, R. Hallez, A. Carrella, K. Janssens, B. Peeters

Chapter 14. Confidence in the Prediction of Unmeasured System Output Using Roll-Up Methodology

This research is concerned with how to use available experimental data from tests of lower complexity to inform the prediction regarding a complicated system where no test data is available. Typically, simpler test configurations are used to infer the unknown parameters of an engineering system. Then the calibration results are propagated through the system model to predict the uncertainty in the system response. However, it is important to note that parameter estimation results are affected by the quality of the model used to represent the test configuration. Therefore, it is necessary that the model of the test configuration be also subjected to rigorous validation testing. Then the calibration and validation results for the test configurations need to be integrated to produce the distributions of the parameters to be used in the system-level prediction. Such a systematic roll-up methodology that integrates calibration and validation results at multiple levels of test configurations has been previously established (Sankararaman and Mahadevan, Reliab Eng Syst Saf 138:194–209, 2015). The current work develops an approach to quantify the confidence in the use of lower-level test data (through the roll-up methodology) in predicting system-level response. The propagated roll-up distributions are compared against simulated output distributions from a calibrated system model based on synthetic data; this comparison is done through the model reliability metric (Rebba and Mahadevan, Reliab Eng Syst Saf 93(8):1197–1207, 2008) and results in a quantified roll-up confidence. Then an optimization procedure is formulated to maximize the roll-up confidence by selecting the most valuable calibration and validation tests at the lower levels. The proposed methods for both the forward and inverse UQ problems are applied to the multi-level Sandia dynamics challenge problem (Red-Horse and Paez, Comput Methods Appl Mech Eng 197(29–32):2578–2584, 2008). This work is funded by Sandia National Laboratories.

Kyle Neal, Chenzhao Li, Zhen Hu, Sankaran Mahadevan, Joshua Mullins, Benjamin Schroeder, Abhinav Subramanian

Chapter 15. Application of the Transfer Matrix Method for the Analysis of Lateral Vibrations of Drillstrings with Parameter Uncertainties

To compare the susceptibility of different drillstring assemblies to lateral vibrations while taking parameter uncertainties into consideration, a computationally efficient model based on the transfer matrix method (TMM) in combination with a modal reduction is proposed in this study. Changing boundary conditions along the drilling trajectory are taken into account by combining the linear dynamic analysis from the TMM model with a static solution obtained from a finite element model. The statistical evaluation of the results enables a comparison of the dynamic behavior of different drillstring configurations during an early design stage.

Ilja Gorelik, Mats Wiese, Lukas Bürger, Sebastian Tatzko, Hanno Reckmann, Andreas Hohl, Jörg Wallaschek
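
The transfer matrix idea itself is easy to show on a lumped system. The sketch below chains point matrices (masses) and field matrices (springs) for a hypothetical fixed-free two-mass chain and locates natural frequencies where the end boundary condition is satisfied; the chapter's drillstring model uses continuous beam field matrices and a modal reduction on top of the same chaining logic, and all values here are placeholders.

```python
import numpy as np

def field_spring(k):
    """Transfer matrix across a massless spring; state vector = [displacement, force]."""
    return np.array([[1.0, 1.0 / k],
                     [0.0, 1.0]])

def point_mass(m, w):
    """Transfer matrix across a lumped mass in harmonic motion at frequency w [rad/s]."""
    return np.array([[1.0, 0.0],
                     [-m * w**2, 1.0]])

# Hypothetical fixed-free chain: wall - k1 - m1 - k2 - m2 - free end
k = [2.0e6, 1.5e6]      # N/m (placeholder values)
m = [50.0, 30.0]        # kg  (placeholder values)

def end_force(w):
    """Force at the free end for unit force at the wall; it vanishes at a natural frequency."""
    T = np.eye(2)
    for ki, mi in zip(k, m):
        T = point_mass(mi, w) @ field_spring(ki) @ T    # chain the segment matrices
    return (T @ np.array([0.0, 1.0]))[1]                # wall state: zero displacement, unit force

ws = np.linspace(1.0, 2000.0, 20000)                    # frequency scan [rad/s]
vals = np.array([end_force(w) for w in ws])
idx = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
print("natural frequencies [Hz]:", ws[idx] / (2 * np.pi))
```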

Chapter 16. Consolidation of Weakly Coupled Experimental System Modes

Normal modes of thin-walled axisymmetric shell structures are characterized by (1) overall body and (2) shell breathing families. By utilization of “body” deformation trial vectors, the two modal families are readily identified and separated. Local non-symmetrical structural features and/or imperfections cause body and breathing modes to mix (weak coupling) to form “fragmented” body mode clusters. Employment of singular value decomposition of experimental body mode cluster kinetic energies is found to consolidate the “fragments” back to ideally axisymmetric system body modes.

Robert N. Coppolino

Chapter 17. Fatigue Monitoring and Remaining Lifetime Prognosis Using Operational Vibration Measurements

A framework is presented for real-time monitoring of fatigue damage accumulation and prognosis of the remaining lifetime at hotspot locations of new or existing structures by combining output-only vibration measurements from a permanently installed, optimally located, sparse sensor network with the information built into high-fidelity computational mechanics models. To produce fatigue damage accumulation maps at component and/or system level, valid for the monitoring period, the framework integrates developments in (a) fatigue damage accumulation (FDA) and (b) stress time history prediction under loading and structural modeling uncertainties based on monitoring information (Papadimitriou et al., Struct Control Health Monit 18(5):554–573, 2011). Methods and computational tools include, but are not limited to, the use of Kalman-type filters for state and stress response reconstruction based on the sensor information (Eftekhar Azam et al., Mech Syst Signal Process 60:866–886, 2015; Lourens et al., Mech Syst Signal Process 29:310–327, 2012), as well as stress cycle counting techniques, S-N curves and fatigue damage accumulation laws (Miner, Appl Mech Trans (ASME) 12(3):159–164, 1945; Palmgren, VDI-Z 68(14):339–341, 1924) to estimate fatigue from the reconstructed stress time histories at numerous hotspot locations. The FDA maps provide realistic fatigue estimates consistent with the actual operational conditions experienced by an individual structure. Combined with models of future loading events and their uncertainties, assumed or rationally estimated during the long-term monitoring period, the continuously updated FDA maps can be used to predict the remaining fatigue lifetime maps and associated uncertainties. These developments are valuable for planning cost-effective maintenance strategies, eventually reducing the life-cycle maintenance cost.

Costas Papadimitriou, Eleni N. Chatzi, Saeed Eftekhar Azam, Vasilis K. Dertimanis
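
A compact sketch of the final bookkeeping stage of such a framework: turning a (here synthetic) hotspot stress history into a Miner's-rule damage estimate. Simple turning-point range counting stands in for full rainflow counting, and the S-N curve constants are invented; none of this reproduces the chapter's filter-based stress reconstruction.

```python
import numpy as np

# Stand-in for a reconstructed hotspot stress history [MPa] (variable-amplitude, noise-free here)
t = np.linspace(0.0, 60.0, 60_000)
stress = 40.0 * np.sin(2 * np.pi * 1.5 * t) + 10.0 * np.sin(2 * np.pi * 0.35 * t + 1.0)

# Turning points; successive-range counting is used here instead of full rainflow counting
d = np.diff(stress)
turning = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0] + 1
peaks = stress[turning]

# Hypothetical S-N curve N = A * S^(-m_exp), with the stress range S in MPa
A, m_exp = 1.0e12, 3.0

damage = 0.0
for s1, s2 in zip(peaks[:-1], peaks[1:]):
    S = abs(s2 - s1)                   # half-cycle stress range
    if S < 5.0:                        # ignore negligible ranges
        continue
    N_allow = A * S ** (-m_exp)        # cycles to failure at this range
    damage += 0.5 / N_allow            # Miner's rule, half cycle at a time

print(f"Miner damage accumulated over the record: {damage:.3e}")
print(f"extrapolated life in records to D = 1: {1.0 / damage:.2e}")
```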

Chapter 18. Feasibility of Applying Phase-Based Video Processing for Modal Identification of Concrete Gravity Dams

Hydraulic structures are among the most essential civil infrastructure and have played a critical role throughout history in water storage and electricity generation, particularly in developing countries. Because of their importance and the catastrophic consequences of unexpected failures in hydraulic infrastructure, the monitoring and maintenance of dams must be handled meticulously and with high precision. Among several measurement techniques, optical/video information, as a modern non-contact sensing technology, is receiving increasing attention for interpreting structural responses and system status awareness. By processing the acquired video, full-field system information becomes available, which may later be applied to Experimental Modal Analysis, Structural Health Monitoring (SHM), System Identification, etc. Such non-contact full-field sensing technologies avoid the installation of very large numbers of conventional sensors on structures of large dimension. Within this context, the feasibility of applying Phase-Based Motion Estimation (PME) and video magnification has been studied for structural identification purposes on a concrete gravity dam subject to white-noise excitations. First, the PME and motion magnification algorithms are validated by comparing a lab-scale cantilever beam test with a numerical simulation. Next, the modal dynamic procedure in ABAQUS is carried out and the time history response of the dam is obtained. The simulated motion video of the dam is then exported and processed using PME and magnification. The video processing results are finally compared with the results from the frequency procedure in ABAQUS. The results obtained prove the concept of using PME and video magnification as a successful methodology for the modal identification of large-scale concrete gravity dams.

Qi Li, Gaohui Wang, Aral Sarrafi, Zhu Mao, Wenbo Lu

Chapter 19. Using 2D Phase-Based Motion Estimation and Video Magnification for Binary Damage Identification on a Wind Turbine Blade

Videos (sequences of images), as three-dimensional signals, may be considered a very rich source of information for several applications in structural dynamics identification and structural health monitoring (SHM) systems. In this paper, high-speed cameras are used to record the sequence of images (video) of a baseline and a damaged wind turbine blade (WTB) while vibrating under external loading. Among several computer vision algorithms for motion extraction from video, a phase-based motion estimation technique is used to extract the response of both the baseline and the damaged wind turbine blade. Modal parameters (natural frequencies and operating deflection shapes) are used as damage-sensitive features in order to detect the occurrence of damage in the wind turbine blade. The first four natural frequencies of both the baseline and the damaged wind turbine blade are extracted by analyzing, in the frequency domain, the motion estimated by the phase-based motion estimation. The motion magnification algorithm is also utilized to visualize and extract the operating deflection shapes of the wind turbine blade, which may later be used as an indicator of the presence of damage. It is shown that changes in the dynamic behavior of the wind turbine blade result in deviations from the nominal natural frequencies and operating deflection shapes, and that the damaged wind turbine blade can be differentiated from the baseline WTB using this non-contact measurement approach.

Aral Sarrafi, Zhu Mao
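
The phase-based extraction itself is beyond a short sketch, but the downstream step the abstract describes is easy to illustrate: given per-pixel motion time series (synthesized here from a hypothetical two-mode blade), natural frequencies come from spectral peaks averaged over pixels and an operating deflection shape from the per-pixel spectral amplitude at a peak. Comparing these features between baseline and damaged records is the binary damage check; everything numerical below is invented.

```python
import numpy as np

fs, T = 200.0, 20.0                          # sampling rate [Hz] and record length [s]
t = np.arange(0.0, T, 1 / fs)
x = np.linspace(0, 1, 50)                    # 50 "pixel" positions along the blade (normalized)

# Synthetic per-pixel motion: two modes with hypothetical frequencies and shapes, plus noise
rng = np.random.default_rng(2)
motion = (1.0 * np.outer(np.sin(np.pi * x / 2), np.sin(2 * np.pi * 4.0 * t)) +
          0.3 * np.outer(np.sin(3 * np.pi * x / 2), np.sin(2 * np.pi * 18.0 * t)) +
          0.02 * rng.standard_normal((x.size, t.size)))

# Natural frequencies: peaks of the motion spectrum averaged over pixels
spec = np.abs(np.fft.rfft(motion, axis=1))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
avg = spec.mean(axis=0)
avg[freqs < 1.0] = 0.0                       # ignore the quasi-static bins
peak_bins = np.argsort(avg)[-2:]
print("identified frequencies [Hz]:", np.sort(freqs[peak_bins]))

# Operating deflection shape at the dominant peak: spectral amplitude per pixel
ods = spec[:, peak_bins[-1]]
ods /= ods.max()
```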

Chapter 20. Hierarchical Bayesian Calibration and Response Prediction of a 10-Story Building Model

This paper presents Hierarchical Bayesian model updating of a 10-story building model based on identified modal parameters. The identified modal parameters are numerically simulated using a frame model (exact model) of the considered 10-story building and then polluted with Gaussian white noise. Stiffness parameters of a simplified shear model (representing modeling errors) are considered as the updating parameters. In the Hierarchical Bayesian framework, the stiffness parameters are assumed to follow a probability distribution (e.g., normal) and the parameters of this distribution are updated as hyperparameters. The error functions are defined as the difference between model-predicted and identified modal parameters of the first few modes and are also assumed to follow a predefined distribution (e.g., normal) with unknown parameters (mean and covariance), which are also estimated as hyperparameters. The Metropolis-Hastings within Gibbs sampler is employed to estimate the updating parameters and hyperparameters. The uncertainties of the structural parameters as well as of the error functions are propagated in predicting the modal parameters and response time histories of the building.

Mingming Song, Iman Behmanesh, Babak Moaveni, Costas Papadimitriou
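
The full hierarchical scheme with hyperparameters is more involved than a short example, but the core sampling step is illustrated below: a random-walk Metropolis-Hastings chain over the story stiffnesses of a two-story shear model, updated with noisy "identified" natural frequencies. Masses, stiffnesses, noise level and the flat prior are all assumptions for illustration only, not the chapter's model.

```python
import numpy as np

rng = np.random.default_rng(3)

def frequencies(k1, k2, m=1000.0):
    """Natural frequencies [Hz] of a 2-DOF shear model with story stiffnesses k1, k2 [N/m]."""
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    lam = np.linalg.eigvalsh(K / m)            # equal story masses, so K/m stays symmetric
    return np.sqrt(lam) / (2 * np.pi)

# "Identified" modal data: simulated from assumed true stiffnesses, polluted with noise
k_true = np.array([8.0e6, 6.0e6])
f_obs = frequencies(*k_true) + 0.05 * rng.standard_normal(2)
sigma = 0.05                                   # assumed std of the frequency error [Hz]

def log_post(k):
    """Log posterior: flat prior on positive stiffnesses, Gaussian error model."""
    if np.any(k <= 0):
        return -np.inf
    return -0.5 * np.sum(((frequencies(*k) - f_obs) / sigma) ** 2)

# Random-walk Metropolis-Hastings over the two stiffness parameters
k = np.array([7.0e6, 7.0e6])
lp = log_post(k)
samples = []
for _ in range(20_000):
    k_prop = k + 2.0e5 * rng.standard_normal(2)
    lp_prop = log_post(k_prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        k, lp = k_prop, lp_prop
    samples.append(k.copy())

samples = np.array(samples[5_000:])            # discard burn-in
print("posterior mean stiffness [N/m]:", samples.mean(axis=0))
print("posterior std of stiffness [N/m]:", samples.std(axis=0))
```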

Chapter 21. Scaling and Structural Similarity Under Uncertainty

Fiber reinforced composite structures require extensive experimental evaluation for validation due to their heterogeneous properties and manufacturing variability. A scaled model replicating the structural characteristics of its full-scale parent structure facilitates and expedites the assessment of the mechanical performance of large composite structures. This study primarily investigates the problems associated with the design of a scaled model and its similarity to its full-scale parent structure when there is uncertainty in the design parameters. A successful design should be robust to its assumptions and to the sources of uncertainty. In this study, scaled-down composite I-beams are designed from a reference full-scale I-beam representing the spar caps and shear web structure inside a utility-scale wind turbine blade. Similitude analysis is used in conjunction with Info-Gap theory to design scaled composite I-beams under uncertainty in the design parameters. The scaling laws for the strain field of the composite I-beam are derived and used as a metric to design scaled models that are robust to the uncertainties in the design parameters. The range of influence of uncertainty in different design parameters is investigated. The effect of uncertainty on the accuracy of the scaled model in predicting the strain field of the full-scale structure is studied under uncertainty in all design parameters, and the strain field of the full-scale I-beam is predicted using that of the scaled model for the worst-case scenario.

Mo E. Asl, Christopher Niezrecki, James Sherwood, Peter Avitabile

Chapter 22. Bayesian History Matching for Forward Model-Driven Structural Health Monitoring

Computer models are widely utilised in many structural dynamics applications; however, their use depends on calibration against observational data. A complexity in calibrating a computer model is that, even when the ‘true’ input parameters to the model are known, there may be model discrepancy caused by the simplification or absence of certain physics. As a consequence, the calibration technique employed must incorporate a mechanism for dealing with model discrepancy. Bayesian history matching is a process of using observed data to identify and discard areas of the computer model’s parameter space that would result in outputs that are unlikely given the observational data. This is performed using an implausibility metric that encompasses uncertainties associated with observational measurements and model discrepancy. The method employs this metric to identify a non-implausible space (i.e., parameter combinations that are likely to have produced the observed outputs). A maximum a posteriori (MAP) approach can be used to obtain the calibrated computer model outputs from the non-implausible space. Model discrepancy between the calibrated computer model and observational data can then be inferred using a Gaussian process (GP) regression model. This paper applies Bayesian history matching in order to calibrate a computer model for forward model-driven structural health monitoring (SHM). Quantitative metrics are used to compare experimental and predicted damage features from the combined Bayesian history matching and GP approach.

P. Gardner, C. Lord, R. J. Barthorpe
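
A minimal sketch of the implausibility calculation at the heart of history matching, on a one-parameter toy simulator: parameter values whose predicted output lies within roughly three combined standard deviations (observation noise plus assumed model discrepancy) of the observation are retained as non-implausible. The simulator, observed value and variances below are invented.

```python
import numpy as np

# Toy simulator standing in for the computer model: one input, one output feature
def simulator(theta):
    return 2.0 * np.sin(theta) + 0.1 * theta ** 2

z_obs = 1.3                # observed damage feature (placeholder value)
var_obs = 0.05 ** 2        # observational (measurement) variance
var_disc = 0.10 ** 2       # assumed model-discrepancy variance
var_code = 0.0             # emulator/code uncertainty (zero here: simulator run directly)

theta = np.linspace(-3.0, 3.0, 601)
I = np.abs(z_obs - simulator(theta)) / np.sqrt(var_obs + var_disc + var_code)

keep = theta[I < 3.0]      # conventional cutoff: implausibility below 3
print(f"{keep.size} of {theta.size} grid points are non-implausible,"
      f" spanning roughly [{keep.min():.2f}, {keep.max():.2f}]")
```

In a setting like the chapter's, the direct simulator call would typically be replaced by an emulator prediction, with the emulator variance entering the denominator as the code-uncertainty term, and the metric would be evaluated over multiple output features.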

Chapter 23. Augmented Reality for Next Generation Infrastructure Inspections

Most infrastructure in America is approaching or has exceeded its intended design life. As infrastructure ages, it poses a danger to citizens and the environment. Disasters like the failure of the Oroville Dam emphasize the importance of preventing infrastructure failure. Despite Structural Health Monitoring (SHM) emerging as the key discipline for evaluating the integrity of structures, human-based visual inspections remain the dominant technique. The main factors behind the slow adoption of SHM technology are its high installation and maintenance costs, the low confidence level of decision makers in the technology, and the structural community's familiarity with visual inspections. One major drawback of visual inspection is that it can be highly subjective, relying on a small number of manually recorded data points. Inspectors use tape measures, digital cameras, and human senses. These tools are inadequate to capture and document the state of infrastructure at the resolution needed for tracking damage progression. When working with infrastructure, it is desirable to have high-resolution measurement techniques to detect, localize, and characterize the state of a structure. The present work investigates Augmented Reality (AR) to minimize the drawbacks of visual inspection. AR is a technology that can superimpose holograms onto the real world. In this work an AR headset is used to enhance the ability of human inspectors to perform infrastructure inspections. The headset features RGB and depth cameras, accelerometers, wireless communication, microphones, and stereo sound. These sensors can capture a high-resolution 3D measurement of the infrastructure being inspected along with RGB photos that can be overlaid on the geometry. The 3D measurement can be used to analyze the state of the structure over time and track damage progression. The 3D model can also be used as a base on which to overlay data captured with diverse sensors. The proposed technique will ultimately improve inspectors' ability to assess risk on site, make informed decisions faster, analyze defect growth over time, and create high-resolution documentation of structural inspections that will reduce the variability inherent in manual inspections. Overall, AR is a promising technique for infrastructure inspection, mapping infrastructure on site, generating models, and providing consistency between inspectors. This technology has considerable potential for improving the quality of infrastructure inspections.

JoAnn P. Ballor, Oscar L. McClain, Miranda A. Mellor, Alessandro Cattaneo, Troy A. Harden, Philo Shelton, Eric Martinez, Ben Narushof, Fernando Moreu, David D. L. Mascareñas

Chapter 24. A Distribution-Based Damping Estimation Method for Random Vibration Response and Its Applications

In this paper, a damping estimation method based on the distribution of zero-crossing times, and its applications, are presented. The method is empirically derived and is based on the Rice distribution operating on the probability distribution of zero-crossings. The method's assertion is that a relationship exists between this probability distribution and the bandwidth of a mechanical system, and that this can be used as a means of increasing confidence in more traditional damping estimation methods. The method is applicable to responses that are due to broadband random excitation featuring a mono-harmonic response. Given that a multitude of modes will be active in aero-engines, its application has to be preceded by a suitable band-pass filter to isolate the modes of interest. Such filtering is shown to affect the damping estimates. Controlled laboratory tests are performed to verify the accuracy of the method and to study the effects of the preceding filtering among other factors such as modal density and data length. The performance of the method is compared with Fourier-transform-based damping estimation methods. The results of applying the method to real engine measurements and to carefully controlled laboratory tests are also presented.

Ibrahim A. Sever

Chapter 25. A Case Study for Integrating Comp/Sim Credibility and Convolved UQ and Evidence Theory Results to Support Risk Informed Decision Making

A case study highlighting the computational steps to establish credibility of a solid mechanics model and to use the compiled evidence to support quantitative program decisions is presented. An integrated modeling and testing strategy at the commencement of the CompSim (Computational Simulation) activity establishes the intended use of the model and documents the modeling and test integration plan. A PIRT (Phenomena Identification and Ranking Table) is used to identify and prioritize physical phenomena and perform gap analysis in terms of necessary capabilities and production-level code feature implementations required to construct the model. At significant stages of the project a PCMM (Predictive Capability Maturity Model) assessment, which is a qualitative expert elicitation based process, is performed to establish the rigor of the CompSim modeling effort. These activities are necessary conditions for establishing model credibility, but they are not sufficient because they provide no quantifiable guidance or insight about how to use and interpret the modeling results for decision making. This case study describes a project to determine the critical impact velocity beyond which a device is no longer guaranteed to function. Acceleration, weld failure and deformation based system integrity metrics of an internal structure are defined as QoIs (Quantities of Interest). A particularly challenging aspect of the case study is that predictiveness of the model for different QoIs is expected to vary. A solid mechanics model is constructed observing program resource limitations and analysis governance principles. An inventory of aleatory, computational and model form uncertainties is assembled, and strategies for their characterization are established. Formal UQ (Uncertainty Quantification) over the aleatory random variables is performed. Validation metrics are used to evaluate discrepancies between model and test data. At this point, the customers and the CompSim team agree that the model is useful for qualitative decisions such as design trades but its utility for quantitative conclusions including demonstration of compliance with requirements is not established. Expert judgment from CompSim SMEs is elicited to bound the effects of known uncertainties not currently modeled, such as the effect of tolerances, as well as to anticipate unknown uncertainties. The SME judgement also considers the expected accuracy variation of the different QoIs as recorded by previous organizational history with similar hardware, gaps identified by the PIRT, and completeness of PCMM evidence. Elicitation of the integrated team consisting of system engineering and CompSim practitioners results in quantified requirements expressed as ranges on acceptance threshold levels of the QoIs. Evidence theory is applied to convolve quantitative and qualitative uncertainties (aleatory UQ, numerical, model form uncertainties and SME judgement) resulting in belief and plausibility cumulative distributions at several impact velocities. The process outlined in this work illustrates a structured, transparent, and quantitative approach to establishing model credibility and supporting decisions by an integrated multi-disciplinary project team.

G. Orient, V. Babuska, D. Lo, J. Mersch, W. Wapman

Chapter 26. Material Parameter Identification and Response Prediction of Shearing Process for Flying Shear Machine Based on Model Validation

This paper studies the simulation of the shearing process of a certain type of flying shear machine. In the finite element simulation, chip formation and cutting ability are affected not only by the high temperature of the bar itself and the impact velocity of the cutter, but also by the material's stress-strain curves at different strain rates and by the fracture model. Even for the same bar temperature and impact velocity, uncertainty in the stress-strain curves at different strain rates, in the fracture model and in the critical damage factor means that the maximum shear force and shear punch depth are not the same. Therefore, in order to obtain a more accurate finite element model and response prediction, it is necessary to identify the uncertain parameters of the material constitutive model and fracture criterion. According to the principle of equivalent energy, the research group designed a falling-hammer punching test rig, and multiple groups of impact shear tests were conducted on high-temperature (700–900 °C) 1Cr18Ni9Ti bars of diameters ϕ10 and ϕ20. With the aid of a data acquisition instrument, the acceleration during the impact shearing process was collected. Numerical simulation of the punching and shearing tests was also conducted based on nonlinear metal-forming finite element analysis. Using the ϕ20 bar punching test results, the simulation results and a model updating method, a parameter identification approach for the stress-strain curves at different strain rates and for the critical damage factor in the material fracture criterion of the high-temperature bar during punching is studied. The prediction of the shearing process for the ϕ10 bar was then verified by comparison with the test results. After reasonable and accurate material parameters were obtained, numerical simulation of a ϕ160 high-temperature bar was carried out for the real flying shear machine under the equivalent impact mass and shear speed. The parameter identification method has practical significance for predicting and optimizing the shearing performance of different types of flying shears cutting section steel of different materials and cross-sections. The results show that the validation method based on the combination of test data and model updating is effective and can be applied to identify and predict material parameters for similar high-temperature bar shearing problems in engineering.

Hongbo Huang, Qintao Guo, Mingli Yu, Yanhe Tao, Yelan Wang, Ming Zhan

Chapter 27. Probabilistic Maintenance-Free Operating Period via Bayesian Filter with Markov Chain Monte Carlo (MCMC) Simulations and Subset Simulation

This paper presents a probabilistic approach, via a Bayesian filter (BF) with Markov chain Monte Carlo (MCMC) simulations and subset simulation (SS), to determine the probabilistic maintenance-free operating period (MFOP) for probabilistic lifing assessment of aircraft fatigue-critical components. A state transition function representing the virtual damage growth of a component and a measurement function representing the SHM measurements of the component are defined. The state transition function is described by a typical Paris equation for fatigue crack propagation. The measurement functions, which describe the relationship between the damage features derived from SHM signals and the damage sizes, are assumed in this study. Damage tolerance (DT) and risk-based methodologies are used for fracture-based reliability assessment. Random samples from the posterior joint probability density function of initial flaw size and crack growth rate are generated with information obtained through structural health monitoring (SHM) systems. Subset simulation (SS) is used in conjunction with MCMC in order to determine small probabilities of failure with high efficiency. The results show that the combined MCMC-SS methodology is two orders of magnitude more efficient than MCMC alone.

Michael Shiao, Tzi-Kang Chen, Zhu Mao
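
A toy version of the forward problem in Chapter 27: propagate (hypothetical) posterior samples of initial flaw size and Paris-law coefficient through a crack growth integration and estimate a probability of failure by plain Monte Carlo. Subset simulation, the Bayesian filter and the SHM measurement model are not reproduced, and all numbers are placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000

# Hypothetical posterior samples of initial flaw size a0 [m] and Paris coefficient C
a0 = rng.lognormal(np.log(0.5e-3), 0.20, n)     # initial flaw size
C = rng.lognormal(np.log(5.0e-12), 0.30, n)     # Paris coefficient (stress intensity in MPa*sqrt(m))
m_exp = 3.0                                     # Paris exponent (held fixed here)
dS, Y = 120.0, 1.12                             # stress range per cycle [MPa] and geometry factor
a_crit = 10.0e-3                                # critical crack size [m]

def crack_after(a, C, cycles, step=500):
    """Integrate da/dN = C * (Y * dS * sqrt(pi * a))^m with a coarse explicit scheme."""
    a = a.copy()
    for _ in range(cycles // step):
        dK = Y * dS * np.sqrt(np.pi * a)
        a = np.minimum(a + step * C * dK ** m_exp, 5 * a_crit)   # cap to avoid numerical blow-up
    return a

cycles = 600_000
pf = np.mean(crack_after(a0, C, cycles) > a_crit)
print(f"P(crack exceeds critical size within {cycles} cycles) ~ {pf:.3e}")
```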

Chapter 28. Bayesian Model Updating of a Damaged School Building in Sankhu, Nepal

This paper discusses Bayesian model updating of a damaged four-story masonry-infilled reinforced concrete structure using recorded ambient vibration data. The building, located in Sankhu, Nepal, was severely damaged during the 2015 Gorkha earthquake and its aftershocks. The ambient acceleration response of the structure was recorded with an array of 12 accelerometers following the earthquake. An output-only system identification method is deployed to extract the modal parameters of the building, including natural frequencies and mode shapes, from the collected ambient vibration data. These dynamic properties are used to calibrate the finite element model of the building, which is used to simulate the response during the earthquake. The initial three-dimensional finite element model is created using in-situ inspections. The goal of the Bayesian model updating procedure is to estimate the joint posterior probability distribution of the updating parameters, which are taken as the stiffnesses of different structural components. The posterior probability distribution is estimated based on the prior probability distribution of these parameters as well as the likelihood of the data. The error function in this study is defined as the difference between identified and model-predicted modal parameters. The posterior distributions are estimated using the Markov Chain Monte Carlo stochastic simulation method. Ultimately, the stiffness values estimated using the Bayesian model updating approach are compared with those from deterministic model updating conducted previously.

Mehdi M. Akhlaghi, Supratik Bose, Babak Moaveni, Andreas Stavridis

Chapter 29. Interpreting the Eigenbasis of Principal Component Analysis to Identify Design Space Regions of Interest

Computational models are a valuable tool for predicting the behavior of complex systems in regimes that are difficult or infeasible to test. Uncertainties in a model resulting from a lack of knowledge regarding the parameters needed to define system behavior raise concern about simulation predictions, particularly those in extreme operating regimes. In such cases, calibration and validation of numerical models across a large operational domain is beneficial, but creates a high-dimensional domain in which the model must be tested. Statistical decomposition methods offer the ability to reduce the dimensionality of a measured or simulated quantity while maintaining relevant correlations across the domain, therefore providing insight into important relationships between input parameters and system response throughout many loading scenarios. Principal Component Analysis (PCA) is a commonly used statistical decomposition method for producing a low-order projection from a higher dimensional space. Herein, we evaluate a specific component of PCA, the eigenvectors providing statistical weighting to explanatory variables, to identify variable model sensitivity across high-dimensional operational domains. The concept is demonstrated by simulation of an elasto-plastic material undergoing tensile testing at variable loading conditions. Results indicate that the eigenbasis reveals distinct relationships between a model's behavior and uncertain parameters, even when the relationship is not readily apparent and without relying on any a priori knowledge of system or material behavior.

B. Daughton, P. Alexeenko, D. Alexander, G. N. Stevens, E. M. Casleton
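
One simple way to read such a decomposition is sketched below with a toy elasto-plastic tension "simulation": PCA is applied to the ensemble of stress responses over several strain levels, and the leading eigenvectors, together with the correlation of their scores with the uncertain inputs, indicate which parameters drive the response in which part of the domain. The material model, distributions and strain levels are placeholders, not the chapter's study.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500

# Hypothetical uncertain inputs of an elasto-plastic tension "simulation" (placeholders)
E = rng.normal(200e9, 10e9, n)            # Young's modulus [Pa]
sig_y = rng.normal(350e6, 25e6, n)        # yield stress [Pa]
H_mod = rng.normal(2.0e9, 0.4e9, n)       # hardening modulus [Pa]

# Toy response over the operational domain: stress at several applied strain levels
strains = np.array([0.001, 0.005, 0.02, 0.05])
stress = np.empty((n, strains.size))
for j, eps in enumerate(strains):
    elastic = E * eps
    plastic = sig_y + H_mod * np.maximum(eps - sig_y / E, 0.0)
    stress[:, j] = np.minimum(elastic, plastic)    # bilinear elasto-plastic response

# PCA of the standardized response ensemble
X = (stress - stress.mean(axis=0)) / stress.std(axis=0)
eigval, eigvec = np.linalg.eigh(np.cov(X, rowvar=False))
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]
print("variance explained by each PC:", np.round(eigval / eigval.sum(), 3))
print("first eigenvector (loadings on the strain levels):", np.round(eigvec[:, 0], 3))

# Relate principal component scores back to the uncertain inputs
scores = X @ eigvec[:, :2]
for name, p in [("E", E), ("sig_y", sig_y), ("H_mod", H_mod)]:
    c = [np.corrcoef(p, scores[:, i])[0, 1] for i in range(2)]
    print(f"correlation of {name} with PC1, PC2: {c[0]:+.2f}, {c[1]:+.2f}")
```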

Chapter 30. Uncertainty Quantification in Nanoscale Impact Experiment in Energetic Materials

The finite element method is extensively used for the analysis of impact response in complex materials. The predictions of a finite element model may differ significantly from experiments due to uncertainties in the model, in the experimental measurements, and in parameters that are derived from experiments for model development. The quantification of parametric uncertainties associated with the numerical model, such as the parameters of the constitutive relation, is an important aspect that needs to be investigated for a credible computational prediction. This work considers uncertainty quantification in finite element modeling of nanoscale dynamic impact problems. A viscoplastic power-law constitutive model is obtained from nanoscale impact experiments on Hydroxyl-terminated polybutadiene (HTPB)-Ammonium Perchlorate (AP) samples. The constitutive model is used in a finite element model to simulate the impact experiments. The measured responses from the impact experiments and the FEM simulations are used to quantify the parametric uncertainties in the constitutive model for the analyzed HTPB-AP sample.

Chandra Prakash, I. Emre Gunduz, Vikas Tomar

Chapter 31. Analysis of Contact Dynamics Using Controlled Impact Excitations

In the current work, the contact dynamics during impact is analyzed using a repeatable impact excitation system. In general, accurate description of the force-deformation relationship during a dynamic impact event necessitates using semi-empirical or empirical models. The validity of these models under different conditions (the shape, material, surface condition, and impact velocity of the impacting bodies) depends on accurate experimental calibration of the model parameters. As such, an effective and reproducible calibration approach is critical to obtaining effective contact dynamics models. The prevailing calibration approaches for impact dynamics focus on identifying only one of those parameters, the coefficient of restitution (CoR), and only for a limited range of impact conditions. Estimating the contact stiffness and the power-law exponent requires direct, precise and simultaneous measurement of force and sub-micron level deformation (indentation). Furthermore, calibration and modeling uncertainty must be well understood to determine the applicability of contact dynamics models. To address these challenges, the objective of the current work is to use a repeatable impact excitation system (IES) and interferometry to enable calibration of contact dynamics models. The IES enables precise control of impact velocities, and high-resolution interferometric (Laser Doppler Vibrometer, LDV) measurement of the impacted surface and the impact tip provides accurate determination of the associated deformations. After describing the approach in detail, we present demonstrative case studies on stainless steel and aluminum surfaces, and obtain the calibration parameters for different models presented in the literature. During the calibration, the force and deformation levels are varied by controlling the velocity of the impact. Furthermore, an uncertainty quantification is conducted to determine the uncertainties of the identified parameters, to better establish the validity of the calibrated model as well as the capability of the approach. It is concluded that the presented approach is an effective means of calibrating and validating various contact dynamics models at different impact velocities.

Shivang Shekhar, Sudhanshu Nahata, O. Burak Ozdoganlar
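
A minimal sketch of the calibration step for a power-law contact model F = k·δ^n: synthetic force-indentation pairs (standing in for the IES/LDV measurements) are fitted by least squares in log-log space, and a bootstrap gives a rough parameter uncertainty. The contact parameters and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic force-indentation data standing in for the IES / LDV measurements
k_true, n_true = 5.0e9, 1.5                       # hypothetical contact parameters
delta = np.linspace(0.2e-6, 3.0e-6, 60)           # indentation depth [m]
force = k_true * delta**n_true * (1 + 0.03 * rng.standard_normal(delta.size))   # [N]

# Power-law fit F = k * delta^n by linear least squares in log-log space
A = np.column_stack([np.ones_like(delta), np.log(delta)])
coef, *_ = np.linalg.lstsq(A, np.log(force), rcond=None)
k_fit, n_fit = np.exp(coef[0]), coef[1]
print(f"fitted contact stiffness k = {k_fit:.3e}, exponent n = {n_fit:.3f}")

# Rough parameter uncertainty by bootstrap resampling of the measurements
boot = []
for _ in range(2000):
    i = rng.integers(0, delta.size, delta.size)
    c, *_ = np.linalg.lstsq(A[i], np.log(force[i]), rcond=None)
    boot.append([np.exp(c[0]), c[1]])
print("bootstrap std of [k, n]:", np.array(boot).std(axis=0))
```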

Chapter 32. Extraction of Coupling Stiffness of Specimens Printed with Selective Laser Melting Using Modal Analysis

Modal analysis is an affordable form of nondestructive evaluation (NDE) for many forms of manufacturing. Developments in Additive Manufacturing (AM) have enabled the printing of sophisticated metal parts in processes like Selective Laser Melting (SLM). In most metallic AM processes fabrication is conducted on a build plate, which allows for convenient fixturing for modal analysis. However, many build plates contain multiple parts, which introduces the challenge of dynamic coupling, where the dynamic characteristics of other specimens appear in the frequency analysis of the specimen being analyzed. This dynamic coupling can obscure the analysis, especially for prints with multiple identical parts. This work sets a foundation for a method to improve the modal analysis of multiple AM parts fabricated on a single build plate by estimating the coupling stiffness between two specimens with similar modal characteristics.

Brian West, Nicholas E. Capps, James S. Urban, Troy Hartwig, Ben Brown, Douglas A. Bristow, Robert G. Landers, Edward C. Kinzel

Chapter 33. Quantification of Dynamic Model Validation Metrics Using Uncertainty Propagation from Requirements

The Space Launch System (SLS), NASA's new large launch vehicle for long-range space exploration, is presently in the final design and construction phases, with the first launch scheduled for 2019. A dynamic model of the system has been created and is critical for the calculation of interface loads and of the natural frequencies and mode shapes used for guidance, navigation, and control (GNC). Because of program and schedule constraints, a single modal test of the SLS will be performed while it is bolted down to the Mobile Launch Pad just before the first launch. A Monte Carlo and optimization scheme will be used to create thousands of possible models based on given dispersions in model properties and to determine which model best fits the natural frequencies and mode shapes from the modal test. However, the question still remains as to whether this model is acceptable for the loads and GNC requirements. An uncertainty propagation and quantification (UP and UQ) technique that develops a quantitative set of validation metrics based on the flight requirements has therefore been developed and is discussed in this paper. There has been considerable research on UQ, UP and validation in the literature, but very little on propagating the uncertainties from requirements, so most validation metrics are "rules of thumb"; this research seeks to provide more reason-based metrics. One of the main assumptions used to achieve this task is that the uncertainty in the modeling of the fixed boundary condition is accurate, so that same uncertainty can be used in propagating from the fixed-test configuration to the free-free actual configuration. The second main technique applied here is the use of the limit-state formulation to quantify the final probabilistic parameters and to compare them with the requirements. These techniques are explored with a simple lumped spring-mass system and a simplified SLS model. When completed, it is anticipated that this requirements-based validation metric will provide a quantified confidence and probability of success for the final SLS dynamics model, which will be critical for a successful launch program, and it can be applied in the many other industries where an accurate dynamic model is required.

Andrew M. Brown, Jeffrey A. Peck, Eric C. Stewart
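
The limit-state idea is easy to illustrate on the lumped spring-mass case the chapter mentions: propagate assumed dispersions on stiffness and mass to the first natural frequency, express a (made-up) requirement as an allowed frequency band, and evaluate the probability that the limit-state function goes negative. All distributions and limits below are placeholders, not program values.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500_000

# Assumed dispersions on a lumped spring-mass model (illustration only)
k = rng.normal(4.0e7, 2.0e6, n)        # stiffness [N/m]
m = rng.normal(1000.0, 30.0, n)        # mass [kg]
f = np.sqrt(k / m) / (2 * np.pi)       # first natural frequency [Hz]

# Requirement expressed as an allowed frequency band (placeholder numbers)
f_lo, f_hi = 30.0, 34.0

# Limit-state formulation: g > 0 means the requirement is met
g = np.minimum(f - f_lo, f_hi - f)
print(f"predicted frequency: {f.mean():.2f} Hz +/- {f.std():.2f} Hz")
print(f"probability of violating the requirement band: {np.mean(g < 0):.3e}")
```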

Chapter 34. Natural Frequency Testing and Model Correlation of Rocket Engine Structures in Liquid Hydrogen: Phase I, Cantilever Beam

Many structures in the launch vehicle industry operate in liquid hydrogen (LH2), from the hydrogen fuel tanks through the ducts and valves and into the pump sides of the turbopumps. Calculating the structural dynamic response of these structures is critical for successful qualification of this hardware, but accurate knowledge of the natural frequencies is based entirely on numerical or analytical predictions of frequency reduction due to the added-fluid-mass effect because testing in LH2 has always been considered too difficult and dangerous. This fluid effect is predicted to be approximately 4–5% using analytical formulations for simple cantilever beams. As part of a comprehensive test/analysis program to more accurately assess pump inducers operating in LH2, a series of frequency tests in LH2 are being performed at NASA/Marshall Space Flight Center’s unique cryogenic test facility. These frequency tests are coupled with modal tests in air and water to provide critical information not only on the mass effect of LH2, but also the cryogenic temperature effect on Young’s Modulus for which the data is not extensive. The authors are unaware of any other reported natural frequency testing in this media. In addition to the inducer, a simple cantilever beam was also tested in the tank to provide a more easily modeled geometry as well as one that has an analytical solution for the mass effect. This data will prove critical for accurate structural dynamic analysis of these structures, which operate in a highly-dynamic environment.

Andrew M. Brown, Jennifer L. DeLessio, Preston W. Jacobs

Chapter 35. Optimal Maintenance of Naval Vessels Considering Service Life Uncertainty

Life-cycle analysis is increasingly being used to assess the structural performance and total cost incurred during the service life of naval vessels. Decision making on maintenance scheduling is based on minimizing life-cycle cost and risk metrics. However, existing life-cycle frameworks generally consider a deterministic service life in the analysis process. For budgetary reasons, naval vessels are often required to extend their service life beyond the originally designed life. Management plans based on deterministic service life concepts do not consider the uncertainties associated with service life assessment and prediction. Structural aging can have significant effects on operational costs and risks during the service life extension. This paper presents the modeling of service life uncertainty and its impact on the life-cycle management of naval vessels. The proposed framework aims to provide robust maintenance strategies addressing service life uncertainties. The presented approach is applied to the fatigue of a high-speed naval vessel.

Yan Liu, Dan M. Frangopol

Chapter 36. On the Monitoring-Driven Assessment of Engineered Systems

The life-cycle management of structural systems operating under diverse loads involves the tasks of simulation (forward engineering), identification (inverse engineering) and maintenance/control actions. The efficient and successful implementation of these tasks is however non-trivial, due to the ever-changing nature of these systems, and the variability in their interactive environment. Two defining factors in understanding and interpreting such large-scale systems are nonlinear behavior and structural uncertainty. The former is related to the external dynamic loading that might shift the structural response from purely linear to nonlinear regimes, while the latter is related to erroneous modeling assumptions, imprecise sensory information, ageing effects, and lack of a priori knowledge of the system itself. This paper discusses implementation of methods and tools able to tackle the aforementioned challenges. Among other topics, the use of surrogate models and Bayesian-type filters for the reduced representation and identification of uncertain and nonlinear structural systems is discussed.

Eleni N. Chatzi, Vasilis K. Dertimanis