2019 | OriginalPaper | Chapter

15. Light Field Imaging of Three-Dimensional Structural Dynamics

Authors: Benjamin Chesebrough, Sudeep Dasari, Andre Green, Yongchao Yang, Charles R. Farrar, David Mascareñas

Published in: Structural Health Monitoring, Photogrammetry & DIC, Volume 6

Publisher: Springer International Publishing


Abstract

Real-world structures, such as bridges and skyscrapers, are often subjected to dynamic loading and changing environments. It is therefore prudent to measure high-resolution vibration data, both to perform accurate damage detection and to validate and update models of the operating structure (e.g., finite element models). Many existing vibration measurement methods are either low resolution (e.g., accelerometers or strain gauges) or time- and labor-intensive to deploy in the field (e.g., laser interferometry). Previous work by Yang et al. has shown that low-cost digital video cameras, enhanced by advanced computer vision and machine learning algorithms, can extract very high-resolution dynamic information about a structure and perform damage detection at novel scales in a relatively efficient and unsupervised manner. Notably, that work used a machine learning pipeline that made minimal assumptions about lighting conditions or the nature of the structure in order to perform modal decomposition. The technique is limited to two dimensions, however, when only a single digital video camera is used. This paper uses light field imagers (camera systems that capture the direction from which light enters the camera) to make depth measurements of scenes and extend the modal analysis technique of Yang et al. to three dimensions. The new method is verified experimentally on vibrating cantilever beams with out-of-plane vibration, whose full-field modal parameters are extracted from the light field measurements. The experimental results are discussed, and limitations are identified for future work.
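The core idea of extracting modal parameters from per-pixel displacement time series can be illustrated with a minimal sketch. This is my own illustration, not the authors' pipeline: it uses an SVD as the blind separation step on a synthetic two-mode beam response, and all frequencies, amplitudes, and sampling settings below are assumed for the example.

```python
import numpy as np

# Assumed sampling settings for this synthetic example.
fs, n_frames, n_pixels = 240.0, 2048, 100          # frame rate (Hz), frames, pixels
t = np.arange(n_frames) / fs
x = np.linspace(0.0, 1.0, n_pixels)                # normalized positions along the beam

# Two synthetic modes: orthogonal sine shapes driven at distinct frequencies.
shapes = np.stack([np.sin(np.pi * x), np.sin(2 * np.pi * x)])    # (2, n_pixels)
coords = np.stack([1.0 * np.sin(2 * np.pi * 3.0 * t),            # mode 1 at 3 Hz
                   0.3 * np.sin(2 * np.pi * 11.0 * t)])          # mode 2 at 11 Hz

# Displacement field: each pixel's time series is a mixture of the
# modal coordinates, plus a little measurement noise.
rng = np.random.default_rng(0)
D = shapes.T @ coords + 0.01 * rng.standard_normal((n_pixels, n_frames))

# SVD separates the field into spatial mode shapes (columns of U)
# and temporal modal coordinates (rows of Vt).
U, s, Vt = np.linalg.svd(D, full_matrices=False)

# Each modal frequency is the FFT peak of its temporal coordinate.
freqs = np.fft.rfftfreq(n_frames, d=1.0 / fs)
est = sorted(freqs[np.argmax(np.abs(np.fft.rfft(Vt[k])))] for k in range(2))
print(est)  # two estimated modal frequencies, close to 3 Hz and 11 Hz
```

With only one camera, the mode shapes recovered this way are two-dimensional projections; the paper's contribution is supplying the missing depth coordinate from light field measurements so the same decomposition yields three-dimensional shapes.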
Literature
1.
Lumsdaine, A., Georgiev, T.: The focused plenoptic camera. IEEE, pp. 1–8, San Francisco, CA (2009)
4.
Yang, Y., et al.: Blind identification of full-field vibration modes from video measurements with phase-based video motion magnification. Mech. Syst. Signal Process. 85, 567–590 (2017)
5.
Zeller, N., Quint, F., Stilla, U.: Depth estimation and camera calibration of a focused plenoptic camera for visual odometry. ISPRS J. Photogramm. Remote Sens. 118, 83–100 (2016)
Metadata
DOI
https://doi.org/10.1007/978-3-319-74476-6_15
