Published in: Experiments in Fluids 8/2021

Open Access 01-08-2021 | Research Article

Advanced iterative particle reconstruction for Lagrangian particle tracking

Authors: Tobias Jahn, Daniel Schanz, Andreas Schröder



Abstract

The method of iterative particle reconstruction (IPR), introduced by Wieneke (Meas Sci Technol 24:024008, 2013), constitutes a major step toward Lagrangian particle tracking in densely seeded flows (Schanz et al. in Exp Fluids 57:1–27, 2016). Here we present novel approaches in several key aspects of the algorithm, which, in combination, triple the working range of IPR in terms of particle image densities. The updated method is proven to be fast, accurate and robust against image noise and other imaging artifacts. Most of the proposed changes to the original processing are easy to implement and come at low computational cost. Furthermore, a bundle adjustment scheme that simultaneously updates the 3D locations of all particles and the camera calibrations is introduced. While the particle position optimization proved to be more effective using localized ‘shake’ schemes, this so-called global shake scheme constitutes an effective measure to correct for decalibrations and vibrations, acting as an in-situ single-image volume-self-calibration. Further optimization strategies using such approaches are conceivable.

Notes
This paper presents an in-depth analysis of the working scheme and the performance of advanced IPR in different imaging conditions.


1 Introduction

Determining the three-dimensional position of small, point-like objects from their projection on several (few) cameras has been the subject of many developments in the last decades. Particularly in the field of volumetric optical flow measurements, various schemes for the accurate reconstruction of particle positions were explored. Here, the flow of a fluid—otherwise invisible—is revealed by adding small tracer particles (‘seeding the flow’). The projections of the particles are recorded by their scattered light over several time-steps with multiple (typically 3–8) cameras. Finally, the flow field is estimated from their imaged motion. Depending on the examined flow and the equipment, the number of recorded consecutive time-steps varies from two (double-pulse recordings), through 4–8 (multi-pulse recordings), to sometimes hundreds of thousands (fully time-resolved recordings).
In principle, the higher the concentration of the seeding, the better the spatial resolution of the reconstructed flow field. The particle image density NI is typically expressed in terms of ‘particles per pixel’ (ppp) on a single camera. While high particle numbers are desirable, the reconstruction process gets increasingly difficult and a growing number of non-existing particles (ghost particles) arises in the reconstruction due to the underdetermined nature of the problem. This process heavily depends on the number of imaged particles (NP), the diameter of the imaged particles (or, more generally, the image source density), the number of cameras (NC) and the resolution of each camera (Elsinga et al. 2011).
For the three-dimensional reconstruction of the particle positions, as well as the estimation of the underlying flow field, several techniques have been developed. First approaches used manual tracking of individual particles, dating back to the early and mid-twentieth century (Nayler and Frazer 1917, Chiu and Rib 1956). The first attempts of three-dimensional particle tracking velocimetry (3D-PTV, e.g., Nishino et al. 1989) applied triangulation using epipolar lines (Maas et al. 1993) and connected the reconstructed particle clouds of each time step to Lagrangian particle trajectories by several approaches, e.g., minimizing the acceleration over four consecutive frames (Malik et al. 1993). The triangulation process is susceptible to the reconstruction of ghost particles. In order to minimize false particle tracks, 3D PTV was typically applied on images with NI < 0.005 ppp (at NC = 4, which is used as the standard case for the rest of the manuscript), which for most flows does not yield instantaneous results at sufficient spatial resolution, therefore limiting the attainable results to Lagrangian and average flow properties.
The desire to resolve instantaneous flow structures volumetrically led to the development of tomographic particle image velocimetry (Tomo PIV, Elsinga et al. 2006). The approach to tomographically reconstruct the volume in a so-called voxel-space (Atkinson and Soria 2009), followed by a 3D cross-correlation, was later extended by methods using the temporal information, like MTE (Novara et al. 2010) or SMTE (Lynch and Scarano 2015). Hybrid approaches, combining aspects of tomographic and particle-based reconstruction, emerged (Schröder et al. 2011; Novara and Scarano 2013; Cornic et al. 2020).
Due to the multiplicative nature of the algorithm, the accuracy of the camera system calibration with a (multi-plane) calibration target proved insufficient for the tomographic reconstruction. Volume self-calibration (VSC, Wieneke 2008) uses triangulation of a sparse ensemble of particles to correct global and local decalibrations of the camera system, decreasing the error from \(O\)(1 pixel) to below 0.1 pixel (px). Building upon VSC, a calibration procedure of the optical transfer function (OTF) (Schanz et al. 2013a) of the particles was developed, accounting for the often-varying imaging conditions across the measurement volume between the different cameras and within a single camera frame.
The introduction of the iterative particle reconstruction (IPR, Wieneke 2013) overcame the limitations of 3D-PTV and was a first major step toward performing Lagrangian particle tracking (LPT) with high particle image densities (Schanz et al. 2013b) comparable to Tomo PIV. IPR represents an iterative approach to the triangulation procedure, with an intermediate position optimization and a corresponding removal of apparent ghost particles. These measures made it possible to reconstruct particle point clouds from images with initially NI ≤ 0.05 ppp with reduced occurrence of ghost particles and higher position accuracies, compared to a MART reconstruction. When NI is increased further, the results of single-image processing deteriorate rapidly. The position optimization (here also termed ‘shaking’ due to the process of moving the particles around in 3D space) is performed for each particle individually using image matching techniques; hence, the calibration demands are even higher, compared to Tomo PIV. An accurate VSC and OTF calibration is mandatory for IPR.
Both VSC and OTF calibration rely on determining 3D particle positions, for which they apply triangulation. As the particle position information is intrinsically available within IPR, it is conceivable to substitute the separate determination of VSC and OTFs with an integrated process, where the calibration and the particle imaging are determined alongside the optimal particle positions within a bundle adjustment process. Such a solution can dynamically adapt to changing imaging conditions, induced by, e.g., vibrations or varying density gradients. A first step in this direction is introduced in this publication, applying an LBFGS algorithm to simultaneously optimize particle positions and correct for camera vibrations.
IPR is actively used in the LPT method ‘Shake-The-Box’ (STB, Schanz et al. 2016), which uses an elaborate predictor/corrector scheme to integrate the temporal domain into the reconstruction process. It has been shown that this scheme converges to a solution where the vast majority of particles is tracked, ghost particles are almost non-existent and—due to the absence of ghost particles—a high position accuracy is attained. This process has been shown to work at particle image densities up to NI = 0.2 ppp for synthetic data (Raffel et al. 2018). Recent international competitions (Kähler et al. 2016; van Gent et al. 2017) show distinct advantages over TOMO-PIV in terms of accuracy, spatial resolution, reconstruction completeness and computer resources. Furthermore, Lagrangian properties (such as accelerations/material derivatives) needed for, e.g., pressure estimation schemes are directly available (van Gent et al. 2017).
Experimentally, STB has been applied at particle image densities NI > 0.1 ppp (Huhn et al. 2017; Bosbach et al. 2019). Recently, an open-source version of the algorithm was developed (Tan et al. 2020). For high-speed flows, where time-resolved recordings cannot be conducted, the method was adapted to a multi-pulse reconstruction strategy (Novara et al. 2016a, 2019a, b). A joint optimization of two time-steps proved to improve the reconstruction quality for both instances (Kähler et al. 2016; Lasinger et al. 2019).
The availability of highly accurate and sufficiently dense particle trajectories spawned the development of data assimilation schemes, which incorporate constraints from flow physics into the interpolation procedures, thus enhancing the spatial resolution beyond the sampling provided by the particles. Two examples for such data assimilation methods are the FlowFit algorithm (Gesemann et al. 2016) and the VIC+ scheme (Schneiders and Scarano 2016).
With the IPR algorithm as one key element of the most recent advances in LPT, it is evident that an improved IPR performance will have a direct impact on the reconstruction and tracking performances of several recent methods. The present publication introduces a number of advancements of the IPR idea in several key domains.
Section 2 discusses some prerequisites needed by the IPR processing. Section 3 describes the working principle of IPR, gives in-depth introductions to all main steps and highlights the novel aspects introduced in this work. Section 4 introduces a synthetic test case, presents the results of our advanced IPR when reconstructing the generated images with varying imaging conditions, quantifies the effect of each updated approach and discusses the influence of major parameters. Section 5 demonstrates the capabilities of the novel ‘global shake’ approach to instantaneously correct decalibrations of the camera system as part of the reconstruction. Section 6 lists various experimental investigations, which applied the advanced IPR processing. Finally, Sect. 7 summarizes the findings and gives an outlook on further ideas to improve the particle reconstruction.

2 Prerequisites

The underlying problem IPR aims to solve is to find a 3D particle distribution \(\left( {x,y,z,I} \right)_{i}\) for \(i = 1 \ldots N_{{\text{P}}}\), that explains a set of measured particle images, simultaneously recorded from multiple cameras from different directions. Here, x, y and z are the three spatial coordinates, I is the particle intensity and NP the number of particles. To optimally solve this problem, IPR relies on precise knowledge of the viewing directions of all cameras and the imaging properties of the particles on each camera (optical transfer function).
Given a state vector \(\left( {x,y,z,I} \right)_{i}\), a camera mapping function \(\left( {u_{i} ,v_{i} } \right) = {\text{cam}}\left( {\left( {x,y,z} \right)_{i} } \right)\) and a calibrated 2D particle image shape as a function of the 3D coordinate, \({\text{shape}}_{i} = {\text{otf}}\left( {\left( {x,y,z} \right)_{i} } \right)\), the reprojected camera image can be calculated by adding up the images of all particles:
$${\text{rep}}_{uv} = \mathop \sum \limits_{i} I_{i} {\text{shape}}_{i} \left( {u - u_{i} ,v - v_{i} } \right)$$
(1)
with \(\left( {u,v} \right)\) being the 2D image coordinates. Due to the finite spatial extent of the particle shapes, it is sufficient to sum only nearby particles for each pixel \(uv\) (a typical particle image diameter is 3–6 pixels). To find \(\left( {x,y,z,I} \right)_{i}\), cam and otf such that repuv optimally resembles the measured images imguv, the following concepts are applied:

2.1 Camera model and calibration

IPR only reconstructs the particle distribution \(\left( {x,y,z,I} \right)_{i}\) and therefore relies on prior knowledge of the camera system. Typically, we aim at reconstructing the particle positions with accuracies below 0.1 px (much smaller for noise-free synthetic data). Therefore, we have high requirements on the calibration of the camera system, which are typically met by performing a volume-self-calibration (VSC, Wieneke 2008) on top of the geometrical calibration using a 3D calibration plate.
The resulting correspondences between world coordinates and image coordinates are used to optimize the parameters of a camera model. This model represents the camera cam as a function that projects from 3D world coordinates onto the 2D image coordinates, as well as a function cam−1 that projects from 2D image coordinates onto 3D lines of sight (each represented by two points lying on the line). Further, the Jacobian matrix J with the derivatives of the camera function is required for the optimization of the IPR particles and the calibration of the cameras.
The actual implementation of the camera model can be freely chosen, as long as the aforementioned properties are fulfilled. For the synthetic test cases below, we use a perspective camera (akin to a pin-hole model). For experimental cases, such a model can be amended with second- and/or fourth-order correction terms.

2.2 Optical transfer function (OTF)

The images of illuminated particles typically vary in shape and intensity, depending on their position within the measurement volume and the different viewing angles of the cameras. The attainable position accuracy of an image matching process highly depends on the conformity of the two involved images, the re-projected and the original particle image. Therefore, the imaging properties of the particles are calibrated using the methods described in (Schanz et al. 2013a). The resulting average shapes of the particle images within several sub-volumes are modeled as a 2D Gaussian with four parameters for each camera:
$${\text{otf}}\left( {u,v} \right) = I_{a,r} e^{{ - 0.5\left( {u^{2} a + 2buv + cv^{2} } \right)}}$$
(2)
The parameters \(a\), \(b\) and \(c\) encode the circular or elliptical shape and orientation of the Gaussian and the parameter \(I_{a,r}\) the average brightness of the particles on the current camera, relative to the other cameras. Since the particle images are assumed to be diffraction limited, the four parameters are solely dependent on the particle's 3D position and the local OTF of the respective camera.
Within the IPR processing, the calibrated shape parameters are used as kernels around the projection points of the reconstructed particles while creating the projected images [as per Eq. (1)].
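As an illustration, the rendering of Eq. (1) with the Gaussian OTF kernel of Eq. (2) can be sketched as follows. This is a minimal Python sketch, not the authors' C++ implementation; the function name and the kernel_radius cutoff (exploiting the finite extent of the particle images) are illustrative.

```python
import numpy as np

def render_reprojection(shape, u, v, I, a, b, c, I_ar, kernel_radius=4):
    """Sum Gaussian particle images onto an (ny, nx) image, as per Eq. (1)."""
    rep = np.zeros(shape)
    ny, nx = shape
    for ui, vi, Ii in zip(u, v, I):
        # only render the immediate proximity of the projected position
        u0, u1 = max(0, int(ui) - kernel_radius), min(nx, int(ui) + kernel_radius + 1)
        v0, v1 = max(0, int(vi) - kernel_radius), min(ny, int(vi) + kernel_radius + 1)
        if u1 <= u0 or v1 <= v0:
            continue
        uu, vv = np.meshgrid(np.arange(u0, u1) - ui, np.arange(v0, v1) - vi)
        # 2D Gaussian OTF kernel, Eq. (2)
        kern = I_ar * np.exp(-0.5 * (a * uu**2 + 2 * b * uu * vv + c * vv**2))
        rep[v0:v1, u0:u1] += Ii * kern
    return rep
```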

2.3 Rendering of reprojected and residual images

The task to optimally reproduce the measured image imguv by a reprojection of a 3D particle cloud is tackled by reducing the pixel-wise squared distance between imguv and repuv for each camera with as few and as bright particles as possible. To that end, we define the residual image resuv as the pixel-wise difference between imguv and repuv:
$${\text{res}}_{uv} = {\text{img}}_{uv} - {\text{rep}}_{uv}$$
(3)
With this residual image, the cost function on camera j becomes
$${\text{cost}}_{j} = \mathop \sum \limits_{uv} \left| {{\text{img}}_{uv} - {\text{rep}}_{uv} } \right|^{2}$$
(4)
and the total cost of the reconstruction is the sum of the costs of each camera
$${\text{cost}} = \mathop \sum \limits_{j} {\text{cost}}_{j} .$$
(5)
In an ideal situation, resuv would be completely void of residuals of particles already contained in the state vector. However, due to particle imaging issues (noise, deviation of a particle image from the calibrated OTF, non-Gaussian particle images, particles in outer regions only visible by a subset of cameras, etc.) and due to the existence of ghost particles (at least in the first iterations of IPR), residuals of the images of already reconstructed particles are common.
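The residual and cost of Eqs. (3)–(5) then follow directly from the rendered reprojections; a minimal sketch (function names are illustrative, building on the render_reprojection helper above):

```python
import numpy as np

def residual_image(img, rep):
    return img - rep                       # Eq. (3)

def camera_cost(img, rep):
    return np.sum((img - rep) ** 2)        # Eq. (4)

def total_cost(imgs, reps):
    # Eq. (5): sum of the per-camera costs
    return sum(camera_cost(i, r) for i, r in zip(imgs, reps))
```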

3 Iterative particle reconstruction

The IPR method is an iterative procedure for the accurate reconstruction of particle positions. At the center of the IPR is the state vector \(\left( {x,y,z,I} \right)_{i} ,\) containing the interim estimation for the position \(x,y,z\) and intensity I of each of the NP presumed particles. During the reconstruction, this state vector gets iteratively replenished with potentially missing particles, purged of potential ghost particles and optimized in terms of position and intensity to best match the observed measurement.
The IPR workflow is depicted in Fig. 1 and the individual steps are described below:
1. All peaks present on the current images (being the original measurements in the first iteration and residual images in all following iterations) are detected using a 2D peak detection scheme.
2. Potential 3D particle positions are triangulated from the found 2D peaks and added to the state vector. Due to ambiguities, inaccuracies of the peak detection, noisy images and overlapping particle images, the newly added particles might exhibit a significant fraction of ghost particles and suffer from positional inaccuracies.
3. The position and intensity of each particle are optimized individually to best match the measured images on all cameras simultaneously, using image matching methods.
4. The state vector is filtered for potential ghost particles, mainly based on the particle intensity (deleting particles that fall below an intensity threshold).
5. The current interim solution is rendered by projecting all particles onto virtual camera images, as given by the calibration. These projected images IP are subtracted from the recorded measurement Irec, yielding the residual images IR.
The process starts anew with peak detection in IR. Steps 1–5 are repeated until convergence is reached or for a predefined number of iterations. The key updates to the IPR process introduced in this work are:
1. For the image matching procedure (shaking), the numerical estimation of the derivatives is substituted by analytical derivatives of the cost function. This enables a direct determination of the gradient (and the Hessian), thereby limiting the number of required evaluations of the cost function (see Sects. 3.3 and 4.9).
2. It is demonstrated that the completeness of the triangulation process benefits from using several permutations of the camera order, as the triangulation result strongly depends on the first two cameras and their relative angle (see Sects. 3.2.1 and 4.6).
3. Instead of using a fixed triangulation error ε for all iterations of the IPR procedure, a linearly increasing value of ε is applied, which aids in suppressing the generation of ghost particles, as shown in Sects. 3.2.2 and 4.8.
4. It is shown that ghost particles can be very effectively suppressed by excluding the camera(s) in which a particle appears brightest from the optimization of its intensity (see Sects. 3.3.2 and 4.7).
5. In addition to the particle-wise optimization procedure (shaking each single particle independently), an ensemble approach is presented, which optimizes the positions of all particles at once (‘global shake’). Additionally, this procedure can be used to optimize the camera positions, thus representing a means to perform an in-situ single-image calibration correction (see Sects. 3.6 and 5).
The five main steps of the IPR method and the changes introduced in this publication are described in-depth in Sects. 3.1 to 3.5. Section 3.6 introduces the global shake method.
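Before the individual steps are detailed, the overall structure of the outer and inner iterations can be summarized in a condensed sketch. This is not the authors' implementation; the step functions are passed in as callables (hypothetical placeholders for the procedures of Sects. 3.1–3.5), and eps_schedule holds the allowed triangulation error per outer iteration (see Sect. 3.2.2).

```python
def ipr_loop(images, detect_peaks, triangulate, shake, filter_particles,
             render_residuals, eps_schedule, n_shake=10):
    """Condensed outline of the IPR workflow of Fig. 1 (illustrative only)."""
    particles = []                      # state vector (x, y, z, I)_i
    residuals = images                  # first iteration works on the measurements
    for eps in eps_schedule:            # outer IPR iterations
        peaks = [detect_peaks(res) for res in residuals]        # step 1
        particles = particles + triangulate(peaks, eps)         # step 2
        for _ in range(n_shake):        # inner iterations (shake + filter)
            residuals = render_residuals(images, particles)
            particles = [shake(p, residuals) for p in particles]    # step 3
            particles = filter_particles(particles, residuals)      # step 4
        residuals = render_residuals(images, particles)          # step 5
    return particles, residuals
```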

3.1 Peak detection

The detection of particle images on the camera images is carried out using a custom procedure that interprets the pixel values of the image as the coefficients of a 2D B-spline and thereby effectively smooths the image. While the smoothing reduces noise, the splines also provide an interpolation between the pixels of the image and spatial derivatives. The peak detection iterates over the B-spline grid, looking for bright spots above a certain threshold Pmin. For every bright spot, a Levenberg–Marquardt algorithm (Moré 1978) is used to find the associated maximum with subpixel accuracy (approximately on the order of 0.1 px). Furthermore, the shape of each found peak is estimated, suitable for calibrating the OTF.
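As a simplified stand-in (the paper uses the B-spline representation with Levenberg–Marquardt refinement described above), a peak search with a plain Gaussian smoothing, local-maximum detection and a three-point parabolic sub-pixel fit can be sketched as follows; all names are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def find_peaks(img, p_min, sigma=1.0):
    """Return (u, v, value) tuples for local maxima above p_min (sketch only)."""
    sm = gaussian_filter(img.astype(float), sigma)
    is_max = (sm == maximum_filter(sm, size=3)) & (sm > p_min)
    peaks = []
    for v, u in np.argwhere(is_max):
        if 0 < u < img.shape[1] - 1 and 0 < v < img.shape[0] - 1:
            # parabolic sub-pixel correction in u and v
            du = 0.5 * (sm[v, u - 1] - sm[v, u + 1]) / (sm[v, u - 1] - 2 * sm[v, u] + sm[v, u + 1])
            dv = 0.5 * (sm[v - 1, u] - sm[v + 1, u]) / (sm[v - 1, u] - 2 * sm[v, u] + sm[v + 1, u])
            peaks.append((u + du, v + dv, sm[v, u]))
    return peaks
```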

3.2 Triangulation

From the peaks detected on the (residual) images of the different cameras, the 3D positions of the underlying particle cloud have to be estimated. To this end, given a certain peak on one camera, the corresponding peaks on the other cameras, belonging to the same particle, have to be identified. This can be achieved by the triangulation procedure, which is based on epipolar geometry and estimates a set of possible 3D particle positions given detected 2D particle peaks from a set of multiple cameras (see Fig. 2 for a scheme).
Several effects limit the abilities of the triangulation procedure:
  • With increasing particle image densities, the reconstruction of false (ghost) particles rises significantly.
  • Overlapping particle images and image noise inhibit a precise localization of 2D peaks, leading to real particles not being triangulated or being displaced, even if only one camera is affected.
  • In case the first two cameras are separated by only a small space angle, the projected line of sight becomes very short and the accuracy in the depth direction is compromised, inhibiting a detection of the peak on the other camera images.
The IPR procedure as introduced by Wieneke (2013) tries to counteract some of the mentioned drawbacks:
  • The iterative approach of triangulating on residual images and shaking the particles to optimal positions lessens the effects of overlapping particle images and noise. Particles that could not be triangulated in the initial iterations might appear later, once other particles, whose images caused an overlap-situation, have been triangulated and their images have been removed from the residual image. Then all particles can be optimally positioned by the shaking procedure.
  • In order to lessen the problem of overlapping particles, Wieneke (2013) introduced some final triangulation iterations, which operate on a reduced set of cameras (a particle needs to be visible only on NC − 1 cameras, instead of the full number of cameras NC). This way, particles whose image is compromised on a single camera (noise, overlap with other particle images) are still being triangulated. The drawback is that the creation of ghost particles is amplified. Therefore, this measure was only applied during the last IPR iterations, where the residual images already show a heavily reduced particle image density.
Here, we introduce two additional measures to improve the triangulation results: performing multiple triangulations, each with a different permutation of the camera order, and linearly increasing the allowed triangulation error ε:

3.2.1 Camera permutations

As indicated before, the first two cameras have a special role in the triangulation process. The space angle between these cameras determines how accurately the 3D position can be estimated, which is used to query the other cameras. Also, in case the currently investigated peak overlaps with another one on either of these two cameras, the accuracy of the assumed 3D position can be heavily corrupted. With increasing particle number, a successful identification of the majority of particle images becomes less and less likely due to particle image overlap (Michaelis et al. 2010; Cierpka et al. 2013). Furthermore, for a particle to be recognized, its image needs to be identified in both of the first two cameras. This can be especially problematic if certain regions—or even the whole image—show reduced intensity on cameras 1 or 2 (due to, e.g., an unfavorable viewing angle).
To counteract all these effects, we substitute a typical triangulation, using a fixed camera order (e.g., 1–2–3–4), with a series of triangulations with permuted camera order, such that all camera combinations appear once on first and second position (e.g., 1–2–3–4; 2–3–4–1; 3–4–1–2; 4–1–2–3; 1–3–2–4; 2–4–3–1). The order of the further cameras is of no relevance. Only particles without a direct neighbor in the cloud of already triangulated particles are accepted; typically, a minimum distance of 1 px is applied. This way, multiple triangulations of the same particle are avoided. In case of experimental setups using many cameras (e.g., 6 or more), the number of triangulations could be limited by manually selecting a subset of permutations, based on the space angle between the first and the second camera in each respective permutation.
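A possible way of generating such camera orders is sketched below (illustrative Python; camera indices are zero-based, and the resulting orders differ from the example above only in the arrangement of the trailing cameras, which is irrelevant for the triangulation):

```python
from itertools import combinations

def camera_permutations(n_cams):
    """One camera order per unordered pair, placed in the first two positions."""
    orders = []
    cams = list(range(n_cams))
    for c1, c2 in combinations(cams, 2):
        rest = [c for c in cams if c not in (c1, c2)]
        orders.append([c1, c2] + rest)
    return orders

# camera_permutations(4) yields six orders, one for each of the six camera pairs
```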
The effect of the permutation procedure is documented in Sect. 4.6.

3.2.2 Linear increase of triangulation error

The IPR implementation as introduced by Wieneke (2013) applies the same value of the allowed triangulation error ε in all IPR iterations. Instead, we apply a ramp of ε, linearly increasing over the NIPR outer iterations: \(\varepsilon \left( i \right) = \varepsilon \left( 1 \right) + \left( {i - 1} \right)/\left( {N_{{{\text{IPR}}}} - 1} \right)*\Delta \varepsilon\), with \(\Delta \varepsilon = \varepsilon \left( {N_{{{\text{IPR}}}} } \right) - \varepsilon \left( 1 \right)\). The start and end values, ε(1) and ε(NIPR), can be defined according to the experimental conditions. This approach minimizes the creation of ghost particles, especially in the first iterations, where the residual image is still very populated, and allows working with a reduced set of cameras in all iterations, reducing the number of parameters. The effect of this procedure is documented in Sect. 4.8.
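For reference, the ramp can be written as a small helper (a sketch with a 1-based iteration index; the default start and end values correspond to those used for the synthetic cases in Sect. 4.1):

```python
def eps_ramp(i, n_ipr, eps_start=0.4, eps_end=0.8):
    """Allowed triangulation error for outer iteration i = 1 ... n_ipr."""
    return eps_start + (i - 1) / (n_ipr - 1) * (eps_end - eps_start)
```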

3.3 Optimization of position and intensity (shaking particles)

The triangulation process is limited by the accuracy of the peak detection and does not suppress ghost particles. Therefore, the positions and intensities of the particles are further optimized using image matching schemes, which iteratively adjust the position and intensity of the particles to minimize the difference between the original camera images and the reprojection of the particles (i.e., the residual images).
The original IPR as introduced in (Wieneke 2013) minimizes the cost function by iteratively optimizing the position \(x_{i} ,y_{i} ,z_{i}\) and intensity Ii of each particle individually, using a procedure roughly equivalent to a Newton fitting algorithm with numerically determined first and second derivatives for \(x_{i} ,y_{i} ,z_{i}\) and a redetermination of the current optimal intensity Ii in each step. To this end, each single particle is displaced in all three directions of space by ± 0.1 pixels and the local cost is determined at all of these positions. A second-order polynomial fit is applied to the residuals and the particle is moved to the position of minimal residual (with a maximum step of 0.1 px) in each direction of space.
This approach requires estimating the local cost around each particle seven times per camera in each iteration. Depending on the image size and the form of the OTF, this can be an expensive operation; therefore, it is desirable to minimize the number of evaluations. Secondly, for best results the numerical derivatives need to be estimated with high precision, for which the recommended remedy is to use analytic derivatives whenever possible.
We substitute the optimization using numerical derivatives with analytic derivatives of the cost function. For this, we first calculate the derivative with respect to the projected particle positions ui and vi and the particle intensity Ii of particle i on camera j:
$$\frac{{\text{d}}}{{{\text{d}}u_{i} }}{\text{cost}}_{j} = 2I_{i} \mathop \sum \limits_{uv} {\text{res}}_{uv} \frac{{\delta {\text{shape}}_{i} }}{\delta u}\left( {u - u_{i} ,v - v_{i} } \right)$$
(6)
$$\frac{{\text{d}}}{{{\text{d}}v_{i} }}{\text{cost}}_{j} = 2I_{i} \mathop \sum \limits_{uv} {\text{res}}_{uv} \frac{{\delta {\text{shape}}_{i} }}{\delta v}\left( {u - u_{i} ,v - v_{i} } \right)$$
(7)
$$\frac{{\text{d}}}{{{\text{d}}I_{i} }}{\text{cost}}_{j} = - 2\mathop \sum \limits_{uv} {\text{res}}_{uv} {\text{shape}}_{i} \left( {u - u_{i} ,v - v_{i} } \right)$$
(8)
IPR does not optimize the projected particle positions \(u_{i}\) and \(v_{i}\), but the 3D position \(x_{i} ,y_{i} ,z_{i}\) and the intensity \(I_{i}\); \(u_{i}\) and \(v_{i}\) are functions of the 3D position of particle i. The exact dependency is defined by the camera model that maps the 3D positions onto pixel coordinates. Given the Jacobian of the camera function, the chain rule can be applied to Eqs. (6) and (7), yielding
$$\begin{gathered} \frac{\delta }{{\delta x_{i} }}{\text{cost}} = \frac{{\delta u_{i} }}{{\delta x_{i} }}\frac{{\delta {\text{cost}}}}{{\delta u_{i} }} + \frac{{\delta v_{i} }}{{\delta x_{i} }}\frac{{\delta {\text{cost}}}}{{\delta v_{i} }} \hfill \\ \frac{\delta }{{\delta y_{i} }}{\text{cost}} = \frac{{\delta u_{i} }}{{\delta y_{i} }}\frac{{\delta {\text{cost}}}}{{\delta u_{i} }} + \frac{{\delta v_{i} }}{{\delta y_{i} }}\frac{{\delta {\text{cost}}}}{{\delta v_{i} }} \hfill \\ \frac{\delta }{{\delta z_{i} }}{\text{cost}} = \frac{{\delta u_{i} }}{{\delta z_{i} }}\frac{{\delta {\text{cost}}}}{{\delta u_{i} }} + \frac{{\delta v_{i} }}{{\delta z_{i} }}\frac{{\delta {\text{cost}}}}{{\delta v_{i} }} \hfill \\ \end{gathered}$$
(9)
Using the same scheme, the second derivative of the cost function with respect to the spatial coordinates can also be determined analytically. It is evident that all derivatives and the local cost can be estimated simultaneously while iterating only once over the immediate proximity of the projected particle position.
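A sketch of how Eqs. (6)–(9) can be evaluated for a single particle on one camera is given below. The shape function, its partial derivatives and the 2 × 3 camera Jacobian J = d(u,v)/d(x,y,z) are assumed to be supplied by the OTF and camera models; all names are illustrative.

```python
import numpy as np

def cost_gradient(res_patch, uu, vv, I_i, shape, dshape_du, dshape_dv, J):
    """Gradient of cost_j w.r.t. (x, y, z) and I for one particle on one camera.

    uu, vv are the pixel coordinates of the local patch relative to (u_i, v_i);
    res_patch is the corresponding patch of the residual image.
    """
    dcost_du = 2.0 * I_i * np.sum(res_patch * dshape_du(uu, vv))   # Eq. (6)
    dcost_dv = 2.0 * I_i * np.sum(res_patch * dshape_dv(uu, vv))   # Eq. (7)
    dcost_dI = -2.0 * np.sum(res_patch * shape(uu, vv))            # Eq. (8)
    # chain rule with the camera Jacobian, Eq. (9)
    dcost_dxyz = J.T @ np.array([dcost_du, dcost_dv])
    return dcost_dxyz, dcost_dI
```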
The classical approach to using these cost derivatives to fit the interim solution to the measurement is to apply common fitting algorithms, like LBFGS, to solve the problem for all particles simultaneously. Unfortunately, standard libraries often do not allow adding and removing particles at runtime (we expect it to be possible to adapt LBFGS to these needs; however, this is work in progress). Furthermore, special features, like ignoring a selection of cameras for each particle to actively suppress ghost particles, are not available. For these reasons, localized methods that treat each particle individually have shown better reconstruction performance. Still, global solvers have the potential to be effective tools for the position optimization, once specialized versions are developed. A first useful development and its application are presented in Sects. 3.6 and 5.
Using the derivatives introduced above, several optimization procedures working on individual particles have been tested, including ones that query the second derivative to estimate the optimal step size, thereby ideally requiring only one sampling of the cost function. However, only the technique that proved to be the most stable is presented here: the steepest-descent method, which requires only the first derivative.

3.3.1 Steepest-descent method (SDM)

The steepest-descent method (Fletcher and Powell 1963) offers the easiest way to use the gradient of the cost function for the optimization of the particle parameters. Let \(\vec{x}_{i}^{T} = \left( {x_{i} ,y_{i} ,z_{i} ,I_{i} } \right)\) be the state vector of particle i. Using Eq. (9), we determine the gradient \(\vec{\nabla }_{i} {\text{cost}}\left( {\vec{x}_{i}^{T} } \right)\) of the cost function with respect to \(x_{i} ,y_{i} ,z_{i}\) and \(I_{i}\) on each camera. First, we determine the optimal intensity of particle i on each camera (as discussed in Sect. 3.3.2) and sum up all derivatives. The normalized negative of this summed gradient gives the direction along which the particle has to be moved in order to reduce the cost: \(\vec{\nabla }_{c} = - \vec{\nabla }{\text{cost}}\left( {\vec{x}_{i}^{T} } \right)/\left| {\vec{\nabla }{\text{cost}}\left( {\vec{x}_{i}^{T} } \right)} \right|\). In this implementation, we apply a simple line search along this direction. With a predefined maximum step width \(d_{\max }\), we query the cost function at three locations
$$1.0*d_{\max } * \vec{\nabla }_{c} , 0.1*d_{\max } * \vec{\nabla }_{c} , 0.01*d_{\max } * \vec{\nabla }_{c} .$$
The particle is simply moved to the location with the lowest value of cost. A typical value for \(d_{\max }\) is 0.4 px in the first IPR iterations. In later iterations, \(d_{\max }\) can be reduced for the sake of accuracy. Several other approaches have been tried (fitting a function and identifying the minimum, using more queries, applying an adaptive step size); however, the described method showed the best compromise between computational efficiency (only three queries of the cost function) and stability (low tendency of oscillations).
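A minimal sketch of such a step is shown below. The cost_at helper, which renders the local residual for a candidate state and returns the cost of Eq. (5), is hypothetical; retaining the current position as a fallback candidate is an added safeguard, not taken from the description above.

```python
import numpy as np

def sdm_step(pos, grad_xyz, cost_at, d_max=0.4):
    """One steepest-descent update of a particle position (x, y, z)."""
    direction = -grad_xyz / np.linalg.norm(grad_xyz)    # descent direction
    # query the cost at three step widths (plus the current position as guard)
    candidates = [pos] + [pos + s * d_max * direction for s in (1.0, 0.1, 0.01)]
    costs = [cost_at(c) for c in candidates]
    return candidates[int(np.argmin(costs))]
```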

3.3.2 Intensity update

Along with the optimization of the position, the intensity of each particle is updated to best fit the current residual images. Wieneke (2013) applied a multiplicative correction scheme, working with the ratio of the particle-augmented residual image to the calibrated particle shape.
In the current implementation, an optimal intensity update is approximated using the cost function. The (local) minimum of cost with respect to \(I_{i}\) for a single particle i in camera j can be determined using a one-step Newton method. To this end, in addition to the first derivative \(\partial_{{I_{i} }} {\text{cost}}_{j}\) [Eq. (8)], the second derivative
$$\partial_{{I_{i}^{2} }}^{2} {\text{cost}}_{j} = 2\mathop \sum \limits_{uv} {\text{shape}}_{i}^{2} \left( {u - u_{i} ,v - v_{i} } \right)$$
(10)
is determined, which can, for sufficiently wide shapes, be approximated as \(\partial_{{I_{i}^{2} }}^{2} {\text{cost}}_{j} = 2 \left\| {{\text{shape}}_{i} } \right\|^{2}\). The new intensity can then be expressed as
$$I_{{i,j, {\text{new}}}} = I_{i,j} - \frac{{\partial_{{I_{i} }} {\text{cost}}_{j} }}{{\partial_{{I_{i}^{2} }}^{2} {\text{cost}}_{j} }} \approx I_{i,j} - \frac{{\partial_{{I_{i} }} {\text{cost}}_{j} }}{{2\left\| {{\text{shape}}_{i} } \right\|^{2} }}$$
(11)
The total intensity update \(I_{{i, {\text{new}}}}\) is then calculated as the average over all relevant cameras j. In order to avoid oscillations of the intensity, a damping factor \(\gamma\) is introduced and applied as follows: \(I_{{i, {\text{new}}}} = \gamma *I_{{i, {\text{new}}}} + \left( {1 - \gamma } \right)* I_{i}\). A value of \(\gamma = 0.5\) has generally shown good results.
In both Wieneke's (2013) approach and earlier in-house DLR implementations, the intensity update was calculated as the average over all cameras in which the current particle is visible. In the current work, we show that ignoring certain cameras for the update achieves a much more effective suppression of ghost particles. After calculating the new intensities for all cameras according to Eq. (11), the results are sorted and the \(N_{{{\text{C}}i}}\) cameras that show the highest intensities are not used for the determination of the average. This results in a slight general underestimation of the particle intensities. However, the impact on the intensity of ghost particles is much higher, since their intensity is often supported by bright peaks on only one or a few cameras. If these cameras are removed from the intensity calculation, the intensity of the ghost particle drops significantly and the particle is removed during the filtering step using a simple threshold. Another likely scenario is that the brightest camera shows an elevated intensity due to an overlap situation. In this case, this camera would bias the intensity toward higher values, which might hamper the identification of the second peak with which the image of the current particle overlaps.
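The per-particle intensity update can be sketched as follows (illustrative Python; the per-camera first derivatives of Eq. (8) and the squared shape norms of Eq. (10) are assumed to be precomputed):

```python
import numpy as np

def update_intensity(I_old, dcost_dI_per_cam, shape_norm_sq_per_cam,
                     n_ci=2, gamma=0.5):
    """One-step Newton intensity update, dropping the n_ci brightest cameras."""
    d1 = np.asarray(dcost_dI_per_cam)
    d2 = 2.0 * np.asarray(shape_norm_sq_per_cam)   # Eq. (10), approximated
    I_cam = I_old - d1 / d2                        # Eq. (11), per camera
    kept = np.sort(I_cam)[:max(1, len(I_cam) - n_ci)]   # ignore brightest cameras
    I_new = kept.mean()
    return gamma * I_new + (1.0 - gamma) * I_old   # damping
```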
For every outer IPR iteration (starting with peak detection and triangulation), \(N_{{{\text{shake}}}}\) iterations of shaking are performed, each of them optionally followed by a filtering step (see below); we call this combination of shaking and filtering the inner iterations. At the beginning of every inner iteration, the residual images \({\text{res}}_{uv}\) are determined anew to reflect the changes in position and intensity from the last iteration. The characteristics of the SDM are compared to the original shake approach by Wieneke in Sect. 4.9.

3.4 Filtering

Each inner iteration of shaking can be followed by a filtering step, where presumed ghost particles are removed from the current interim solution. The most important filtering procedure is removing all particles below a certain intensity threshold \(I_{\min }\); this measure was also proposed in the original IPR paper. Ghost particles usually only poorly overlap with measured peaks or draw their intensity mostly from only one or two cameras. Furthermore, the true particles drain intensity from the ghost particles during the optimization step. For that reason, ghost particles tend to have a lower intensity than true particles, increasingly so with an increasing number of iterations (see Fig. 3). In contrast to Wieneke (2013), we do not define \(I_{\min }\) as a percentage of the average intensity, but as a fixed value, which proved more stable in most situations.
Other filtering methods are applied in addition. Particles that are found in close proximity in 3D space (below a threshold of typically 1 pixel) are merged, as it can be assumed that they represent the same true particle; the darker particle is deleted and its intensity is added to the brighter particle. Also, particles need to be visible in at least two cameras; as soon as they leave the volume covered by at least two cameras, they are deleted.
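A sketch of such a filtering step is given below (illustrative Python; the merge radius is assumed to be expressed in the same units as the particle coordinates, and the default thresholds are only examples):

```python
import numpy as np
from scipy.spatial import cKDTree

def filter_particles(particles, I_min=200.0, r_merge=1.0):
    """Intensity threshold plus merging of close particles; rows are (x, y, z, I)."""
    p = particles[particles[:, 3] >= I_min]        # remove presumed ghosts
    p = p[np.argsort(-p[:, 3])]                     # brightest first
    keep = np.ones(len(p), dtype=bool)
    tree = cKDTree(p[:, :3])
    for i in range(len(p)):
        if not keep[i]:
            continue
        for j in tree.query_ball_point(p[i, :3], r_merge):
            if j > i and keep[j]:                   # j is darker (or equal)
                p[i, 3] += p[j, 3]                  # add intensity to brighter one
                keep[j] = False                     # delete darker duplicate
    return p[keep]
```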

3.5 Rendering of residual images

After triangulation and position optimization with filtering, the residual images \({\text{res}}_{uv}\), as introduced in Sect. 2.3, are created. These images are the basis for the following outer IPR iteration, as the peak detection will now be carried out on \({\text{res}}_{uv}\). In order to more effectively erase known particles from the images, a factor \(f_{{{\text{Pt}}}} \ge 1\) can be applied to the particle intensity while creating the projected images prior to the triangulation of new particles on the residual images.
Rendering the residual images is not only used in step 1 before the triangulation procedure, but also internally in step 4 within every inner iteration of the shaking process (here \(f_{{{\text{Pt}}}} = 1\) is used).

3.6 Position optimization using ensemble-based image matching (‘global shake’)

In Sect. 3.3, we showed how to efficiently determine the derivatives of the cost function with respect to the 4N parameters \(x_{i} ,y_{i} ,z_{i} ,I_{i}\) for all N particles (by estimating the derivatives for each particle on each camera and looping over its immediate proximity in the residual image).
As an alternative approach, the cost function together with this 4N-dimensional gradient vector allows us to apply common fitting algorithms, like the LBFGS algorithm for a global optimization of the problem. This way, all particle positions will be optimized within the same fitting step, rather than being treated consecutively.
As indicated in Sect. 3.3, these algorithms currently cannot compete with approaches that optimize each particle individually. Another feature of this approach, however, proves to be very useful: during the estimation of the derivatives with respect to the particle parameters, it is possible to also estimate the derivatives of the cost function with respect to internal camera parameters at little additional cost. To take advantage of this, we introduce an additional global 2D shift on each camera to model small shifts induced by vibrations during the measurement.
$$\begin{aligned} u_{i} & = u_{i} \left( {x_{i} ,y_{i} ,z_{i} } \right) + {\text{shift}}_{u} \\ v_{i} & = v_{i} \left( {x_{i} ,y_{i} ,z_{i} } \right) + {\text{shift}}_{v} \\ \end{aligned}$$
(12)
These two extra parameters per camera can simply be added to the gradient vector, resulting in a total of \(4N + 2N_{{{\text{cams}}}}\) degrees of freedom to be optimized. This way, the global shake procedure can be used to correct for vibrations in an experimental setup or for small initial deviations from the camera calibration, which might occur between subsequent realizations of an experiment.
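A sketch of the corresponding parameter layout is shown below, using a generic L-BFGS implementation as a stand-in for the solver. The cost_and_grad callable, which returns the total cost of Eq. (5) and its gradient with respect to all packed parameters, is hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def pack(particles, shifts):
    # particles: (N, 4) array of (x, y, z, I); shifts: (N_cams, 2) array
    return np.concatenate([particles.ravel(), shifts.ravel()])

def unpack(theta, n_particles, n_cams):
    particles = theta[:4 * n_particles].reshape(n_particles, 4)
    shifts = theta[4 * n_particles:].reshape(n_cams, 2)
    return particles, shifts

def global_shake(particles, shifts, cost_and_grad):
    """Jointly optimize 4N particle parameters and 2 shift parameters per camera."""
    theta0 = pack(particles, shifts)
    res = minimize(cost_and_grad, theta0, jac=True, method="L-BFGS-B")
    return unpack(res.x, len(particles), len(shifts))
```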
The capability of the global shake, which can be regarded as a bundle adjustment method, to account for decalibrations is demonstrated in Sect. 5.

4 Application of advanced IPR to synthetic data

The IPR algorithm, including all introduced extensions, has been implemented as a C++ library that allows easy scripting via a Python interface. In order to quantify the reconstruction capabilities of extended IPR, it is applied to several synthetic test cases.
Section 4.1 describes the creation of synthetic data and specifies the general parameters of the IPR used for the reconstructions. Section 4.2 gives results for perfect imaging conditions, Sect. 4.3 quantifies the effects of noise and other imaging artifacts, Sect. 4.4 describes the influence of the number of IPR iterations, while Sect. 4.5 does the same for the number of shake and filtering iterations. Section 4.6 demonstrates the effect of permuting the camera order, Sect. 4.7 quantifies the influence of the intensity update and Sect. 4.8 covers the choice of the allowed triangulation error. Finally, Sect. 4.9 compares two approaches of calculating the derivative of the cost function.

4.1 Synthetic test-cases

The performance of the extended IPR processing method when applied to synthetic data is analyzed below. To this end, a synthetic test case was created, which replicates the one used in Wieneke (2013): Four virtual pin-hole cameras of 1300 × 1300 pixels each are oriented in a pyramidal viewing configuration and image randomly created particles in a domain of 50 × 50 × 16 length units. All cameras view the full volume. The particle shape is Gaussian with a diameter of 2 pixels (\(I_{a,r} = 1.0, a = 2\), \(b = 0\), \(c = 2\), see Sect. 2.2). No variations of the OTF were included here. The pixel values are always integers.
Three different cases were considered: perfect imaging conditions (case I), moderate noise (case II) and heavier noise (case III). Particle image densities ranging from 0.02 ppp to 0.16 ppp were investigated.
Unless otherwise noted, the following parameters were used for the IPR reconstructions of all cases: 20 outer IPR iterations \(N_{{{\text{IPR}}}}\) were conducted. For the sake of clarity and visualization of the results, this value remained constant, even if convergence is usually reached earlier. The allowed triangulation error ε starts at ε(1) = 0.4 px and is linearly increased to ε(NIPR) = 0.8 px (see Sect. 3.2). Due to the low initial values of ε, the triangulation can immediately start with a reduced set of cameras (only \(N_{{\text{c}}} - 1 = 3\) cameras are required to show a peak in order to register a particle). Each outer IPR iteration (as depicted in Fig. 1) used \(N_{{{\text{shake}}}}\) = 10 inner iterations of shaking. Of those, particle filtering (intensity, duplicates, FOV) is performed for the last seven iterations. The number of brightest cameras left out in the intensity update (see Sect. 3.3) is set to \(N_{{{\text{C}}i}}\) = 2. The camera order is permuted four times (1–2–3–4, 4–1–2–3, 3–4–1–2, 2–3–4–1) and a triangulation is carried out for each permutation (see Sect. 3.2). The maximum step width is initially set to \(d_{\max } = 0.4\;{\text{px}}\). For iterations 15–17, \(d_{\max }\) is reduced to 0.1 px; for iterations 18–20, a further reduction to \(d_{\max } = 0.02\;{\text{px}}\) is applied. The factor applied to the reprojected image was \(f_{{{\text{Pt}}}}\) = 1.0.
The created virtual images are reconstructed and the results compared to the original particle distribution.

4.2 Perfect imaging conditions

First, the performance of the algorithm will be assessed on images free of any defects (case I). The particles have a constant intensity of 1000 counts in the volume, which is distributed over all pixels of the particle image in each camera (leading to typical peaks of 300–400 counts, see Fig. 4). All pixels of the generated measurement are rounded to the nearest integer. No image pre-processing is applied. The main IPR parameters are indicated in the section above. Specific parameters for this case are: the intensity threshold for identifying peaks on the (residual) images is \(P_{\min }\) = 50 counts, the intensity threshold in the volume, below which a particle is discarded as a ghost particle, is \(I_{\min }\) = 200 counts (i.e., 20% of the average particle intensity).
Figure 5 shows the convergence behavior of IPR using the steepest-descent method (SDM) for varying particle image densities, when applied to the described noise-free synthetic dataset. Shown is the ratio of correctly identified particles \(R_{{\text{f}}}\) (reconstructed within a search radius of 1 pixel around the true particles; normalized with the number of true particles and abbreviated as ‘founds’), the ratio of ghost particles \(R_{{\text{g}}}\) (reconstructed particles, for which no true particle could be found in a radius of 1 pixel; also normalized with the number of true particles and abbreviated as ‘ghosts’) and the average accuracy of the particle positioning of the correctly found particles \({\Delta }_{{\text{f}}}\) (calculated as the average distance of the reprojected points to the original peak position).
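These metrics can be computed, for example, by matching the reconstructed particles to the known ground truth within the 1 px search radius. The sketch below assumes coordinates in pixel-equivalent units and approximates \({\Delta }_{{\text{f}}}\) by the mean 3D match distance, whereas the text above evaluates the reprojection distance in image space; names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def evaluate(true_pos, rec_pos, radius=1.0):
    """Return (R_f, R_g, Delta_f) for a reconstructed particle cloud."""
    tree = cKDTree(true_pos)
    dist, idx = tree.query(rec_pos, distance_upper_bound=radius)
    matched = np.isfinite(dist)                      # reconstructed particles with a true match
    R_f = len(np.unique(idx[matched])) / len(true_pos)   # ratio of found true particles
    R_g = np.sum(~matched) / len(true_pos)               # ghost ratio
    Delta_f = dist[matched].mean() if matched.any() else np.nan
    return R_f, R_g, Delta_f
```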
For lower particle image densities (NI ≤ 0.08 ppp), the method converges to a stable value of \(R_{{\text{f}}}\) \(\approx\) 1 in less than 5 iterations. At NI = 0.14 ppp, the reconstruction converges after 15 iterations. Even for this high seeding density, only 8 particles out of 136,477 are not found. For the highest seeding density, convergence is foreseeable, but not achieved within the 20 iterations used (see also Sect. 4.4).
For low values of \(N_{{\text{I}}}\), the number of ghost particles quickly diminishes after a few iterations. At higher particle image densities though, a rise in ghost numbers occurs within the first five iterations, which can go up to \(R_{{\text{g}}} = 1\). This can be understood, as in these cases many particles could not be identified at this early stage and the uncertainty of the found ones is high. Therefore, the residual images will show relevant intensities, even in spots where particles have already been triangulated. These residual intensities will be picked up by the peak finder and new false particles will arise.
However, with the identification of more and more true particles, the ghost level is gradually reduced. As soon as all real particles are found, also the ghosts have disappeared. Indeed, only for NI = 0.16 ppp ghosts are still present after 20 iterations (as this case is not yet converged). For all other cases, the number of detected ghosts is 0.
In order to give a better insight into the workings within a single IPR iteration, Fig. 6 shows the development of ghost and found true particles at NI = 0.14 ppp, resolved for the triangulation and all shake iterations (blue and orange lines). Additionally, the end-result of each iteration is connected (black and red lines, which then correspond to the results from Fig. 5). Looking at the first few outer iterations, it becomes obvious that initially high numbers of weak ghost particles are triangulated. These remain in the particle cloud for the first three shake iterations (the short flat parts of the curve at the beginning of each outer iteration). As soon as the filtering of the particles is performed, the majority of these are deleted, as their intensity has been further reduced by the shaking. From there, the number slowly decreases, until the next triangulation drives the number up again. At the same time, the number of found true particles is barely reduced by the shaking. With every outer iteration, more true particles are found, reducing residual peaks, which in turn gradually reduces the number of triangulated (ghost) particles, until all particles are identified and zero ghost particles remain.
The position accuracy is very much dependent on the presence of ghost particles and the detection of true particles. As soon as these two have converged, high levels of accuracy are reached. The reduction of the maximum step width \(d_{\max }\) at iteration 15 is clearly visible; e.g., for the 0.12 ppp case, the position accuracy dips from a plateau at 0.04 px to 0.007 px. The second reduction of \(d_{\max }\) at iteration 18 yields a further reduction of \({\Delta }_{{\text{f}}}\) to 0.005 px at this seeding density. At 0.16 ppp, it becomes evident that a reduction of \(d_{\max }\) leads to a reduction in the speed of convergence, when looking at \(R_{{\text{f}}}\) and \(R_{{\text{g}}}\). Therefore, this measure should only be applied after convergence is reached.
Compared to the results of Wieneke (2013), which were obtained using a comparable synthetic experiment, the range of particle image densities has been greatly extended. Wieneke reported a usable particle image density of NI = 0.05 ppp and a sharp decrease of reconstruction quality when going beyond this value using 16 outer and 6 inner iterations. Here we find that images at NI = 0.14 ppp produce results completely void of ghost particles and with very few missing real particles (far below Wieneke's values at 0.05 ppp). The position accuracy of the advanced IPR is much higher, with values Δf ≤ 0.01 px for all converged cases. These quite profound advances in reconstruction quality result from the combination of the several enhancements to the IPR algorithm introduced above, whose effect is quantified in Sects. 4.6 to 4.9.
The next section will discuss the influence of camera noise and particle intensity variations on the reconstruction results.

4.3 Effect of image noise and intensity variations

Reconstructions of noise-free synthetic images provide a means to characterize the theoretical performance of algorithms; however, images gathered in experiments show numerous deviations from perfect conditions. Camera noise can play an important role, as the light budget is typically limited for volumetric investigations, leading to low signal-to-noise ratios. Seeding particles are typically not perfectly monodisperse, leading to sometimes broad distributions of peak intensities. A single particle might even show variations in imaged intensity for the different camera projections due to effects of scattering angles (after Mie theory) (Manovski et al. 2021) or uneven particle shape. In order to characterize the effects of some of these experimental shortcomings on the reconstruction quality, synthetic test cases with two different levels of simulated camera noise and a distribution of particle size (i.e., intensity) were created (cases II and III).
For case II (moderate noise), the particles have a mean intensity of 1000 counts in the volume, with a Gaussian distribution with \(\sigma^{2} = 250\) counts, leading to image peaks of different brightness in the range of ~ 200–500 counts. A constant Gaussian noise floor with a mean of 40 counts and rms width of 25 counts is added to the image. To further simulate photon shot noise, a Poisson process is applied for every pixel, thereby adding noise proportional to the intensity of individual pixels. Case III (heavier noise) applies a Gaussian distribution with \(\sigma^{2} = 500\) counts to the particle intensities, leading to image peaks in a broad range of ~ 40–800 counts. A constant Gaussian noise floor with a mean of 80 counts and rms width of 50 counts is added to the image. Again, pixel shot noise is additionally applied. Case II was chosen to represent an experimental case with very good imaging conditions, e.g., recordings of Helium-filled soap bubbles illuminated by LED light (see e.g., Huhn et al. 2017). Case III represents tougher experimental conditions, like laser-illuminated droplets in air (e.g., Manovski et al. 2021) or particles in water (e.g., Neeteson et al. 2016; Schröder et al. 2020). We are aware that other effects (like speckles and scattering laws) are as well relevant in this context; however, these topics are beyond the scope of this work. Figure 7 shows exemplary excerpts from both cases at low- and high-particle image densities.
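The noise model of cases II and III can be sketched as follows (illustrative Python; the default values correspond to case II as described above):

```python
import numpy as np

def add_noise(clean_img, floor_mean=40.0, floor_rms=25.0, rng=None):
    """Add a Gaussian noise floor and per-pixel Poisson (shot) noise to an image."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = clean_img + rng.normal(floor_mean, floor_rms, clean_img.shape)
    noisy = np.clip(noisy, 0, None)
    # photon shot noise: Poisson process applied per pixel, proportional to intensity
    return rng.poisson(noisy).astype(float)
```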
Image pre-processing consisted of subtracting a constant of 40 counts (case II) and 80 counts (case III), which roughly corresponds to subtracting the minimum or average image, as often performed for experimental cases. The main IPR parameters are the same as indicated in Sect. 4.1. Specific parameters for these cases: the intensity threshold for identifying peaks on the (residual) images had to be increased to \(P_{\min }\) = 70 counts (case II) and \(P_{\min }\) = 90 counts (case III), in order to limit the influence of noise peaks. In order to more effectively remove false particles, triangulated from the remaining noise peaks, the intensity threshold in the volume, below which a particle is discarded as a ghost particle, is increased to \(I_{\min }\) = 350 counts (case II) and \(I_{\min }\) = 450 counts (case III), respectively.
Figure 8 shows the convergence of the advanced IPR processing for varying particle image densities, when applied to the noisy synthetic datasets. For the lower noise level (left column), it can be seen that the general convergence is slowed down compared to the noise-free case. However, all cases below NI = 0.14 ppp converge to \(R_{{\text{f}}} \approx 0.99\) within at most 13 iterations. For NI = 0.14 ppp, a similar convergence would likely be achieved when applying more than 20 iterations. For all but the highest particle image densities, the number of ghost particles is virtually zero, once convergence is reached. The position accuracy shows a constant level, determined by the noise level, which is slightly offset by increasing effects of particle overlap. For the converged cases, values between Δf = 0.09 px and Δf = 0.12 px are attained.
For the higher noise levels and intensity variation of case III, the ratio of found particles drops to values of \(R_{{\text{f}}} \approx\) 0.8, which are achieved up to NI = 0.1 ppp. For higher particle image densities, lower values are seen; still, approx. 66% of the real particles are correctly identified at NI = 0.14 ppp for this relatively high noise level. In general, the convergence is much slower, as the peak detection accuracy suffers from the image noise, requiring higher values of \(\varepsilon\) for a successful triangulation, which are only attained within later IPR iterations. Despite a high number of noise peaks that reach above the peak detection threshold, the occurrence of ghost particles is limited (\(R_{{\text{g}}} < 0.03\) for NI < 0.14 ppp). The position accuracy is more affected by the stronger noise, with most seeding densities ranging between Δf = 0.14 px and Δf = 0.18 px. For both noisy cases, the close proximity of Δf for a wide range of particle image densities points to the image noise as the dominant source for the position uncertainty. Consequently, the reduction of the maximum shake width to 0.1 and 0.02 px in the last IPR iterations is of no consequence here. It is evident that especially for case III not all particles could be reconstructed, which is likely caused by the large range of particle intensities, leading to peaks that are within the noise level and below the peak detection threshold for most or all cameras.
Figure 9 (top) shows the inner-iteration-resolved convergence of case III at NI = 0.12 ppp. In contrast to the processing of case I (see Fig. 6), the number of found true particles is now affected by the filtering process. Even in the later iterations, a certain number of true particles is deleted by the intensity threshold filter, which is likely caused by the presence of low-intensity particles that are bright enough to be picked up by the peak search, but fall below the relatively high volume intensity threshold of \(I_{\min }\) = 450 counts. The number of ghost particles fluctuates strongly within each IPR iteration, illustrating that the noise peaks are indeed strong enough to be detected by the peak finder, but do not form coherent 3D particles and are easily removed by the intensity filter. The increase of triangulated ghost particles with convergence can be attributed to the increasing allowed triangulation error. The lower absolute number of ghost particles at the beginning, in relation to the noise-free case I, can be explained by the increased peak detection threshold (90 counts vs. 50 counts).
The combination of high noise levels and a significant spread of particle intensities inhibits a reconstruction of all particles for case III. The majority of the missing real particles are of such low intensity that they fall below the specified intensity threshold. Figure 9 (bottom) documents this by plotting the ratio of found particles \(R_{{\text{f}}}^{*}\), this time normalized by the number of true particles that have been assigned an intensity above \(I_{\min }\) (450 counts). It can be seen that approx. 95% of these particles are correctly identified up to NI = 0.1 ppp. The remaining undetected particles are missed due to the heavy noise on the particle images, which inhibits a correct detection of the particle peak.
When performing STB evaluations of an experiment with multiple time-steps (ideally fully time-resolved), a full reconstruction is still possible. Due to the predictive step within STB, particles whose trajectory has once been found are typically not lost before leaving the measurement volume. Therefore, it is sufficient to identify a particle in four consecutive time-steps only once during its residence time, which greatly enhances the chances of a successful detection. Two- (Jahn et al. 2017; Lasinger et al. 2019) and multi-pulse particle tracking approaches (Novara et al. 2019a), using, e.g., particle space correlation (PSC) (Novara et al. 2016b) as a local flow predictor, could apply iteratively reduced thresholds in order to track as many particles as possible.
Depending on the type of experiment, several other imaging defects might occur (especially when using laser illumination) that are not integrated into the current simulation. For such experimental data, even lower thresholds might have to be chosen, resulting in high amounts of ghost particles in single time-steps. Again, the dimension of time has to be used to separate them from the true particles.
A useful tool is a histogram of the number of detected peaks in relation to the peak finder threshold, which can help in finding a suitable threshold to separate noise from signal. Another measure that would be of benefit in such cases is to develop or apply a peak detection method that relies less on the intensity as the sole detection criterion. If, e.g., searching for a specific particle image shape instead of a bright peak, the detection would be less susceptible to noise.
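A minimal sketch of such a peak histogram is given below; the reduction of peak detection to a plain 3x3 local-maximum search on a background-subtracted image is an assumption made for brevity and not the detection scheme of the presented implementation. A pronounced valley between the noise population and the particle-image population then suggests a suitable value for \(P_{\min }\):

    import numpy as np
    from scipy.ndimage import maximum_filter

    def peak_intensity_histogram(image, bins=64):
        # local maxima via a 3x3 maximum filter (simplified peak detection)
        local_max = (image == maximum_filter(image, size=3)) & (image > 0)
        peak_values = image[local_max]
        # distribution of peak intensities as a function of a possible threshold
        counts, edges = np.histogram(peak_values, bins=bins)
        return counts, edges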

4.4 Number of IPR iterations

The effect of changing the number of outer (IPR) iterations \(N_{{{\text{IPR}}}}\) is investigated by comparing results of reconstructions using \(N_{{{\text{IPR}}}} = 5\), \(N_{{{\text{IPR}}}} = 20\) and \(N_{{{\text{IPR}}}} = 100\). The linear increase of the triangulation radius from \(\varepsilon = 0.4\) px to \(\varepsilon = 0.8\) px is retained; only the spacing changes according to the number of iterations.
All other parameters remain unchanged compared to Sects. 4.2 and 4.3. Figure 10 (top) shows the convergence of these reconstructions for case I at three particle image densities. It can be seen that for the two lower values (\(N_{{\text{I}}} = 0.04\;{\text{ppp}}\) and \(N_{{\text{I}}} = 0.1\;{\text{ppp}}\)) the differences are minor; in fact, the reconstruction using \(N_{{{\text{IPR}}}} = 5\) converges the fastest, due to the quicker increase in \(\varepsilon\). At very high particle image densities (\(N_{{\text{I}}} = 0.16\;{\text{ppp}}\)), the differences are profound: for \(N_{{{\text{IPR}}}} = 5\) only approx. 60% of the true particles can be identified, while using \(N_{{{\text{IPR}}}} = 20\) increases this value to approx. 85%. When applying 100 iterations, the reconstruction converges to a full solution after 40 iterations.
A slightly different trend can be seen when using the noisy images from case III (Fig. 10, bottom). At low particle image densities, all reconstructions converge to \(R_{{\text{f}}} \approx 0.82\); however, the fewer iterations are used, the faster the convergence, due to the quick increase of \(\varepsilon\). At medium values of NI, the reconstruction using \(N_{{{\text{IPR}}}} = 5\) is not able to converge to the same value of \(R_{{\text{f}}}\) as the other two, and at \(N_{{\text{I}}} = 0.14\;{\text{ppp}}\) the differences between the three reconstructions are again profound (\(R_{{\text{f}}} \approx 0.42\) at \(N_{{{\text{IPR}}}} = 5\), \(R_{{\text{f}}} \approx 0.66\) at \(N_{{{\text{IPR}}}} = 20\) and \(R_{{\text{f}}} \approx 0.74\) at \(N_{{{\text{IPR}}}} = 100\)).
To summarize, increasing the number of IPR iterations is beneficial at very high particle image densities. For typical experimental conditions, using low values of \(N_{{{\text{IPR}}}}\) (between three and ten) usually works equally well, and further iterations might not even improve the results, since experimental imaging conditions are usually more complex than those mimicked in the present synthetic test cases.

4.5 Number of shake iterations

In order to quantify the effect of the number of inner (shake) iterations \(N_{{{\text{shake}}}}\) on the results, the created images have been evaluated by IPR using \(N_{{{\text{shake}}}} = 5\), \(N_{{{\text{shake}}}} = 10\) and \(N_{{{\text{shake}}}} = 20\) shake iterations within each outer iteration. Filtering was always performed starting from the fourth inner iteration. All other parameters remain unchanged compared to Sects. 4.2 and 4.3.
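For orientation, the nested structure of outer and inner iterations referred to throughout this section can be sketched as follows. The callables passed in (triangulation, shaking, filtering, residual computation and the ε schedule) are placeholders for the processing steps described earlier and are not meant to reproduce the actual implementation:

    def ipr_reconstruction(images, triangulate, shake, filter_ghosts,
                           residual_images, eps_schedule,
                           n_ipr=20, n_shake=10, filter_start=4):
        particles = []
        for i in range(n_ipr):                       # outer (IPR) iterations
            eps = eps_schedule(i)                    # allowed triangulation error
            residuals = residual_images(images, particles)
            particles = particles + triangulate(residuals, eps)
            for j in range(n_shake):                 # inner (shake) iterations
                particles = shake(particles, images)
                if j + 1 >= filter_start:            # filtering from the 4th inner iteration
                    particles = filter_ghosts(particles)
        return particles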
Figure 11 (top) shows the results for three different particle image densities of case II. Noticeable differences appear only at the highest particle image densities. At NI = 0.06 ppp, all cases perform equally well. Going to NI = 0.1 ppp, the reconstructions using more iterations show a slightly faster convergence. At NI = 0.14 ppp, the reconstruction using \(N_{{{\text{shake}}}} = 5\) does not converge, in contrast to the other two cases. The reason for this instability becomes apparent when comparing its inner-iteration-resolved convergence (Fig. 11, bottom) to the case using \(N_{{{\text{shake}}}} = 10\) (Fig. 11, middle): in the first iterations, the additional inner iterations of the latter case lead to an ongoing reduction of ghost particles within every outer iteration. This, combined with a better positioning and intensity reconstruction of the particles, yields a less populated residual image, which in turn reduces the number of ghost particles triangulated in the next iteration. For the case using \(N_{{{\text{shake}}}} = 5\), on the other hand, the number of ghost particles triangulated within every outer iteration keeps increasing as the allowed triangulation error grows with the iteration number. The high number of total particles reduces the average particle intensity, explaining the decrease of reconstructed true particles in the last iterations, as more and more true particles are deleted by the intensity threshold. Using a reprojection factor \(f_{{{\text{Pt}}}} > 1\) might stabilize the process, at the expense of some unidentified true particles.
All in all, it can be stated that for particle image densities that are typically encountered in experimental conditions, using low numbers of shake iterations in order to limit processing time comes with little penalty.

4.6 Camera order while triangulating

Conducting several triangulations with permutated camera order, as introduced in Sect. 3.2.1, has a strong influence on the convergence behavior of IPR. Figure 12 compares results from case II, which were reconstructed once using four successive triangulations with camera orders (1–2–3–4, 4–1–2–3, 3–4–1–2, 2–3–4–1) and once using a single triangulation with order (1–2–3–4). All other parameters remain unchanged compared to Sect. 4.3.
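The permutation scheme itself can be sketched in a few lines. In the following sketch, triangulate and merge stand in for the actual triangulation of a residual image set and the combination of the resulting particle clouds (e.g., concatenation followed by removal of near-duplicates); both are assumptions of this illustration rather than the implementation used here:

    def cyclic_camera_orders(n_cameras):
        # cyclic permutations, e.g. for four cameras:
        # (0,1,2,3), (3,0,1,2), (2,3,0,1), (1,2,3,0)
        base = list(range(n_cameras))
        return [base[-k:] + base[:-k] for k in range(n_cameras)]

    def triangulate_with_permutations(residual_images, triangulate, merge):
        clouds = []
        for order in cyclic_camera_orders(len(residual_images)):
            reordered = [residual_images[c] for c in order]
            clouds.append(triangulate(reordered))
        return merge(clouds)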
It is apparent that the permutation approach greatly enhances convergence. For both particle image densities, the permutated case yields higher numbers of true particles, starting from the first iteration. From there, the reconstructions converge within 5 (NI = 0.06 ppp) and 14 (NI = 0.12 ppp) iterations, while the standard approach fails to fully converge within twenty iterations in both cases. At the beginning, the permutated cases also show higher ghost levels, which can be understood, given that more triangulations are involved. However, the ghost particles are quickly suppressed as more and more true particles are found. For the standard case at NI = 0.12 ppp, the ghost level rises within the first five iterations and is only slowly reduced in the following iterations.
For cases I and III, similar results were attained. The slight increase in computational effort by conducting multiple triangulations is easily compensated by the much faster convergence.

4.7 Intensity update variations

As described in Sect. 3.2.2, all reconstructions presented until now use an intensity update that ignores the two brightest cameras (NCi = 2), i.e., uses the average of the two (locally) weakest cameras to compute the updated intensity. In this section, we analyze the impact of changing NCi. To this end, reconstructions of all test cases and particle image densities were conducted, varying NCi from 0 to 3.
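A minimal sketch of this intensity update is given below; it assumes that the local camera intensities have already been sampled at the reprojected particle position and is not meant to reproduce the full update step:

    import numpy as np

    def updated_particle_intensity(local_camera_intensities, n_ci=2):
        # sort ascending and drop the n_ci locally brightest cameras;
        # n_ci = 0 reproduces the plain average over all cameras
        values = np.sort(np.asarray(local_camera_intensities, dtype=float))
        n_keep = max(1, len(values) - n_ci)   # keep at least one camera
        return values[:n_keep].mean()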
Figure 13 shows selected results for cases I and III. For noise-free images at moderate seeding density (NI = 0.08 ppp), all versions start at the same fraction of found true particles; however, the more cameras are ignored, the better ghost particles can be suppressed. In case all cameras are used for the update (NCi = 0), the ghost fraction is too high to be quickly corrected for and convergence is delayed. At lower particle image densities, all versions perform approx. the same. In these noise-free conditions, the intensity information from peaks on a single camera is sufficient for an accurate determination, as can be seen when looking at the result with higher particle image density (NI = 0.14 ppp). Again, the fraction of found true particles starts at the same value for all values of NCi, while the ghost levels vary. The case with NCi = 3 converges within 12 iterations. Including two cameras in the intensity update (NCi = 2) leads to a slower convergence, which is reached after 15 iterations. For the other cases, convergence is not attained; at NCi = 0, only approx. 60% of the true particles can be identified. These differences stem from the strongly differing ghost suppression capabilities. A ghost particle is typically supported by strong peaks on one or two cameras only; the other cameras show low-intensity residuals or only partial overlap with a particle image. Therefore, the chances of ghost particles retaining a high intensity are greatly reduced when using NCi = 2 or NCi = 3.
The situation changes to a certain degree when noise and other imaging defects are present. The result of case III at NI = 0.12 ppp shows that here the reconstruction using NCi = 3 performs worse than NCi = 2 and even becomes slightly unstable (\(R_{{\text{f}}}\) going down in the last iterations), such that the reconstruction using NCi = 1 reaches the same level, despite converging more slowly initially. Again, including all cameras in the intensity update performs worst. The reason for the worse performance of NCi = 3 in this case is likely a stronger deletion of true particles by the intensity filtering, due to the presence of weak particles and the influence of the noise on the intensity registered in each camera.
Due to the high relevance of noise in real experimental data, NCi = 2 was chosen as the standard value for a four-camera setup, as regarded here. When using more cameras, a higher value of NCi might also be chosen.

4.8 Choice of triangulation error ε

As introduced in Sect. 3.2.2, all previously presented IPR reconstructions apply a triangulation radius that increases linearly over the \(N_{\text{IPR}}\) outer iterations: \(\varepsilon(i) = \varepsilon(0) + \frac{i}{N_{\text{IPR}} - 1}\,\Delta \varepsilon\), with \(\Delta \varepsilon = \varepsilon(N_{\text{IPR}}) - \varepsilon(0)\), \(\varepsilon(0) = 0.4\;\text{px}\) and \(\varepsilon(N_{\text{IPR}}) = 0.8\;\text{px}\). In this section, the differences to using a fixed value of ε, as applied, e.g., in Wieneke (2013), are discussed, as well as other functional forms for increasing ε.
Figure 14 compares the results of reconstructions using fixed values of \(\varepsilon = 0.6\;{\text{px}}\) and \(\varepsilon = 0.8\;{\text{px}}\) to the linear ramp approach for two particle image densities from case III. At low particle image density (NI = 0.04 ppp), the results do not differ much, with the reconstruction using \(\varepsilon = 0.6\;{\text{px}}\) converging quickest, followed by the ramp approach. Using \(\varepsilon = 0.8\;{\text{px}}\) results in a slightly slower convergence, likely due to an increased appearance of ghost particles, caused by noise peaks and the relatively high allowed triangulation error. Eventually, all approaches converge to a comparable level of approx. 82% of found particles.
At high particle image density (NI = 0.14 ppp), the results are different. Here, the ramp approach consistently yields the best results. Using a fixed \(\varepsilon = 0.6\;{\text{px}}\) shows a slower convergence and a lower end value, while the fixed \(\varepsilon = 0.8\;{\text{px}}\) case completely fails to yield a significant number of true particles. To better understand these results, Fig. 15 shows the inner-iteration-resolved convergence of the two fixed cases at NI = 0.12 ppp, which can be directly compared to the linear-ramp result given above in Fig. 9. It is obvious that both cases using a fixed value of ε exhibit a high level of triangulated ghost particles at the beginning of the reconstruction (\(R_{{\text{g}}} \approx 2.5\) at \(\varepsilon = 0.6\;{\text{px}}\) and \(R_{{\text{g}}} \approx 5.0\) at \(\varepsilon = 0.8\;{\text{px}}\)), which can be avoided using a low initial value of ε, as the ramp approach does (here \(R_{{\text{g}}}\) always remains below 0.8). At a fixed value of \(\varepsilon = 0.6\;{\text{px}}\), the reconstruction is able to recover from the initially high ghost fraction, as enough true particles are above the intensity threshold to allow convergence; at \(\varepsilon = 0.8\;{\text{px}}\), however, the intensity available from the images is distributed over so many triangulated particles that nearly all of them fall below the threshold of \(I_{\min }\) = 450 counts. Of the initially triangulated 55% of true particles, less than 3% remain after filtering. The ghost particles are nearly completely deleted as well. Therefore, the residual image used for the next triangulation iteration is almost identical to the original image, and the reconstruction shows no signs of convergence over all 20 iterations.
Of course, these results will vary with the chosen parameters. If the triangulations of the first IPR iterations were conducted using the full set of cameras (i.e., requiring a peak on all 4 cameras, instead of just 3), the number of triangulated particles would be reduced; however, this would require more parameters and proved to be less flexible.
To summarize, using a fixed triangulation error can yield competitive results, especially at lower seeding densities and low-noise conditions, as long as it is chosen small enough. However, this requires a very accurate calibration that is not always attainable in real-life experiments. Therefore, we recommend using a triangulation error that is increasing with IPR iteration, which intrinsically provides a way to limit the occurrence of ghost particles in the first iterations, where the residual image is still very populated.
Other approaches than a linear increase of ε with the outer iteration were also explored: a quadratic function \(\varepsilon(i) = \varepsilon(0) + \left( \frac{i}{N_{\text{IPR}} - 1} \right)^{2} \Delta \varepsilon\) and a square-root function \(\varepsilon(i) = \varepsilon(0) + \sqrt{\frac{i}{N_{\text{IPR}} - 1}}\,\Delta \varepsilon\). We saw that the square-root function led to faster convergence at noise-free conditions and at relatively low seeding densities, where the quicker progression to higher values of ε does not lead to critical ghost levels. At higher seeding densities and with image noise, however, a slower increase of ε better suppresses ghost particles and converges to higher values of found particles, thereby favoring the linear and the quadratic approach. Overall, the linear approach showed the most balanced results.
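The three schedules translate directly into a small helper function; the sketch below simply transcribes the formulas given above, with the start and end values used in this work as defaults:

    def epsilon_schedule(i, n_ipr, eps_start=0.4, eps_end=0.8, mode="linear"):
        # allowed triangulation error [px] for outer iteration i = 0 ... n_ipr - 1
        t = i / (n_ipr - 1)
        delta = eps_end - eps_start
        if mode == "linear":
            return eps_start + t * delta
        if mode == "quadratic":
            return eps_start + t ** 2 * delta
        if mode == "sqrt":
            return eps_start + t ** 0.5 * delta
        raise ValueError("unknown mode: " + mode)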
The process of finding the best value of ε for the current iteration could even be automated, further reducing the number of parameters and iterations.

4.9 About derivatives

As laid out in Sect. 3.3, this IPR implementation applies an analytic derivation of the cost function in order to determine the optimal direction and step for the repositioning of each particle in each time-step. Here we want to characterize the differences when comparing this approach to the one applied in Wieneke (2013), who used numeric derivatives by moving the particle by Δp = ±0.1 px in all three directions of space, fitting a second-order polynomial to the three values in each direction and finally moving the particle to the minimum of this polynomial (or applying the maximum step of 0.1 px in case no minimum is found). This ‘classic’ position optimization approach was implemented in the current framework, and two versions were used for reconstruction: one always using Δp = ±0.1 px, the other using Δp = ±0.23 px with reductions to Δp = ±0.1 px at iteration 15 and to Δp = ±0.02 px at iteration 18. These values are comparable to the ones used for the analytic approach, with a maximum step width of 0.23 px in each direction of space corresponding to a maximum step width of 0.4 px in 3D space. Please note that Wieneke's intensity update was not adopted; the one described in Sect. 3.3.2 is applied in all cases.
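For illustration, one axis of such a ‘classic’ shake step can be sketched as below; this is a schematic re-implementation of the polynomial fit as described above, under the assumption that the cost function has already been sampled at the three positions, and not the code used in either implementation:

    def parabolic_step(cost_minus, cost_center, cost_plus, dp=0.1, max_step=0.1):
        # fit f(x) = a*x**2 + b*x + c through the cost sampled at -dp, 0, +dp
        a = (cost_plus - 2.0 * cost_center + cost_minus) / (2.0 * dp * dp)
        b = (cost_plus - cost_minus) / (2.0 * dp)
        if a > 0.0:                         # convex fit: analytic minimum at -b/(2a)
            step = -b / (2.0 * a)
        else:                               # no minimum found: full step downhill
            step = -max_step if b > 0.0 else max_step
        # limit the step to the maximum allowed shake width
        return max(-max_step, min(max_step, step))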
Figure 16 compares results from two reconstructions using numeric derivatives to an otherwise identical reconstruction using analytic derivatives. As can be seen, the differences are minor. Only at very high particle image densities (NI = 0.16 ppp) do the curves of found true and ghost particles differ appreciably. For this case, the reconstruction using numeric derivatives with a small step size (Δp = ±0.1 px) converges slowest; an increase to Δp = ±0.23 px accelerates convergence nearly to the level of the reconstruction using analytic derivatives. At lower particle image densities, all algorithms perform basically identically. The positional accuracy of the case using a static value of Δp = ±0.1 px is typically lower than for the cases which reduce the step width for the final iterations. The analytic approach always retains a small accuracy benefit over both numeric approaches.
The findings above indicate that using numeric derivatives is a viable alternative to analytic derivatives in terms of reconstruction performance. However, the availability of analytic derivatives allows for the application of other concepts to solve the reconstruction problem, like the ‘global shake’ bundle adjustment scheme discussed in the next section.

5 Decalibration correction using global shake

All previously presented reconstructions used a perfectly calibrated camera system. In real-life experiments, however, decalibrations are common. These might occur as low-frequency vibrations induced, e.g., by the apparatus (e.g., a wind tunnel), or as slight movements of the camera system caused by temperature differences or simply people walking by. For the latter, it is possible to conduct a volume self-calibration prior to the evaluation of every measurement run; however, this procedure might prove cumbersome. For time-resolved series of images, the calibration is typically corrected only for the beginning or the average of the evaluation; the individual time-steps might still suffer from vibrations occurring within the run-time of the experiment (Michaelis and Wolf 2011).
For all these reasons, an automatic calibration correction for every evaluated time-step is desirable. As laid out in Sect. 3.6, the ‘global shake’ method, consisting of a joint solution of the cost-function derivatives of all particles and additional parameters for the camera shift, can be applied for this purpose. In order to document its functionality, test cases with intentional decalibrations were created. To this end, the four cameras were subjected to a shift of \(\Delta_{c}\) in both image space directions prior to the reconstruction. Cases with \(\Delta_{c} = 0.5\;{\text{px}}\) and \(\Delta_{c} = 0.1\;{\text{px}}\) were considered. The sign of \(\Delta_{c}\) was varied to avoid a canceling-out of the shift between the different cameras. In order to better show the influence of the calibration correction, the first five IPR iterations of the reconstruction followed the standard procedure as introduced in Sect. 4.2. From the sixth iteration on, five iterations of global shake (using the steepest-descent method) were performed after the normal shaking. For comparison, further reconstructions were conducted that completely lacked the calibration correction via global shake.
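Schematically, such a joint update can be set up as a single optimization over one stacked parameter vector. The sketch below uses SciPy's L-BFGS-B purely as a generic stand-in for the solver and leaves the image-residual cost (which in the actual method is evaluated with analytic derivatives, see Sect. 3.6) as a user-supplied callable; it illustrates the bundle-adjustment idea rather than the presented implementation:

    import numpy as np
    from scipy.optimize import minimize

    def global_shake(positions, shifts, residual_cost):
        # positions: (n_particles, 3) particle locations
        # shifts:    (n_cameras, 2) per-camera image-space decalibration
        # residual_cost: callable mapping the stacked parameter vector to a scalar
        n_particles = positions.shape[0]
        n_cameras = shifts.shape[0]
        x0 = np.concatenate([positions.ravel(), shifts.ravel()])
        result = minimize(residual_cost, x0, method="L-BFGS-B")
        new_positions = result.x[:3 * n_particles].reshape(n_particles, 3)
        new_shifts = result.x[3 * n_particles:].reshape(n_cameras, 2)
        return new_positions, new_shifts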
Figure 17 compares exemplary results at NI = 0.08 ppp of case I (no noise) in terms of found true particles, ghost particles, reprojected positional error and the decalibration of the camera system (as the average distance of reprojections of known 3D points in original and decalibrated system).
When looking at the results for the higher decalibration (Fig. 17, top), we see an initially high positional error of \(\Delta_{{\text{f}}} = 0.84\;{\text{px}}\), a low fraction of identified particles (\(R_{{\text{f}}} = 0.06\)) and quite high ghost levels (\(R_{{\text{g}}} = 0.63\)). When continuing the reconstruction without calibration correction (dashed lines), \(\Delta_{{\text{f}}}\) is not significantly reduced and the fraction of ghost particles rises further to \(R_{{\text{g}}} = 1.40\), due to the creation of particle peak residuals by the decalibration. More true particles are found, but still only a fraction (\(R_{{\text{f}}} = 0.38\) at the end of the reconstruction). If, on the other hand, global shake is performed starting from iteration 6 (solid curves), the decalibration is corrected after approx. 10 further iterations. At the same time, the number of found true particles rises, the number of ghost particles is reduced until it reaches zero and the positional error shrinks to \(\Delta_{{\text{f}}} = 0.02\;{\text{px}}\). It can be seen that despite the high level of ghost particles, the global shake method was able to completely correct the decalibration.
We frequently observe decalibrations on the order of \(\Delta_{{\text{c}}} = 0.5\;{\text{px}}\) in experimental setups that have been left at rest for a couple of hours or days. Vibrations, on the other hand, can be of very different scale, ranging from tens of pixels in large industrial wind tunnels (Novara et al. 2019b) to fine-scale shifts induced, e.g., by a fan within the camera. Instantaneous large-scale shifts are out of scope for the global shake method, since it relies on lines-of-sight that still overlap with the particle image. Shifts of up to 1.2 px have been successfully corrected.
Larger vibrations can be handled if a time-resolved measurement is available. As the frequency of vibrations is typically lower than the recording rate, only small variations from time-step to time-step are encountered (once an initial shift is corrected). Such a situation is simulated with the second scenario (\(\Delta_{{\text{c}}} = 0.1\;{\text{px}}\)), results of which are shown in Fig. 17, bottom. It can be seen that the convergence behavior is much less affected than for the higher decalibration. Initially, many more true particles are identified (\(R_{{\text{f}}} = 0.43\)), and the reconstruction converges to almost the full solution within the first five iterations. The initial ghost particle level (\(R_{{\text{g}}} = 0.43\)) is quickly reduced, even without calibration correction (contrary to the higher decalibration case). Without corrections, the position accuracy remains limited to approx. \(\Delta_{{\text{f}}} = 0.12\;{\text{px}}\), the same as the system decalibration; this is expected in a no-noise scenario, given the close relation of the two measures. As soon as the global shake is applied, the decalibration is corrected almost instantly and the positional error is reduced to the values seen for a perfectly calibrated case. This study demonstrates that an instantaneous correction of small vibrations using the global shake approach is feasible.
The shown reconstructions are from the noise-free case I (for clarity of presentation); however, the method works equally for the noisy cases II and III.
The application of the camera correction using global shake in a real-life experiment is depicted in Fig. 18. Shown are the detected camera shifts for three cameras (out of six) for a time-series of 200 time-steps, which was recorded in a large-scale Rayleigh–Bénard convection (RBC) experiment (see Bosbach et al. 2019). For Camera 1, an initial decalibration of (− 0.15 px, 0.19 px) is found for the x- and y-image-coordinate, which is stable over the run-time. For Camera 2, a larger initial decalibration (− 0.20 px, − 0.64 px) is seen. Additionally, this camera exhibits a clear vibration in the x-component with an amplitude of approx. 0.03 px and a frequency of approx. 0.69 Hz. The third camera exhibits only a small initial decalibration and shows a low-amplitude vibration at a frequency of approx. 15 Hz, modulated by a lower frequency, which could be an effect of cross-talk with the vibration of Camera 2. This experimental test case demonstrates the useful application of global shake to correct for both initial decalibrations (eliminating the need for a separate volume self-calibration step before the evaluation of each measurement run) and vibrations occurring during the actual run-time.
One thing to keep in mind is that by (repeatedly) correcting the camera shifts, it is theoretically possible that the recalibrated camera system becomes shifted, rotated or stretched in relation to the original one (the same is true for VSC). As a counter-measure, an affine transformation can be determined which maps from the transformed coordinate system onto the old one.
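One possible way of determining such an affine transformation is a least-squares fit over a set of corresponding 3D points known in both systems; the sketch below assumes such correspondences are available (e.g., reconstructed particle positions before and after the correction) and is meant as an illustration, not as the procedure used in the implementation:

    import numpy as np

    def fit_affine_3d(points_new, points_old):
        # least-squares affine map:  points_old ≈ points_new @ A.T + t
        n = points_new.shape[0]
        design = np.hstack([points_new, np.ones((n, 1))])     # rows [x, y, z, 1]
        coeffs, *_ = np.linalg.lstsq(design, points_old, rcond=None)
        A = coeffs[:3].T    # 3x3 linear part (rotation/stretch/shear)
        t = coeffs[3]       # translation
        return A, t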

6 Experimental validation

The extended IPR processing presented in this work has already been applied to a variety of experimental evaluations as part of the shake-the-box (Schanz et al. 2016) method. Therefore, we refrain from presenting a dedicated case here. Instead, we refer to the following examples where the method has been applied: the study of a rotor wake (Wolf et al. 2019); homogeneous turbulence in water (Schröder et al. 2019); the flow over a surface-mounted cube (Schröder et al. 2020); a large-scale turbulent boundary layer investigation (Schanz et al. 2019); and a large-scale investigation of Rayleigh–Bénard convection (Bosbach et al. 2019), see also Godbersen et al. (2020) for a Gallery of Fluid Motion presentation.
Examples for the use of advanced IPR in applications of multi-pulse STB (Novara et al. 2016a) are turbulent boundary layer investigations (Novara et al. 2019a), measurements of the flow on the suction side of a half wing model (Novara et al. 2019b) or investigations of a subsonic jet (Manovski et al. 2021).

7 Summary and outlook

Iterative particle reconstruction (IPR, Wieneke 2013) enabled the reconstruction of particle distributions at much higher particle image densities than previous methods. Together with the shake-the-box (STB, Schanz et al. 2016) algorithm, it initiated a paradigm shift in the field of volumetric flow measurements, replacing MART-based particle reconstructions and correlation-based 3D PIV methods by a direct determination of particle trajectories, while at the same time allowing particle image densities 10 to 30 times higher than possible with classical PTV approaches. The original method introduced by Wieneke allowed processing instantaneous images up to particle image densities of NI = 0.05 ppp; coupled with STB processing, this range could be extended to NI = 0.125 ppp when applied to time-series (Schanz et al. 2016), see also the results of Case D at 0.1 ppp of the 4th Int. PIV Challenge (Kähler et al. 2016).
In this work, an extended IPR is introduced, which amends the original method in several areas. Combined, these enhancements lead to a significant jump in performance, allowing a full reconstruction of particle distributions with up to NI = 0.14 ppp on a single time-step. In perfect imaging conditions, nearly all particles are reconstructed, no ghost particles occur and the average positional error is below 1/100th of a pixel, even at such high particle image densities. With low levels of image noise and particle size variations, only the positional error is affected; for higher noise levels and a broad distribution of particle intensities, the detection of true particles is impaired; however, the occurrence of ghost particles remains low. Such conditions are much closer to real experiments than a noise-free situation. In order to find all particles under such conditions, additional information from the temporal domain is required, which the STB method can provide.
Several changes in relation to the original IPR method, showing a great impact on the reconstruction capabilities, should be straightforward to integrate into existing implementations:
  • In the triangulation step, the cameras are queried for peaks in a certain order. This order has been shown to have a strong influence on whether a single particle can be successfully identified, due to the special role of the first two cameras. Replacing this single triangulation by multiple triangulations, each with a different camera order, and subsequently combining the triangulated particle clouds proved to significantly increase the convergence for all imaging conditions and particle image densities.
  • Another measure that showed an even greater impact is to modify the intensity determination for a triangulated particle. Instead of using the average intensity found at the reprojection point on all cameras, the \(N_{{{\text{C}}i}}\) cameras that show the highest local intensity are ignored and the intensity is determined by averaging the remaining cameras. A value of \(N_{{{\text{C}}i}} = 2\) has shown the best results for the examined four-camera case. Compared to the standard processing (\(N_{{{\text{C}}i}} = 0\)), a very efficient suppression of ghost particles is found, which allows convergence even at high particle image densities.
  • Using an allowed triangulation error ε that increases linearly with the number of iterations proved to be beneficial, especially for cases with high noise levels. Compared to cases using a fixed value of ε, lower levels of ghost particles, better convergence and a higher stability are found.
All of these measures are easy to implement and come with little to no computational effort. Moreover, due to the higher convergence rate, the number of applied iterations could even be reduced.
A study of the number of outer (IPR) iterations and inner (shaking and filtering) iterations showed that increasing both is beneficial when aiming for very high particle image densities, but they can be kept at reasonable numbers for densities up to NI = 0.1 ppp.
Another novel feature of the presented implementation is the analytical derivation of the cost function in order to determine the direction in which a particle has to be shifted in the shake procedure. This approach showed surprisingly low benefits compared to the original method of using numerical derivatives. However, with the knowledge of analytical derivatives it is possible to formulate a global cost function that can be minimized for all particles simultaneously, using standard methods like L-BFGS. So far, this approach has not been able to compete with the single-particle-based shake algorithms, due to the extensive optimization possibilities of the latter; however, this might change if specialized versions of the solver are developed and applied.
Still, this ‘global shake’ approach already has a useful purpose, as shifts of the cameras in relation to the original calibration can be introduced into the cost function. This allows correcting for both initial decalibrations (caused by slight movements of the cameras prior to a measurement) and vibrations occurring during the measurement run-time.
With an effective implementation, the IPR can be quite fast. Using optimal settings for inner and outer iterations, the current version converges in noisy conditions (case III) within 5.2 s at NI = 0.06 ppp and within 19.4 s at NI = 0.1 ppp (97,483 particles) on an AMD Threadripper 1950X machine. Within a time-resolved STB evaluation, fewer iterations could be applied, reducing the computational cost significantly. The current implementation has been applied to many experimental cases, presented in other publications.
Further improvements of the IPR procedure are feasible. Some of these are:
  • For single-particle-based shaking, the information of the second derivative of the cost function could be used to find an optimal step width, eliminating the need to sample the cost function at multiple locations.
  • The global shake method could be adapted such that the addition and removal of particles are supported by the solver, or such that individual particle properties are better integrated, e.g., identifying the brightest cameras for the intensity update of each particle. Furthermore, the cost function need not be limited to a two-dimensional shift per camera (accounting for vibrations); more complex corrections of the camera model could be realized, such that instantaneous and local distortions of the calibration, induced, e.g., by density gradients or curved refractive index changes, could be captured.
  • An important task is an improved automation of the parameter settings. Currently, most parameters are tuned by hand, requiring an experienced user and time, especially if conditions change within several realizations of an experiment. An automated detection of parameters like the peak detection intensity threshold, the allowed triangulation error, the maximum step size of the shake algorithm, the intensity filter and the number of inner and outer iterations would greatly simplify (and likely speed up) the evaluation.
  • The filtering for ghost particles could include much more information than the particle intensity, which is the only criterion applied at the moment. Additional information, like the intensity variations between the different cameras and the local derivative of the cost function, could be used to further discern between true and ghost particles. Machine-learning-based methods incorporating this information are currently being tested.
  • Peak detection methods that are less susceptible to image noise could be developed and applied. The currently used approach of looking for image peaks above an intensity threshold does not take the particle image shape into account. As this shape is already known via the calibrated OTF, this additional information should be harnessed. Here, too, machine-learning approaches might be promising.

Acknowledgements

Open Access funding provided by Projekt DEAL.

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Literature
Atkinson C, Soria J (2009) An efficient simultaneous reconstruction technique for tomographic particle image velocimetry. Exp Fluids 47:563–578
Bosbach J, Schanz D, Godbersen P, Schröder A (2019) Dense Lagrangian particle tracking of turbulent Rayleigh–Bénard convection in a cylindrical sample using shake-the-box. In: 17th European turbulence conference (ETC 2019), Torino, Italy
Chiu WC, Rib LN (1956) The rate of dissipation of energy and the energy spectrum in a low-speed turbulent jet. Eos Trans Am Geophys Union 37(1):13–26
Cierpka C, Lütke B, Kähler CJ (2013) Higher order multi-frame particle tracking velocimetry. Exp Fluids 54(5):1533
Cornic P, Leclaire B, Champagnat F, Le Besnerais G, Cheminet A, Illoul C, Losfeld G (2020) Double-frame tomographic PTV at high seeding densities. Exp Fluids 61(2):1–24
Elsinga GE, Scarano F, Wieneke B, van Oudheusden BW (2006) Tomographic particle image velocimetry. Exp Fluids 41:933–947
Elsinga GE, Westerweel J, Scarano F, Novara M (2011) On the velocity of ghost particles and the bias errors in tomographic-PIV. Exp Fluids 50(4):825–838
Gesemann S, Huhn F, Schanz D, Schröder A (2016) From noisy particle tracks to velocity, acceleration and pressure fields using B-splines and penalties. In: 18th international symposium on application of laser and imaging techniques to fluid mechanics, Lisbon, Portugal
Huhn F, Schanz D, Gesemann S, Dierksheide U, van de Meerendonk R, Schröder A (2017) Large-scale volumetric flow measurement in a pure thermal plume by dense tracking of helium-filled soap bubbles. Exp Fluids 58:116
Jahn T, Schanz D, Gesemann S, Schröder A (2017) 2-Pulse STB: 3D particle tracking at high particle image densities. In: 12th international symposium on PIV (PIV17), Busan, Korea
Kähler CJ, Astarita T, Vlachos PP, Sakakibara J, Hain R, Discetti S, La Foy R, Cierpka C (2016) Main results of the 4th International PIV Challenge. Exp Fluids 57(6):97
Lynch K, Scarano F (2015) An efficient and accurate approach to MTE-MART for time-resolved tomographic PIV. Exp Fluids 56:66
Maas HG, Grün A, Papantoniou D (1993) Particle tracking in three dimensional turbulent flows—part I: photogrammetric determination of particle coordinates. Exp Fluids 15:133–146
Malik N, Dracos T, Papantoniou D (1993) Particle tracking in three dimensional turbulent flows—part II: particle tracking. Exp Fluids 15:279–294
Manovski P, Novara M, Mohan NKD, Geisler R, Schanz D, Agocs J, Godbersen P, Schröder A (2021) 3D Lagrangian particle tracking of a subsonic jet using multi-pulse shake-the-box. Exp Therm Fluid Sci 123:110346
Michaelis D, Novara M, Scarano F, Wieneke B (2010) Comparison of volume reconstruction techniques at different particle densities. In: 15th international symposium on applications of laser techniques to fluid mechanics
Michaelis D, Wolf CC (2011) Vibration compensation for tomographic PIV using single image volume self calibration. In: 9th international symposium on particle image velocimetry (PIV11), Kobe, Japan
Nayler JL, Frazer BA (1917) Preliminary report upon an experimental method of investigating, by the aid of kinematographic photography, the history of eddying flow past a model immersed in water. Tech. rep. of the Advisory Committee for Aeronautics for the year 1917–1918, London, p 6
Neeteson NJ, Bhattacharya S, Rival DE, Michaelis D, Schanz D, Schröder A (2016) Pressure-field extraction from Lagrangian flow measurements: first experiences with 4D-PTV data. Exp Fluids 57(6):1–18
Nishino K, Kasagi N, Hirata M (1989) Three-dimensional particle tracking velocimetry based on automated digital image processing. Trans ASME J Fluid Eng 111:384–390
Novara M, Batenburg KJ, Scarano F (2010) Motion tracking enhanced MART for tomographic PIV. Meas Sci Technol 21:035401
Novara M, Scarano F (2013) A particle-tracking approach for accurate material derivative measurements with tomographic PIV. Exp Fluids 54:1584
Novara M, Schanz D, Reuther N, Kähler CJ, Schröder A (2016a) Lagrangian 3D particle tracking in high-speed flows: shake-the-box for multi-pulse systems. Exp Fluids 57:128
Novara M, Schanz D, Gesemann S, Lynch KP, Schröder A (2016b) Lagrangian 3D particle tracking for multi-pulse systems: performance assessment and application of shake-the-box. In: 18th international symposium on applications of laser and imaging techniques to fluid mechanics, Lisbon, Portugal
Novara M, Schanz D, Geisler R, Gesemann S, Voss C, Schröder A (2019a) Multi-exposed recordings for 3D Lagrangian particle tracking with multi-pulse shake-the-box. Exp Fluids 60(3):1–19
Novara M, Schanz D, Geisler R, Gesemann S, Philipp F, Agocs J, Schröder A (2019b) Instantaneous self-calibration of a 3D imaging system in industrial facility with strong vibrations. In: Proceedings of the 13th international symposium on particle image velocimetry, no 180, pp 1133–1143
Raffel M, Willert CE, Scarano F, Kähler CJ, Wereley ST, Kompenhans J (2018) Particle image velocimetry: a practical guide. Springer, Berlin
Schanz D, Gesemann S, Schröder A, Wieneke B, Novara M (2013a) Non-uniform optical transfer functions in particle imaging: calibration and application to tomographic reconstruction. Meas Sci Technol 24:024009
Schanz D, Schröder A, Gesemann S, Michaelis D, Wieneke B (2013b) Shake-the-box: a highly efficient and accurate tomographic particle tracking velocimetry (TOMO-PTV) method using prediction of particle position. In: 10th international symposium on PIV, Delft, The Netherlands
Schanz D, Schröder A, Novara M, Geisler R, Agocs J, Eich F, Bross M, Kähler CJ (2019) Large-scale volumetric characterization of a turbulent boundary layer flow. In: Proceedings of the 13th international symposium on particle image velocimetry, no 182, pp 251–265
Schneiders JFG, Scarano F (2016) Dense velocity reconstruction from tomographic PTV with material derivatives. Exp Fluids 57:9
Schröder A, Geisler R, Staack K, Wieneke B, Elsinga G, Scarano F, Henning A (2011) Lagrangian and Eulerian views into a turbulent boundary layer flow using time-resolved tomographic PIV. Exp Fluids 50:1071–1091
Schröder A, Schanz D, Gesemann S, Huhn F, Paz DG, Perez-Munuzuri V, Bodenschatz E (2019) Measuring the full velocity gradient and dissipation rate tensor in homogeneous turbulence using shake-the-box and FlowFit. In: 17th European turbulence conference (ETC 2019), Torino, Italy
Schröder A, Willert C, Schanz D, Geisler R, Jahn T, Gallas Q, Leclaire B (2020) The flow around a surface mounted cube: a characterization by time-resolved PIV, 3D Shake-The-Box and LBM simulation. Exp Fluids 61(9):1–22
Tan S, Salibindla A, Masuk AUM, Ni R (2020) Introducing OpenLPT: new method of removing ghost particles and high-concentration particle shadow tracking. Exp Fluids 61(2):1–16
van Gent PL, Michaelis D, van Oudheusden BW, Weiss P-É, de Kat R, Laskari A, Jeon YJ, David L, Schanz D, Huhn F, Gesemann S, Novara M, McPhaden C, Neeteson NJ, Rival DE, Schneiders JFG, Schrijer FFJ (2017) Comparative assessment of pressure field reconstructions from PIV measurements and Lagrangian particle tracking. Exp Fluids 58:33
Wieneke B (2008) Volume self-calibration for 3D particle image velocimetry. Exp Fluids 45:549–556
Wieneke B (2013) Iterative reconstruction of volumetric particle distribution. Meas Sci Technol 24:024008
Wolf CC, Schwarz C, Kaufmann K, Gardner AD, Michaelis D, Bosbach J, Schanz D, Schröder A (2019) Experimental study of secondary vortex structures in a rotor wake. Exp Fluids 60(11):1–16