Published in: Experiments in Fluids 4/2024

Open Access 01-04-2024 | Research Article

Improving depth uncertainty in plenoptic camera-based velocimetry

Authors: Mahyar Moaven, Abbishek Gururaj, Vrishank Raghav, Brian Thurow


Abstract

This work describes the development of a particle tracking velocimetry (PTV) algorithm designed to improve three-dimensional (3D), three-component velocity field measurements using a single plenoptic camera. Particular focus is on mitigating the longstanding depth uncertainty issues that have traditionally plagued plenoptic particle image velocimetry (PIV) experiments by leveraging the camera’s ability to generate multiple perspective views of a scene in order to assist both particle triangulation and tracking. 3D positions are first estimated via light field ray bundling (LFRB) whereby particle rays are projected into the measurement volume using image-to-object space mapping. Tracking is subsequently performed independently within each perspective view, providing a statistical amalgamation of each particle’s predicted motion through time to help guide 3D trajectory estimation while simultaneously protecting the tracking algorithm from physically unreasonable fluctuations in particle depth positions. A synthetic performance assessment revealed a reduction in the average depth errors obtained by LFRB as compared to the conventional multiplicative algebraic reconstruction technique when estimating particle locations. Further analysis using a synthetic vortex ring at a magnification of − 0.6 demonstrated plenoptic-PTV capable of maintaining the equivalent of 0.1–0.15 voxel accuracy in the depth domain at a spacing-to-displacement ratio of 5.3–10.5, an improvement of 84–89% compared to plenoptic-PIV. Experiments were conducted at a spacing-to-displacement ratio of approximately 5.8 to capture the 3D flow field around a rotor within the rotating reference frame. The resulting plenoptic-PIV/PTV vector fields were evaluated with reference to a fixed-frame stereoscopic-PIV (stereo-PIV) validation experiment.
A systematic depth-wise (radial) component of velocity directed toward the wingtip, consistent with observations from prior literature and the stereo-PIV experiments, was captured by plenoptic-PTV at magnitudes similar to the validation data. In contrast, plenoptic-PIV did not discern any coherent radial motion. Our algorithm constitutes a significant advancement in the functionality and versatility of single-plenoptic camera flow diagnostics by directly addressing the primary limitation associated with plenoptic imaging.

Graphical abstract


1 Introduction

Plenoptic particle image velocimetry (plenoptic-PIV) is a nonintrusive volumetric flow measurement technique based on principles of light field imaging whereby angular information, in addition to spatial information, of light rays emanating from particles is encoded into images. This is achieved by inserting an array of microlenses between the main lens and sensor, splitting up incoming rays according to their angle of incidence, allowing for computational refocusing through the depth domain in addition to perspective image generation across the aperture plane. Plenoptic-PIV has established itself as a powerful single-camera 3D optical flow diagnostics tool, offering a practical alternative to its multi-camera counterparts (e.g., tomographic particle image velocimetry; tomo-PIV) (Fahringer et al 2015). Its simplicity of setup and low optical access requirements have allowed the technique to be applied to a wide range of applications, including studying the flow field around the comb jelly (Tan et al 2020), the wake behind hemispherical roughness elements (Johnson et al 2017), and shock-wave/boundary-layer interactions (Jones et al 2020). Gururaj et al (2021) introduced rotating 3D velocimetry (R3DV) as a novel single-camera technique utilizing plenoptic imaging to acquire 3D velocity fields around a rotor within the rotating frame of reference. The concept of R3DV is illustrated in Fig. 1, wherein a plenoptic camera captures images of particles over a rotor via a mirror attached to a simultaneously rotating hub. 3D particle positions can be deduced by processing of plenoptic images before applying velocimetry algorithms to analyze particle motion through time. Many rotating flows of interest, such as the evolution of leading-edge vortices (LEVs) on insect wings and rotorcraft, have significant radial components of velocity. 
When viewed in the rotating reference frame, i.e., the viewpoint of a plenoptic camera in an R3DV setup, this radial flow is observed as an out-of-plane displacement (e.g., red star in Fig. 1). However, the limited baseline parallax of the plenoptic system manifests itself in an out-of-plane uncertainty that has been quantified as being 5\(\times\) higher than that of the in-plane (Fahringer et al 2015). This issue becomes increasingly prominent at lower magnifications as the distance of the imaged scene relative to the aperture diameter rises, and is of particular concern in applications where the fundamental flow physics depend on capturing particle displacements in the direction of depth relative to the camera.
While the inception of R3DV followed traditional plenoptic flow diagnostics by adopting 3D-PIV techniques, tomographic experiments have recently seen a shift toward particle tracking velocimetry (PTV) with a commonly cited advantage of the latter being the production of a velocity vector at each particle location, whereas the former requires 4–10 particles to provide a local displacement measurement (Doh et al 2012; Schanz et al 2016; Schröder and Schanz 2023). This allows PTV to extract pointwise statistics as well as avoid the low-pass filtering effects of velocity gradients inherent to PIV. In its current form, R3DV vector fields retrieved from plenoptic-PIV significantly underestimate out-of-plane displacements, a natural consequence of averaging a group of independent random uncertainties in each PIV subregion. Thus, an alternative processing technique that accurately captures the radial flow and allows the plenoptic camera to be considered a truly 3D/3C measurement system is needed. This work presents a novel plenoptic particle tracking algorithm developed specifically to mitigate the high instantaneous depth uncertainties inherent to low-magnification plenoptic imaging experiments (e.g., R3DV) by leveraging each particle’s temporal history along with the multitude of views available from a plenoptic image. In particular, knowledge of a particle’s previous states allows for regularization via a fitting function, while the myriad of viewpoints aid in promoting correct particle correspondence between adjacent time points.
Although considerable work has been undertaken to validate and optimize plenoptic-PIV, there is a paucity of research to demonstrate the feasibility of applying PTV to plenoptic images. Particle tracking in tomographic imaging is commonly performed in 3D space following the estimation of positions using triangulation-based localization algorithms. The plenoptic camera’s limited baseline parallax presents a unique challenge in applying existing 3D-PTV techniques, as temporal matching is compromised due to particle positions randomly fluctuating in the out-of-plane domain, the effects of which become increasingly problematic at higher seeding densities. In addition, these fluctuations rapidly amplify as the magnification is reduced. Thus, previous efforts have employed plenoptic refocusing to recover depth prior to tracking. These techniques are similar to synthetic aperture PIV which also drew inspiration from light field imaging (Belden et al 2010). The works of Bajpayee and Techet (2013) and Liu et al (2019) found depth positions of particle images by applying the maximum sharpness principle to incrementally refocused images prior to using the relaxation algorithm described in Ohmi and Li (2000) to perform tracking. Both studies successfully showcased the potential of plenoptic-PTV, with the former obtaining accurate Lagrangian tracks for a synthetic vortex ring while the latter applied the method to turbulent flow in a water tunnel and found qualitative consistency, although Bajpayee and Techet (2013) noted the limits of the algorithm with respect to seeding density and displacement magnitude were yet to be explored. Chen and Sick (2016, 2017) showcased the application of plenoptic imaging in internal combustion engines where optical access is limited. Chen and Sick (2016) performed 3D plenoptic-PTV on jet flow where their results showed qualitative agreement to the authors’ expectations of engine-like air flow as well as consistency with 2D-PIV. 
However, in a performance test of their chosen PTV algorithm—the commercially available RxFlow 3.1 software from Raytrix GmbH—they reported modest vector yields (30.1%), citing the algorithm’s difficulty in accurately evaluating positions in dense regions and its minimal displacement constraint for tracking as potential limitations (Chen and Sick 2017). In light of this, the authors opted to average multiple instances of the imaged flow rather than obtain instantaneous 3D/3C vector fields. Nobes et al (2014) also applied the refocusing scheme of Raytrix software to acquire 3D particle positions and performed PTV with an in-house algorithm based on a two-frame relaxation approach. Their method proved capable of capturing swirling motion at a relatively low particle density; however, the authors acknowledged large degrees of randomness in out-of-plane motion as a focal point for future improvements. While refocusing is a proven technique, Hall et al (2019) showed that the finite depth of field of the imaging system introduces high levels of ambiguity in determining depth via particle image sharpness criterion and suggested a triangulation method based on the generation of plenoptic perspective views as a preferable alternative. In summary, previous studies have incorporated tracking algorithms that have demonstrated success in tomographic and stereoscopic imaging but are not specifically designed to address the challenges associated with plenoptic imaging, making this an appealing topic for further work.
This paper presents a particle reconstruction and tracking scheme developed to overcome limitations associated with relatively high depth uncertainties inherent to plenoptic imaging. Integral to this method is the plenoptic camera’s perspective shifting capability, which is applied to generate multiple viewpoints of a particle field, allowing for 3D triangulation and independent tracking of particle projections through the temporal domain of discrete 2D views. Coupling these concepts together to build 3D tracks while enforcing constraints in both 2D and 3D domains lends itself to an increased likelihood of correct temporal correspondence by protecting the tracking scheme from out-of-plane uncertainties, subsequently allowing fitting of noisy 3D tracks to functions that better represent physically reasonable particle motion. Section 2 details our plenoptic-PTV algorithm, while Sect. 3 introduces synthetic and experimental datasets that were used for validation and benchmarking. Section 4 presents and discusses results from comparisons to plenoptic-PIV where it is revealed that plenoptic-PTV produces an order of magnitude drop in out-of-plane velocity error at lower particle displacements, providing significant headway toward improving the quality of 3D/3C vector fields obtained via single-plenoptic camera imaging.

2 Plenoptic-PTV methodology

The algorithm presented herein applies a triangulation-based approach, termed Light Field Ray Bundling (LFRB), to reconstruct particle fields in 3D space from plenoptic perspective images prior to tracking. An earlier implementation of LFRB was introduced by Clifford et al (2019) and will be referred to herein as LFRB 2019 while the latest version will be denoted as LFRB 2023. Figures 2 and 3 present the LFRB 2023 workflow and the particle tracking scheme, respectively. The italicized labels in each flowchart correspond to subsections that discuss the relevant portion of the algorithm. The primary novelty of the method lies in its hybrid use of 2D and 3D particle position information; the 2D projection of particles on perspective images, found during LFRB, is tracked while simultaneously constraining 2D and in-plane 3D displacements. The resulting collection of 2D tracks subsequently guides the construction of temporal correspondences in 3D. Although this work was developed independently, Wu et al (2021) made significant strides in increasing the seeding density limits of stereo-PTV experiments after creating a similar hybrid 2D/3D tracking method to reconstruct 3D tracks.

2.1 Light field ray bundling

LFRB begins by decoding raw plenoptic images into perspective images that represent a collection of unique views of particles in the flow field. Figures 2a and 2b illustrate an example of perspective generation, whereby subsampling pixels at matching positions relative to each microlens center creates perspective views of five imaged particles. By calculating a particle’s subpixel position in each perspective and knowing the corresponding aperture location, a particle’s ray can be represented in image space by a two-plane parameterization, denoted by solid lines between the microlens and main lens planes in the ray diagram of Fig. 2c. The ray is then projected into object space (dashed rays in Fig. 2c) via light-field calibration, which maps the world view to the imaging plane. This is performed for all identified centers across all perspectives, allowing the correspondence problem to be solved by searching for convergence of rays into tight cores (bundles) whose centers indicate possible particle locations. The remainder of this subsection details LFRB 2023, highlighting enhancements introduced following LFRB 2019.
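Under the idealized assumption of an axis-aligned microlens array with an integer pitch in pixels, perspective generation reduces to strided subsampling of the raw sensor image. The following is a minimal illustrative sketch, not the LFRB decoder (real decoders must first calibrate the microlens center locations):

```python
import numpy as np

def extract_perspective(raw, pitch, u, v):
    """Return the perspective view formed by taking the pixel at
    offset (u, v) beneath every microlens. Assumes an idealized,
    axis-aligned microlens array with an integer pixel pitch."""
    return raw[u::pitch, v::pitch]

# toy raw image: 12x12 sensor with a 4-pixel microlens pitch -> 3x3 microlenses
raw = np.arange(144).reshape(12, 12)
view = extract_perspective(raw, pitch=4, u=1, v=2)
# each perspective view contains one pixel per microlens (3x3 here)
```

Varying (u, v) across the pitch then yields the full stack of perspective views used throughout LFRB.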

2.1.1 Particle identification, reprojection, and removal

LFRB 2019 incorporated the dynamic threshold segmentation (DTS; Aether Lab 2015) method to identify particle images. Starting with local intensity peaks, DTS designates neighboring pixels to a particle until a background intensity threshold is met or a nearby particle is encountered. DTS has proved very effective in distinguishing particles in noisy and dense images; however, the computational time is significant, and dim particles can remain unidentified due to the thresholding nature of the scheme. Particle centers are estimated on a subpixel grid through weighted centroiding—a computationally inexpensive scheme, but one whose accuracy can be compromised when background intensities approach those of the particle image.
LFRB 2023 introduces the option of using particle mask correlation (PMC; Takehara and Etoh 1998) as a less computationally intensive alternative to DTS. PMC performs a normalized cross-correlation of each perspective image with a predefined particle mask, typically represented by a Gaussian intensity distribution. Areas of the image with a correlation coefficient higher than a user-selected threshold are determined to be particle candidates, and the pixel locations of each correlation peak are stored accordingly. Takehara and Etoh (1998) recommend a cross-correlation threshold of 0.7 for most experimental applications. PMC and DTS offer their own distinct advantages, and the choice of identification algorithm is typically determined by the image quality. A primary benefit of PMC is that identification of particles does not depend on their intensity but rather on their adherence to a Gaussian intensity distribution, although care must be taken to filter Gaussian image noise through preprocessing. Subpixel peak estimation has been enhanced to incorporate two additional methods: 1D Gaussian fitting along the horizontal and vertical axes and 2D nonlinear least-squares Gaussian fitting. Gaussian-based schemes are often selected for subpixel fitting as the function mimics the approximately Gaussian shape of a particle’s point spread function. In a comparison of a three-point 1D Gaussian fit and a least-squares Gaussian estimator applied to circularly symmetric particle images, Marxen et al (2000) found that the latter implementation is significantly slower while resulting in similar errors, suggesting the additional complexity of a 2D Gaussian fit to be unnecessary. However, particle images that exhibit elliptical intensity patterns, for example, due to astigmatism toward the edges of the image, will require the use of a 2D fitting procedure (Nobach and Honkanen 2005).
As such, Gaussian schemes were incorporated in LFRB 2023 to provide more sophisticated options when computational expense is not of concern or substantial levels of background noise make weighted centroiding unsuitable for peak estimation. An adaptive evaluation window was incorporated to maximize the amount of data used for subpixel calculations. The size of the window varies depending on the local particle density; for example, if a particle is in an isolated region, a maximum window size of 11\(\times\)11 can be used, whereas if another peak is found within twice the window radius, the window will gradually reduce in size down to 3\(\times\)3 depending on the distance to the neighboring peak. If the image suffers from noise, the user may reduce the maximum evaluation window size to mitigate interference with subpixel measurements.
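The PMC step described above can be sketched as a zero-mean normalized cross-correlation against a Gaussian mask, followed by a three-point (log-parabola) Gaussian fit for subpixel refinement. This is an illustrative reimplementation, not the LFRB code; the mask size and σ are assumed values, while the 0.7 threshold follows Takehara and Etoh (1998):

```python
import numpy as np

def pmc_detect(image, sigma=1.2, radius=3, threshold=0.7):
    """Particle mask correlation: zero-mean normalized cross-correlation
    with a Gaussian mask, then a three-point Gaussian subpixel fit."""
    s = np.arange(-radius, radius + 1)
    X, Y = np.meshgrid(s, s)
    mask = np.exp(-(X**2 + Y**2) / (2 * sigma**2))
    m = mask - mask.mean()
    mnorm = np.sqrt((m * m).sum())
    H, W = image.shape
    cc = np.zeros((H, W))
    for i in range(radius, H - radius):
        for j in range(radius, W - radius):
            w = image[i-radius:i+radius+1, j-radius:j+radius+1]
            w = w - w.mean()
            d = np.sqrt((w * w).sum()) * mnorm
            if d > 0:
                cc[i, j] = (w * m).sum() / d

    def gauss3(l, c, r):  # three-point log-parabola peak offset in [-0.5, 0.5]
        den = 2 * (np.log(l) - 2 * np.log(c) + np.log(r))
        return (np.log(l) - np.log(r)) / den if den != 0 else 0.0

    peaks = []
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            if cc[i, j] >= threshold and cc[i, j] == cc[i-1:i+2, j-1:j+2].max():
                dy = gauss3(cc[i-1, j], cc[i, j], cc[i+1, j])
                dx = gauss3(cc[i, j-1], cc[i, j], cc[i, j+1])
                peaks.append((i + dy, j + dx))
    return peaks

# synthetic particle centered at (row, col) = (14.7, 10.3)
yy, xx = np.mgrid[0:40, 0:40]
img = np.exp(-((xx - 10.3)**2 + (yy - 14.7)**2) / (2 * 1.2**2))
peaks = pmc_detect(img)
```

The correlation surface of a Gaussian particle with a Gaussian mask is itself near-Gaussian, which is why the three-point fit recovers the subpixel offset well here.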
At a seeding density typical of PIV/PTV experiments, particle overlap will prevent 100% yield in particle identification for a given view. LFRB allows the user to set a threshold for the number of views in which a particle must be identifiable, below which a set of peaks is assumed to derive from noise and is rejected as a particle candidate. As an additional measure to combat particle overlap, LFRB employs a particle image removal technique based on the Iterative Particle Reconstruction method (Wieneke 2012). LFRB 2019 used the perspective-based median of the observed particle image as the optical transfer function (OTF). We extend this approach to account for non-uniform projection behavior across the aperture plane by fitting a Gaussian OTF to each quadrant of each perspective image. First, a lower particle density image is captured before or after the primary experiment and decoded into perspectives. Particles are identified in each view and a 2D elliptical Gaussian of the following form is fitted to each particle:
$$\begin{aligned} f(s_{i},t_{i}) = a + be^{-(c{s'}^2 + d{t'}^2)} \end{aligned}$$
(1)
where
$$\begin{aligned} \begin{aligned} s' = (s_{i}-m)\cos {\theta } + (t_{i}-n)\sin {\theta } \\ t' = -(s_{i}-m)\sin {\theta } + (t_{i}-n)\cos {\theta } \end{aligned} \end{aligned}$$
(2)
allow the function to exhibit rotation. Variables \({a, b, c, d, m, n, \theta }\) are optimization parameters pertaining to a background constant, intensity, standard deviations in s and t, center positions in s and t, and Gaussian orientation, respectively. The Gaussian coefficients are averaged and stored for each quadrant of every perspective. In the event that imaging the flow field at a lower particle density proves to be cumbersome, the code only considers particles in sparse regions. The optical transfer function is applied at the end of every iteration of LFRB whereby the triangulated particles are reprojected onto each perspective image. The reprojections are then subtracted from the original images, allowing subsequent iterations of LFRB to reconstruct a particle field gradually reducing in density. The number of LFRB iterations can vary depending on particle density, and the process terminates when fewer than five new particles are found.
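The rotated elliptical Gaussian of Eqs. 1–2 can be fitted per particle with a standard nonlinear least-squares routine. Below is a sketch using `scipy.optimize.least_squares` on a synthetic particle; the parameter ordering follows Eq. 1, while the window size, ground-truth values, and initial guesses are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

def elliptical_gaussian(p, s, t):
    """Eqs. (1)-(2): rotated elliptical Gaussian with
    parameters p = (a, b, c, d, m, n, theta)."""
    a, b, c, d, m, n, th = p
    sp = (s - m) * np.cos(th) + (t - n) * np.sin(th)
    tp = -(s - m) * np.sin(th) + (t - n) * np.cos(th)
    return a + b * np.exp(-(c * sp**2 + d * tp**2))

# synthetic particle image on a 15x15 pixel window
s, t = np.meshgrid(np.arange(15.0), np.arange(15.0))
true = np.array([0.1, 1.0, 0.3, 0.1, 7.4, 6.8, 0.4])
img = elliptical_gaussian(true, s, t)

def residual(p):
    return (elliptical_gaussian(p, s, t) - img).ravel()

# initial guess: zero background, peak intensity and integer peak
# location taken from the image, isotropic shape, no rotation
i0, j0 = np.unravel_index(np.argmax(img), img.shape)
x0 = np.array([0.0, img.max(), 0.2, 0.2, float(j0), float(i0), 0.0])
fit = least_squares(residual, x0)
```

In LFRB 2023 the recovered coefficients would then be averaged per quadrant of each perspective to form the OTF used during reprojection and removal.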

2.1.2 Ray projection and bundling

Once particle candidates are independently identified in each perspective image, LFRB aims to solve the correspondence problem of correctly matching particle images from different views by projecting rays from each perspective through the imaged volume. The ray diagram in Fig. 2c illustrates this process using rays colored according to their perspective of origin. For a single particle, these rays should converge to a point in object space representing their source. The challenge here is to sort all the rays across the extracted perspective views into ‘bundles,’ with each bundle corresponding to a particle and consisting of a maximum of one ray from each perspective image.
A prerequisite for projecting rays through the imaged volume is calibration between the object and image space. The calibration function employed in this work is a generic higher order polynomial that relates object space coordinates to image coordinates via the aperture plane. This process, called direct light field calibration (DLFC), involves imaging a calibration target at discrete depths through the imaged volume. Perspective image stacks are generated at each depth, and known 3D coordinates (x, y, z) are related to pixel coordinates (s, t) via the corresponding lens aperture positions (u, v) through a generic polynomial mapping function:
$$\begin{aligned} \begin{aligned} s = F(x,y,z,u,v,s_{c}) \\ t = F(x,y,z,u,v,t_{c}) \end{aligned} \end{aligned}$$
(3)
where \(s_{c}\) and \(t_{c}\) are coefficients solved in a nonlinear least squares fashion. For more details on this process, refer to Hall et al (2018).
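Once a monomial basis is fixed, solving for the coefficients of a generic polynomial mapping like Eq. 3 is a linear least-squares problem. A sketch with a degree-2 basis in (x, y, z, u, v); the basis order and the synthetic ground-truth mapping are assumptions for illustration and do not reproduce the DLFC polynomial of Hall et al (2018):

```python
import numpy as np
from itertools import product

def design_matrix(x, y, z, u, v, order=2):
    """Monomial basis x^i y^j z^k u^l v^m with total degree <= order."""
    cols = [x**i * y**j * z**k * u**l * v**m
            for i, j, k, l, m in product(range(order + 1), repeat=5)
            if i + j + k + l + m <= order]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
x, y, z, u, v = rng.uniform(-1, 1, size=(5, 500))

# hypothetical ground-truth mapping (itself degree 2, so the fit is exact)
s_true = 2.0 + 0.5 * x - 0.3 * y + 0.1 * x * z + 0.05 * u * z

A = design_matrix(x, y, z, u, v)
s_c, *_ = np.linalg.lstsq(A, s_true, rcond=None)
s_fit = A @ s_c
```

An identical fit with the observed t-coordinates yields \(t_c\); in practice the coefficients are obtained from target images at discrete depths and a nonlinear solver may be preferred for higher-order terms.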
The calibration polynomial can subsequently be applied to obtain a two-plane parameterization of a particle’s ray in object space; using the particle image centers (s, t) and the corresponding perspective image location (u, v), the positions of the particle ray (x, y) at two prescribed z-planes, denoted as z1 and z2 in Fig. 2c, are calculated. Determining the optimal placement of z1 and z2 is left for future work, although an obvious limit comprises the extremes of the calibrated domain. Knowing the intersection points of a ray in two planes, a parametric equation is formed to represent the ray’s path through space (Levoy 2006). If \(N_{uv}\) represents the number of perspective images in which a particle is successfully identified, there will exist \(N_{uv}\) parametric equations to represent that particle’s collection of rays intersecting discretized positions across the main lens aperture. Ray bundling refers to the process of sorting through each ray equation corresponding to every identified particle image across the stack of generated perspectives in order to categorize rays according to their origin, whereby rays of a single particle will converge to a compact region. The computation of the ray equations matches the process followed by Clifford et al (2019) with the exception that the two-plane parameterization in that work was performed directly in image space between the aperture and microlens planes. A particle ray’s path through image space is readily available from knowledge of its subpixel center position in a perspective image; hence, calibration was not applied at this stage in LFRB 2019. Synthetic tests demonstrated that bundling in image space failed to account for asymmetric distortions across the aperture plane, leading to a lack of ray convergence.
To address this issue, LFRB 2023 implements ray bundling in object space by projecting rays through the volume, where the calibration polynomial accounts for any encountered optical aberrations, alleviating the potential for distortion-related errors and increasing robustness in the bundling procedure. This approach is particularly relevant to R3DV experiments, where the current setup involves the camera viewing the scene through a 1-in.-thick acrylic interface that is unlikely to be perfectly perpendicular to the aperture, through approximately 2 ft of water introducing a refractive index change, and via a 45° inclined mirror. All of these factors have the capacity to introduce distortion-related errors.
Criteria for determining the eligibility of a candidate ray in a bundle include the proximity of the corresponding line of closest approach endpoint to the estimated bundle center position. The limit of this allowable distance is informed by approximations of the uncertainty of plenoptic imaging, as derived by Deem et al (2016). To provide a buffer from which experimental imprecisions may stray from theoretical calculations, this estimate is multiplied by a constant, referred to as core radius and core elongation factors for in-plane and out-of-plane uncertainties, respectively. If either of these limits is exceeded, the candidate ray is considered to have been incorrectly indexed and is subsequently removed from the bundle. To prevent rays from erroneously appearing in multiple bundles, any rays stored in a bundle that passes the aforementioned filters are removed from the pool of candidates. The sorting procedure continues through \((u_1,v_1)\) such that every ray is considered a reference ray before proceeding to consider any remaining ‘unbundled’ rays in subsequent perspective images as reference rays. LFRB 2019 considered all available rays in each perspective as candidates for bundling to a reference ray, leading to prohibitively expensive computations at higher imaging densities. LFRB 2023 addresses this by incorporating an inclusion constraint whereby only rays within a reasonable proximity are considered, thus providing a more efficient means to sort through rays. This search region is limited based on the maximum expected level of parallax, calculated by projecting points from the nearest and farthest depths of the volume onto the two most extreme perspectives before multiplying by a user-defined buffer.
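The core geometric test in bundling is the line of closest approach between two object-space rays. A minimal sketch of that computation follows (the core radius and elongation acceptance criteria described above would then be applied to the resulting midpoint and separation; this is illustrative, not the LFRB implementation):

```python
import numpy as np

def closest_approach(a1, d1, a2, d2):
    """Closest points between rays p1 = a1 + t*d1 and p2 = a2 + s*d2.
    Returns their midpoint (a candidate particle location) and the
    separation distance between the two endpoints."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    r = a1 - a2
    b = d1 @ d2
    denom = 1.0 - b * b
    if denom < 1e-12:        # near-parallel rays: no unique solution
        t = 0.0
    else:
        t = (b * (r @ d2) - (r @ d1)) / denom
    s = r @ d2 + t * b
    p1 = a1 + t * d1
    p2 = a2 + s * d2
    return 0.5 * (p1 + p2), np.linalg.norm(p1 - p2)

# two rays that intersect at (1, 2, 3): separation should be zero
mid, sep = closest_approach(np.array([0.0, 0.0, 0.0]), np.array([1.0, 2.0, 3.0]),
                            np.array([1.0, 2.0, 0.0]), np.array([0.0, 0.0, 1.0]))
```

In a bundling pass, each candidate ray's closest-approach endpoint would be tested against the current bundle center using the in-plane and out-of-plane limits discussed above.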

2.1.3 Triangulation

DLFC is applied to each bundle to calculate the corresponding 3D particle position that minimizes the sum of the distances between the coordinate of the particle image \((s_p, t_p)\) detected in each perspective and the reprojected particle positions:
$$\begin{aligned} \min \sqrt{(s_p - F(x,y,z,u,v,s_{c}))^{2} + (t_p - F(x,y,z,u,v,t_{c}))^{2}}. \end{aligned}$$
(4)
We use an iterative approach to solving equation 4 similar to Hall et al (2019) where rays exhibiting reprojection errors exceeding three standard deviations above the mean are removed before reapplying equation 4 to generate a refined particle position estimate. The minimization is repeated until either no rays are removed or ten iterations have been reached (note that the iterative triangulation discussed here is a sub-procedure within the greater LFRB loop that iterates through the steps presented in Sect. 2.1). The error analysis of synthetic data found that the extension of triangulation to an iterative scheme was critical to improving particle position accuracy compared to LFRB 2019 which did not perform any further filtering at this stage. This effect became particularly appreciable at higher seeding densities, where peak estimation is often compromised due to interference from neighboring particle images.
Following completion of all iterations of LFRB, a final filtering step removes particles with reprojection errors higher than the sum of the mean and three standard deviations of the reprojection error from the first iteration of LFRB; synthetic testing revealed that higher errors were indicative of duplicate particles arising from the bundling of the residuals of particle images incompletely removed by the technique described in Sect. 2.1.1. The following subsection describes the tracking procedure.
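The refit-and-reject loop of the triangulation step can be sketched end to end. To keep the example self-contained, a toy linear projection model s = x + u·z, t = y + v·z stands in for the DLFC polynomial F; the perspective grid, noise level, and injected outlier are all illustrative assumptions:

```python
import numpy as np

def triangulate(uv, s_obs, t_obs):
    """Least-squares 3D position under a toy linear projection model
    s = x + u*z, t = y + v*z (a stand-in for the DLFC polynomial)."""
    A, b = [], []
    for (u, v), s, t in zip(uv, s_obs, t_obs):
        A.append([1.0, 0.0, u]); b.append(s)
        A.append([0.0, 1.0, v]); b.append(t)
    p, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return p

def iterative_triangulate(uv, s_obs, t_obs, max_iter=10):
    """Refit after removing rays whose reprojection error exceeds
    mean + 3*std, mirroring the rejection rule described above."""
    keep = np.arange(len(uv))
    for _ in range(max_iter):
        p = triangulate(uv[keep], s_obs[keep], t_obs[keep])
        ds = s_obs[keep] - (p[0] + uv[keep, 0] * p[2])
        dt = t_obs[keep] - (p[1] + uv[keep, 1] * p[2])
        err = np.hypot(ds, dt)
        good = err <= err.mean() + 3.0 * err.std()
        if good.all():
            break
        keep = keep[good]
    return p, keep

rng = np.random.default_rng(1)
uv = np.array([(u, v) for u in range(-2, 3) for v in range(-2, 3)], float)
x, y, z = 1.0, -0.5, 2.0
s_obs = x + uv[:, 0] * z + rng.uniform(-1e-3, 1e-3, len(uv))
t_obs = y + uv[:, 1] * z + rng.uniform(-1e-3, 1e-3, len(uv))
s_obs[7] += 1.0              # one grossly mismatched ray
pos, kept = iterative_triangulate(uv, s_obs, t_obs)
```

The mismatched ray is rejected on the first pass and the refined position is recovered from the remaining rays.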

2.2 Perspective-based particle tracking

While the standard triangulate-then-track approach commonly used in tomo-PTV has been shown to work sufficiently well in plenoptic imaging experiments at lower particle densities (Fischer et al 2022), as alluded to in the previous section, the unique imaging properties of the plenoptic camera necessitate an algorithm specifically designed to accommodate higher depth uncertainty relative to the lateral directions. Our PTV technique is illustrated in Fig. 3. The algorithm’s decisions are informed by statistical measures, taking advantage of the multiviewpoint imaging capabilities of the plenoptic camera to track each 2D perspective image while minimizing a cost function comprising both 2D and 3D trajectories before rebuilding the 3D tracks using identifiers assigned during LFRB to couple 3D particles with their corresponding 2D projections. A given particle is linked through 3D space and time to the particle whose projection it is tracked to across the greatest number of perspective views. As a particle’s out-of-plane motion is not considered during tracking, the effects of out-of-plane position estimation errors are suppressed, resulting in an increased likelihood of correct temporal matching. 3D tracks can then be fit to a smooth function that better represents reasonable particle motion. Details of the 2D tracking scheme and the rebuilding of tracks in object space are described in the remainder of this section.

2.2.1 Track initialization

The four-frame best estimate-enhanced track initialization technique (4BE-ETI; Clark et al 2019), proven to be particularly effective in turbulent flow applications, was adopted for plenoptic-PTV. For a given particle in a perspective image at an initial time \(\tau\), 4BE-ETI considers all particles within a search window at \(\tau + \Delta \tau\) as potential candidates before predicting positions at subsequent time points by assuming constant velocity and constant acceleration for \(\tau + 2\Delta \tau\) and \(\tau + 3\Delta \tau\), respectively. The four-frame path whose candidate particle is closest to its predicted position at \(\tau + 3\Delta \tau\) is selected by minimizing a cost function, ensuring a smooth track is initialized. 4BE-ETI offers a distinct advantage over employing neighborhood coherency-based techniques; if particle trajectories vary through the depth of an imaged 3D volume, their 2D projections may cross one another, rendering the application of neighborhood coherency in perspective images ineffective. Alterations to the algorithm were made to allow for an initial velocity prediction either defined globally by the user or via local interpolation from PIV-obtained vector fields. Moreover, while tracking is performed in 2D perspective views, the particle image’s corresponding 3D positions are utilized via an additional search window that imposes limits on the maximum in-plane displacement in 3D space. The cost function is also reformed to consider not just the distance between predicted (\(p'\)) and candidate particle positions (\(p_i\)) in 2D, but also the standard deviation (\(\sigma\)) of corresponding in-plane velocities in 3D (\(\vec {u_i}, \vec {v_i}\)):
$$\begin{aligned} J_i = \left\Vert p_i-p' \right\Vert + \sigma _{\vec {u_i}} + \sigma _{\vec {v_i}}. \end{aligned}$$
(5)
Note that the w-velocity component is not explicitly considered during tracking. This is because out-of-plane measurements from plenoptic images may exhibit significant random uncertainty, meaning that imposing bounds could artificially dampen out-of-plane motion. Nevertheless, due to the camera’s conical field of view, the additional 3D-2C information aids in reducing the likelihood of spurious track initialization, particularly in cases of particle track projections crossing each other in 2D. The incorporation of the out-of-plane velocity into the cost function, possibly in a weighted manner, as well as limits on out-of-plane displacement based on the estimated out-of-plane uncertainty, is left for future work.
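The four-frame initialization can be sketched as follows. For clarity, this toy version uses only the 2D distance term of the cost (the in-plane 3D velocity deviation terms of Eq. 5 are omitted), and the particle fields and search radius are made-up illustrative values:

```python
import numpy as np

def nearest_within(pts, target, radius):
    """Index of the particle in pts closest to target, or None."""
    d = np.linalg.norm(pts - target, axis=1)
    k = int(np.argmin(d))
    return k if d[k] <= radius else None

def init_track_4be(p0, frames, radius):
    """Four-frame best-estimate initialization: try every candidate in
    frame 1, extrapolate with constant velocity to frame 2 and constant
    acceleration to frame 3, and keep the smoothest chain."""
    f1, f2, f3 = frames
    best, best_cost = None, np.inf
    for i1, p1 in enumerate(f1):
        if np.linalg.norm(p1 - p0) > radius:
            continue
        i2 = nearest_within(f2, 2 * p1 - p0, radius)   # constant velocity
        if i2 is None:
            continue
        p2 = f2[i2]
        v = p2 - p1
        a = v - (p1 - p0)
        pred3 = p2 + v + a                             # constant acceleration
        i3 = nearest_within(f3, pred3, radius)
        if i3 is None:
            continue
        cost = np.linalg.norm(f3[i3] - pred3)
        if cost < best_cost:
            best, best_cost = (i1, i2, i3), cost
    return best

p0 = np.array([0.0, 0.0])
frames = [np.array([[1.0, 0.5], [10.0, 10.0]]),        # tau + dt
          np.array([[2.2, 1.1], [12.0, 14.0]]),        # tau + 2dt
          np.array([[3.6, 1.8], [9.0, 9.0]])]          # tau + 3dt
track = init_track_4be(p0, frames, radius=2.0)
```

The smoothly accelerating chain is selected over the decoy particles, mirroring how 4BE-ETI rejects spurious four-frame paths.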

2.2.2 Track progression

To propagate trajectory estimation over time, initialized tracks are fitted with Kalman filters which are evaluated at subsequent time points. Any particles located within a search box around the prediction point are considered potential candidates. The extent of the search region is typically tailored to accommodate uncertainties in position estimation and prediction errors. If no particles are found within the search box, the track is terminated. Similar to the procedure adopted in track initialization, additional limits are placed based on the maximum expected in-plane displacement in 3D space. To select the most suitable candidate, a cost function is used that penalizes the difference between the predicted position and the candidate particles in 2D. The candidate with the smallest cost is selected, and the Kalman filter state is subsequently updated and the track continued. Future work will explore parallel Kalman filter-based track progression using the particle projection’s corresponding 3D locations in order to alleviate the requirement to set displacement limits in the 3D domain as this may erroneously restrict the dynamic velocity range.
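The predict/gate/update cycle described above can be illustrated with a deliberately simplified one-dimensional constant-velocity Kalman filter; the class, noise parameters, and velocity-correction heuristic below are hypothetical stand-ins for the paper's actual filter, not a reproduction of it:

```python
class KalmanTrack1D:
    """Minimal constant-velocity Kalman filter for one image coordinate."""
    def __init__(self, x0, v0, p=1.0, q=0.01, r=0.25):
        self.x, self.v = x0, v0            # state: position and velocity
        self.p, self.q, self.r = p, q, r   # variance, process and measurement noise

    def predict(self):
        self.x += self.v                   # advance one frame at constant velocity
        self.p += self.q
        return self.x

    def update(self, z):
        k = self.p / (self.p + self.r)     # Kalman gain
        innovation = z - self.x
        self.x += k * innovation
        self.v += 0.5 * k * innovation     # crude velocity correction (assumption)
        self.p *= (1.0 - k)

def step_track(track, candidates, search_radius):
    """One progression step: predict, gate candidates within a search box,
    select the closest to the prediction, update; None terminates the track."""
    pred = track.predict()
    gated = [c for c in candidates if abs(c - pred) <= search_radius]
    if not gated:
        return None                        # no candidate found: terminate
    z = min(gated, key=lambda c: abs(c - pred))
    track.update(z)
    return z
```

In the actual algorithm this cycle runs per perspective view in 2D, with the additional 3D in-plane displacement limits described above applied during gating.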

2.2.3 Track bundling

Once the projection of every triangulated particle is tracked in 2D perspective images, particle correspondences are statistically compiled to reconstruct tracks in 3D space. Referring to Fig. 3, consider the scenario where the 2D projection of particle \(a_{\tau = 1}\) at time \(\tau = 1\) is tracked to particles \(b_{\tau = 2}\) in \((u_{-4}, v_0)\) and \(c_{\tau = 2}\) in \((u_0, v_0)\) and \((u_4, v_0)\). In this case, the majority of perspective-based tracks indicate \(a_{\tau = 1}\) corresponds to \(c_{\tau = 2}\); hence, a track is formed in 3D space to connect these two particles through time. Similarly, if the projection of \(c_{\tau = 2}\) is tracked to \(d_{\tau = 3}\) in \((u_{-4}, v_0)\) and \((u_0, v_0)\), as well as \(e_{\tau = 3}\) in \((u_4, v_0)\), particle \(c_{\tau = 2}\) is linked to \(d_{\tau = 3}\). Consequently, \(a_{\tau = 1}\), \(c_{\tau = 2}\), and \(d_{\tau = 3}\) form a 3D track that represents the motion of the particle from time \({\tau = 1}\) to \({\tau = 3}\). This process is repeated until 3D tracks are reconstructed for all particles using their corresponding 2D linkages throughout the entire time domain.
Since 2D tracking is conducted independently for each time series of perspective images, situations may arise where multiple tracks lead to particle images associated with the same 3D particle. This can occur due to factors such as image noise, imperfections in the tracking algorithm, and erroneous bundling in LFRB. To mitigate this phenomenon, priority is given to the track that encompasses the particle in question across the largest number of perspectives, while the remaining tracks are terminated.
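The majority-vote bundling and duplicate resolution described above might be sketched as follows; the data structures (per-perspective link lists and a per-track `n_views` count) are hypothetical:

```python
from collections import Counter

def bundle_links(links):
    """Majority vote over per-perspective 2D links: `links` lists, for one 3D
    particle, the particle its projection was tracked to in each perspective
    view. The most commonly linked particle wins. For the Fig. 3 example,
    links ['b', 'c', 'c'] from three views bundle the particle to 'c'."""
    return Counter(links).most_common(1)[0][0]

def resolve_duplicates(tracks):
    """If several tracks lead to particle images of the same 3D particle,
    keep the one supported by the most perspective views; the rest are
    terminated."""
    return max(tracks, key=lambda t: t["n_views"])
```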

2.2.4 Post-processing

Tracks are post-processed using procedures commonly adopted in 3D-PTV. Following Schanz et al (2016), tracks shorter than four consecutive time frames are assumed to be unreliable and potentially consist of noise and are deleted accordingly. The particle positions are then fitted to three 1D B-splines that better represent the smooth motion of particles in most subsonic flow field experiments. The order of the B-splines is left as an option to the user, although with the out-of-plane triangulation error of plenoptic measurements known to be higher than that of the in-plane, it is highly recommended to perform more aggressive fitting of the out-of-plane component to mitigate unrealistic fluctuations through time. The results generated herein applied a fourth-order B-spline to the in-plane components while B-splines of order equal to the track length were employed in the out-of-plane domain. A further filtering step removes erroneous measurements by applying the universal outlier detection scheme of Westerweel and Scarano (2005), adapted for scattered data by Patel et al (2018) and Janke et al (2020). Finally, to provide a continuous representation of the flow field and simplify the computation of derivative quantities such as vorticity, velocity vectors are interpolated onto an Eulerian grid using Adaptive Gaussian Windowing (AGW; Agüí and Jiménez 1987).
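As an illustration of the final gridding step, a minimal (unoptimized) sketch of Adaptive Gaussian Windowing is given below: each Eulerian grid node takes a Gaussian-weighted mean of the scattered Lagrangian vectors. In practice the sum would be truncated to a local neighborhood, and the standard deviation would be set as described in Sect. 3.3:

```python
import math

def agw_interpolate(grid_points, particles, velocities, sigma):
    """Adaptive Gaussian Windowing: weight each scattered 3C velocity vector
    by a Gaussian of its distance to the grid node, then normalize."""
    out = []
    for g in grid_points:
        wsum, vsum = 0.0, [0.0, 0.0, 0.0]
        for p, v in zip(particles, velocities):
            d2 = sum((pi - gi) ** 2 for pi, gi in zip(p, g))
            w = math.exp(-d2 / (2.0 * sigma ** 2))
            wsum += w
            for i in range(3):
                vsum[i] += w * v[i]
        out.append([vi / wsum for vi in vsum])   # assumes particles nearby
    return out
```

Because the weights decay smoothly with distance, AGW acts as a low-pass filter on the scattered data, a point that becomes relevant when interpreting the bias errors discussed later.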

3 Benchmark data

Four sets of data were used to validate the algorithm presented herein: three synthetic and one experimental, with results compared to those of plenoptic-PIV. Fahringer et al (2015), Johnson et al (2017), and Fahringer and Thurow (2018) provide detailed descriptions of the plenoptic-PIV algorithm employed herein. Briefly, plenoptic-PIV adapts the multiplicative algebraic reconstruction technique (MART) for particle field reconstruction, aiming to minimize the residual between perspective images and reprojections of voxel intensities within the volume onto the image plane. Subsequently, cross-correlation is applied to subregions in the reconstructed particle field to estimate displacements through time via an iterative multi-grid scheme (Scarano and Poelma 2009).
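The multiplicative update at the heart of MART can be illustrated as follows. The sparse weight structure and function signature are simplifications for illustration; the exponent includes the relaxation parameter \(\mu\) of the kind listed in the processing parameters:

```python
def mart_update(voxels, weights, pixels, mu=0.5):
    """One MART iteration (sketch): each pixel's measured intensity is
    compared against its reprojection, and every voxel along that pixel's
    line of sight is scaled multiplicatively.
    voxels  : list of voxel intensities
    weights : weights[j] is a list of (voxel index, weight) pairs giving the
              contribution of each voxel to pixel j
    pixels  : measured pixel intensities
    mu      : relaxation parameter controlling the convergence rate"""
    for j, I_meas in enumerate(pixels):
        proj = sum(w * voxels[i] for i, w in weights[j])  # reprojection
        if proj <= 0.0:
            continue
        ratio = I_meas / proj
        for i, w in weights[j]:
            voxels[i] *= ratio ** (mu * w)
    return voxels
```

Smaller values of \(\mu\) damp each multiplicative correction, which is why fewer-iteration runs in Table 1 pair with larger relaxation parameters.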

3.1 Synthetic datasets

A synthetic plenoptic image generator detailed in Fahringer et al (2015) was used to simulate images of particles captured by a 6600\(\times\)4400 pixel (29 MP) Imperx Bobcat B6640 camera with a 471\(\times\)362 hexagonally packed microlens array placed one microlens focal length (308 μm) ahead of the sensor. The image generator models the optical elements (main lens and microlenses) by applying the thin lens assumption as well as affine optics, a plenoptic-adapted extension of conventional linear optics pioneered by Georgiev and Intwala (2006), while ray-transfer matrices propagate thousands of rays from each particle through space. Effects of diffraction are replicated by applying a Gaussian-distributed random perturbation to the spatial coordinates of light rays at the microlens and sensor planes, with a standard deviation matching the diffraction-limited spot size. The microlens diameter and pixel pitch were 77 μm and 5.5 μm, respectively, resulting in each microlens spanning 14 pixels in diameter. The magnification and focal length of the main lens for the single particle (SP) and varying particle density (VPD) datasets were set to \(-\)0.5 and 85 mm, respectively, while the synthetic vortex ring (SVR) was generated at \(-\)0.6 and 60 mm. Note that the former two datasets were first used in Clifford et al (2019) to compare an earlier implementation of LFRB to MART, where LFRB demonstrated a 3\(\times\) improvement in accuracy at particle densities lower than 0.02 ppμ.
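The ray-transfer (ABCD) matrix formalism used by the image generator can be illustrated with two basic elements, a thin lens and free-space propagation. This is textbook paraxial optics rather than the generator's full affine-optics model:

```python
def thin_lens(f):
    # Ray-transfer (ABCD) matrix of a thin lens of focal length f
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def propagate(d):
    # Ray-transfer matrix for free-space propagation over distance d
    return [[1.0, d], [0.0, 1.0]]

def apply(M, ray):
    """Apply an ABCD matrix to a paraxial ray (height y, angle theta)."""
    y, th = ray
    return (M[0][0] * y + M[0][1] * th, M[1][0] * y + M[1][1] * th)
```

As a sanity check, a ray entering parallel to the axis through a lens of focal length f and then propagating a distance f should cross the axis at the focal point.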

3.1.1 Single particle

The first dataset consists of 1000 images of single 10-μm-diameter particles positioned randomly within a 40\(\times\)40\(\times\)40 mm volume centered around the focal plane. The use of single-particle images allows for the assessment of LFRB and MART's theoretical upper limits of accuracy. Note that the domain slightly exceeds the theoretical depth of field of the perspective images (36.4 mm), outside of which particle localization accuracy is expected to be compromised. This replicates the difficulty involved with precisely matching the illumination to the optical depth of field in an experimental setting. The particle size was chosen such that the diameter of its projection varied between 2 and 4 pixels, with particles gradually enlarging as the distance to the focal plane increases. This range is widely accepted as the optimal particle image diameter for PIV applications employing 3-point peak fitting routines (Raffel et al 2018).

3.1.2 Varying particle density

An investigation of the effects of particle overlap on the reconstruction algorithm’s localization accuracy and effective yield was conducted by generating 1000 images of particle densities ranging from 0.001 to 0.2 particles per microlens (ppμ). 10-μm-diameter particles were randomly distributed within a 40\(\times\)40\(\times\)40 mm volume. While the field of view of the simulated camera at the nominal focal plane was 72.6\(\times\)48.4 mm, the measurement volume was constrained to a smaller domain in order to reduce computational demand. Note that due to the sub-sampling involved with the generation of perspective images, ppμ is a direct analogue to ‘particles per pixel’ (ppp) commonly quoted among PIV literature as a measure of imaging density.

3.1.3 Synthetic vortex ring

Images of a 3D Gaussian vortex ring were synthesized to compare the combined efficacy of LFRB/plenoptic-PTV to MART/plenoptic-PIV in reproducing 3D/3C velocity fields. 50-μm-diameter particles seeded a volume of 60\(\times\)40\(\times\)30 mm, and particles were propagated through time with the use of analytical vortex ring equations provided in Fahringer et al (2015). As in the previous datasets, the measurement domain was configured to be marginally longer than the theoretical depth of field of 24 mm. The particle density was held constant at 0.05 ppμ, previously determined by Fahringer and Thurow (2018) as optimal for plenoptic-adapted MART. The balance between particle spacing and average displacement is perhaps the primary factor influencing the effectiveness with which a tracking algorithm can correctly build correspondences over time. Accordingly, Malik et al (1993) introduced the mean particle spacing to displacement ratio, hereafter denoted by \(\zeta\), where larger values of \(\zeta\) indicate increasingly favorable conditions for tracking. To establish appropriate guidelines for the plenoptic-PTV algorithm, the circulation strength of the vortex ring was tuned to generate five sets of 100 images, resulting in mean displacements varying from 1.9 to 9.4 mm with corresponding \(\zeta\) ranging between 2.1 and 10.5. Note that \(\zeta = 3.5\) was deemed the most favorable for cross-correlation, with displacements in the vicinity of the vortex ring ranging mostly from 4 to 8 voxels, and thus was selected for processing with plenoptic-PIV.
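As an illustration, \(\zeta\) might be estimated as below, where the mean spacing is approximated as the cube root of the volume per particle; this approximation is an assumption made here for the sketch, with the formal definition given by Malik et al (1993):

```python
def spacing_to_displacement(n_particles, volume, mean_disp):
    """Estimate the mean particle spacing to displacement ratio, zeta.
    Spacing is approximated as (volume per particle)**(1/3)."""
    spacing = (volume / n_particles) ** (1.0 / 3.0)
    return spacing / mean_disp
```

For a fixed seeding density, \(\zeta\) is therefore controlled entirely by the mean displacement, which is why tuning the vortex ring's circulation strength sweeps \(\zeta\).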

3.2 Experimental datasets

Time-resolved R3DV measurements capturing the flow field around a rotor were acquired to compare the performance of plenoptic-PIV and plenoptic-PTV in an experimental setting involving significant out-of-plane motion.

3.2.1 Facility

The R3DV facility, illustrated in Fig. 4, consists of a \(1.2\ {\rm m} \times 1.2\ {\rm m} \times 1.2\ {\rm m}\) acrylic water tank that houses a rotor of span 240 mm and chord length 48 mm. The rotor is mounted on a hub secured to a central shaft semi-submerged in the tank. On the hub is a mirror inclined at 45°, relaying the field of view surrounding the rotor to a camera placed underneath the tank. Rotation is achieved by coupling a NEMA 34 stepper motor to the central shaft via a timing pulley and belt configured at a 1:6 gear ratio. A HB6M rotary encoder mounted at the top of the shaft measures the wing's rotation at a resolution of 5000 pulses per revolution. The tank is surrounded by a T-slotted aluminum frame, anchored onto two concrete walls to dampen vibrations, designed to secure the motor, shaft, and imaging assembly in place. For more details on this facility, refer to Gururaj et al (2021).

3.2.2 R3DV experiments

R3DV employs a high-speed plenoptic camera built in-house (Tan et al 2019). The imaging system consists of a 4096\(\times\)2304 (9.4 megapixel) Phantom VEO 4K 990 L camera mounted to a relay lens assembly housing a microlens array of the same specification as that described for the synthetic camera, all attached to a Nikon main lens of focal length 200 mm. Note that the significantly longer physical distance between the camera sensor and microlens array relative to the microlens focal length of 308 μm is achieved here through 70 mm and 50 mm focal length lenses coupled to form a relay system. A magnification of \(-\)0.45 provided a field of view of \(61.1\times 34.4\ {\rm mm}\) at the nominal focal plane, which approximately coincided with the wing's radius of gyration, along with a 68.1 mm theoretical depth of field. The experiments were carried out at a Reynolds number of 850, with the wing set to an angle of attack of 45°. Illumination was provided by a LaVision LED-Flashlight 300, an array of 72 high-power LEDs emitting blue light with peak intensity at 445 nm. The illuminated volume was centered close to the radius of gyration and synchronized with the camera to trigger at a rate of 300 frames per second. Fluorescent PMMA rhodamine particles of diameter 63–75 μm were used as flow tracers at a seeding density of approximately 0.03 ppμ. The absorption and emission spectra of the particles span 430–565 nm and 590–625 nm, respectively. A Tiffen Orange 21 filter was installed to enhance the camera's signal-to-noise ratio. A rotational implementation of the volumetric calibration technique presented in Sect. 2.1 was developed and described in Gururaj et al (2021). Briefly, a calibration plate comprising circular markers evenly spaced 5 mm apart was attached to a mounting wing using a series of magnets, cones, and conical holes arranged so that the plate could be traversed in 8 mm increments through the measurement volume.
Conventional DLFC is performed at a given azimuth angle, while rotation is accommodated by repeating the calibration at discrete azimuth angles and fitting a Fourier series to the calibration coefficients to build a relationship between the wing's azimuth angle and the DLFC mapping function. The Fourier fit can subsequently be interrogated to generate calibration functions at intermediate azimuth angles. In total, the calibrated volume encompassed 80 mm in the depth direction throughout an 80° azimuthal sweep.

3.2.3 Stereo-PIV experiments

To validate results from R3DV experiments, stereo-PIV was conducted in the fixed frame of reference at the same Reynolds number and angle of attack, capturing 2D/3C velocity fields at two azimuth angles, \(\psi\), of \(\approx\)8° and \(\approx\)12°. To achieve this, two 2560\(\times\)1600 (4 megapixel) Phantom VEO 640 cameras were placed roughly 25° apart, both attached to 18–105 mm variable focal length lenses and configured to capture images at the instant the rotor became perpendicular to the illumination plane and tank wall, as illustrated in Fig. 5. The starting position of the rotor, shown in dark gray and light gray for \(\psi = 8^{\circ }\) and \(\psi = 12^{\circ }\), respectively, was adjusted accordingly to ensure that the desired angles were obtained at the moment of capture. A Photonics dual-head Nd:YLF laser (527 nm wavelength and 30 mJ/pulse at 1 kHz) illuminated the flow field at the approximate radius of gyration of the wing with a sheet thickness of approximately 2 mm. The volume was seeded with the same particles used in the R3DV experiments at a density of approximately 0.0025 ppp resulting in an average of 10 particles within a 64\(\times\)64 correlation window. Both cameras were fitted with Tiffen Orange 21 filters and operated at a magnification of approximately \(-\)0.19 resulting in a 135\(\times\)84 mm field of view. Camera 2 was equipped with a Scheimpflug adaptor to align its focal plane with respect to the illumination plane. Image acquisition and laser synchronization were achieved with LaVision’s DaVis software with a time interval of 0.04 s between successive images in double-frame mode. Stereo calibration was performed by suspending a two-plane LaVision calibration plate at the measurement plane.

3.3 Processing parameters

Table 1 provides the parameters applied to process the datasets used in this study. Selection of volume resolution for MART was based on ensuring a voxel size smaller than the projected diameter of a microlens. The relaxation parameter controls the convergence rate of MART, with smaller values generally requiring a higher number of iterations.
PIV window sizes were adjusted to the particle density and displacement in each case, ensuring that the largest window exceeded 4\(\times\) the maximum expected displacement while the smallest window encompassed 5–10 particles. Stereo-PIV validation experiments were conducted with window sizes of 96 for the first pass and 64 for four successive passes, with an overlap of 75\(\%\).
All cases processed by LFRB applied the weighted centroiding method for subpixel center estimation. Synthetic testing found negligible differences in accuracy between center of mass and Gaussian schemes. This can be explained by the hexagonal-to-rectangular interpolation involved in generating perspective images from a hexagonally packed microlens array, which produces particle images dissimilar to an ideal Gaussian. The maximum subpixel evaluation window size for R3DV was reduced relative to the synthetic datasets due to noise considerations. Core radius and core elongation factors were chosen empirically to allow for a margin of discrepancy between theoretical and practical calculations of plenoptic uncertainty. Similarly, the search region factor was selected to minimize computational cost without incorrectly excluding ray candidates for bundling. The bundling view fraction threshold for R3DV was decreased relative to the synthetic datasets due to the lower quality of the experimental perspective images. Note that erroneous triangulation of duplicate particles can easily be avoided by prudent selection of thresholds for particle identification and ray bundling.
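For reference, a minimal sketch of weighted (intensity-weighted) centroiding over a particle image window is shown below; the peak detection and window extraction that precede it are omitted, and the window is assumed to contain nonzero signal:

```python
def weighted_centroid(window):
    """Intensity-weighted centroid of a 2D window (list of rows); returns
    subpixel (row, col) coordinates relative to the window origin."""
    total = rsum = csum = 0.0
    for r, row in enumerate(window):
        for c, I in enumerate(row):
            total += I
            rsum += r * I
            csum += c * I
    return rsum / total, csum / total
```

Unlike 3-point Gaussian fitting, this scheme makes no assumption about the particle image shape, which is consistent with the non-Gaussian profiles produced by the hexagonal-to-rectangular interpolation.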
The initial search window factor for PTV refers to the constant by which the maximum expected velocity was multiplied to calculate the size of the search region for the initialization of tracks. Due to the prevalence of noisy measurements in experimental data, the search windows for R3DV, relative to the maximum expected velocity, were reduced in size with the expectation that while the quantity of tracks obtained may decrease, the average track quality should improve. The search region for the propagation of the track after initialization was determined by multiplying the initial search window by a constant called the search window factor. The standard deviation for AGW was set equal to the radius of the smallest PIV window to allow comparative assessments of plenoptic-PIV and PTV on an Eulerian grid.
Table 1
Processing parameters

Processing parameters | SP/VPD | SVR | R3DV
MART
Volume size (mm) | 42\(\times\)42\(\times\)46 | 48\(\times\)36\(\times\)30 | 50\(\times\)40\(\times\)80
Volume resolution (vox/mm) | 7.7 | 8 | 8
Number of iterations | 5 | 10 | 15
Relaxation parameter | 1 | 0.5 | 0.5
Plenoptic-PIV
Window size (vox) | – | 64, 48, 32, 32 | 64, 48, 40, 40
Window overlap | – | 75\(\%\) | 75\(\%\)
LFRB
Subpixel estimation scheme | Weighted centroid | Weighted centroid | Weighted centroid
Subpixel window range (px) | 11\(\times\)11 to 3\(\times\)3 | 11\(\times\)11 to 3\(\times\)3 | 5\(\times\)5 to 3\(\times\)3
Core radius factor | 1.25 | 1.25 | 1.25
Core elongation factor | 1.25 | 1.75 | 1.75
Search region factor | 1.25 | 1.25 | 1.25
View fraction (%) | 50 | 40 | 15
Plenoptic-PTV
Initial search window factor | – | 1.3 | 1.2
Search window factor | – | 0.3 | 0.3
AGW standard deviation (vox) | – | 16\(\times\)16\(\times\)16 | 20\(\times\)20\(\times\)20

4 Results and discussion

4.1 Synthetic validation

4.1.1 Single particle position uncertainty

Figure 6 plots the absolute 3D error associated with the SP dataset as a function of depth relative to the nominal focal plane for MART, LFRB 2019, and LFRB 2023. Improvements in position estimation provided by LFRB 2023 are prominent throughout the depth domain, with average error reductions of 46.4% and 81.2% observed relative to LFRB 2019 and MART, respectively. Note that the local spike in error toward the focal plane is commonly observed in plenoptic imaging and is characterized by Deem et al (2016). However, Fig. 6 shows that this phenomenon is somewhat less pronounced when applying LFRB 2023 compared to LFRB 2019. All three methods gradually lose accuracy as particles move toward the edges of the volume (to be expected given the limited depth of field), although LFRB 2023 maintains errors mostly below half a microlens while LFRB 2019 and MART errors reach as far as one and three microlenses, respectively. LFRB 2023's improved performance in single particle position estimation can be attributed to advancements in subpixel peak estimation as well as the iterative filtering procedure employed during triangulation. Figure 7 presents a comparison of errors in the lateral (in-plane) and depth directions. As is typical of plenoptic imaging, the depth error across all three methods is an order of magnitude higher than the lateral error; however, LFRB 2023 exhibits errors notably more concentrated toward the origin compared to the other techniques. When normalized by the diameter of the microlens projected into object space, MART and LFRB 2019 display average absolute depth errors of 1.26 and 0.44 microlenses, respectively, while the LFRB 2023 error corresponds to 0.23 microlenses, demonstrating that the bulk of the improvements observed in the latest algorithm occur in the depth domain.

4.1.2 Triangulation performance versus particle number density

Figure 8 plots the average errors found by applying MART and LFRB 2023 to VPD while Fig. 9 shows the percentage of particles identified with respect to the true total. Note that the horizontal axis in both figures is logarithmically scaled. Iterations of LFRB are distinguished by color, where the markers denote average errors for iterations 1, 2, 5, and 10 in Fig. 8 and the cumulative particle yield through the same selection of iterations in Fig. 9. Particles in MART reconstructions were identified by searching for a local maximum intensity within a 5\(\times\)5\(\times\)19 window around the known ground truth positions prior to center estimation through weighted centroiding. The results suggest that LFRB 2023 provides consistent improvements in particle position accuracy up to a concentration of 0.1 ppμ, with the gap between MART and LFRB gradually narrowing as imaging density increases. As particle overlap becomes increasingly prominent, the accuracy with which particle image peaks can be calculated on a subpixel grid drops, manifesting itself in imprecise ray propagation and ultimately leading to compromised bundling and subsequent triangulation. Note that the relationship between accuracy and depth of the VPD dataset mimics that shown in Fig. 6, exhibiting increased uncertainty near the focal plane. The particle yield maintains parity with MART up to a concentration of 0.05 ppμ. The percentage of particles captured can be further increased by reducing the strictness of the bundling thresholds, although likely at the expense of accuracy. At lower concentrations (\(\le\) 0.01 ppμ), LFRB 2023 demonstrates an improvement in accuracy of 76.2% in comparison to MART. At densities typical of current plenoptic-PIV experiments (0.04 ppμ to 0.06 ppμ), LFRB 2023 provides an improvement of 28.5% with just 1.8% less yield. 
The enhanced performance of LFRB as compared to MART in this region can be ascribed to its consideration of the quality of information available across perspective images during bundling of rays as well as its representation of the particle field in a continuous volume rather than in voxel-space where peak locking issues can arise. MART provides 11.9% better accuracy along with 10.5% increase in particle yield between densities of 0.1 ppμ to 0.2 ppμ. A possible explanation for the drastic change in comparative yields could be rooted in flaws within the method used to identify particles in the MART volumes, i.e., searching within the vicinity of known ground truths may lead to bias toward higher percentages of intensities identified as particles. Nevertheless, the authors believe that LFRB’s tunability with respect to its processing constraints gives it a key advantage in allowing the user to achieve a desired balance between quality and yield based on experimental demands.

4.1.3 Plenoptic-PTV uncertainty quantification

The SVR dataset was generated to investigate the efficacy of the plenoptic-PTV algorithm under various \(\zeta\) conditions while comparing the performance of LFRB/plenoptic-PTV to MART/plenoptic-PIV. For a comprehensive evaluation of the 4BE-ETI algorithm’s performance across a wide range of \(\zeta\), the reader is referred to Clark et al (2019), although note that \(\zeta\) is defined inversely to that presented in this work. The average track length and percentage of particles assigned to tracks after post-processing are presented in Table 2. Both metrics remain relatively constant between \(\zeta = 10.5\) and \(\zeta = 3.5\) before sharply dropping with further decreases in \(\zeta\), suggesting the plenoptic-PTV algorithm struggles to track particles when \(\zeta\) is less than 3.5. In essence, as the average particle displacement relative to spacing increases, ambiguities rise in predicting a particle’s true motion, leading to non-physical tracks that prematurely exit the measurement volume or the tracking scheme searching for particles in empty regions.
Table 2
Average track length and percentage of particles assigned to tracks after post-processing for vortex ring reconstructed with plenoptic-PTV at varying spacing to displacement ratios

\(\zeta\) | Average track length | % assigned particles
10.5 | 11.0 | 84.5
5.3 | 11.4 | 85.2
3.5 | 11.1 | 83.8
2.6 | 9.9 | 76.2
2.1 | 9.1 | 68.7
To allow for a direct comparison between plenoptic-PIV and PTV, the PTV-generated tracks were interpolated onto the same Eulerian grid as the PIV data. Figure 10 shows iso-surfaces of Q-criterion calculated using the resulting vector fields in addition to the ground truth. Note that a positive Q-criterion indicates relative dominance of vorticity over strain rate and thus has extensively been used for vortex visualization. The ground truth vector field was generated by evaluating the analytical vortex ring equation at the Eulerian grid points of the PIV and interpolated PTV grids. For \(\zeta\) ranging from 3.5 to 10.5, PTV produces a more symmetric and well-defined vortex ring structure with visibly fewer erroneous measurements in the vector field as compared to PIV. The superior performance of plenoptic-PTV becomes particularly apparent when comparing velocities in the depth direction (z) where particle elongation has been known to compromise cross-correlation. At \(\zeta = 2.6\), however, the vortex ring significantly loses its coherence, and at \(\zeta = 2.1\), it is virtually indistinguishable. Analysis of the vector fields showed tracking fails closest to the center of the vortex ring, where velocities become too high for the algorithm to correctly follow particles between consecutive frames. In all cases processed by PTV, the velocity error away from the immediate vicinity of the vortex ring is greatly reduced compared to that of PIV. The enhanced performance of PTV here can be attributed to lower-than-average particle displacements augmenting the local \(\zeta\).
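The Q-criterion plotted in Fig. 10 can be computed pointwise from the velocity-gradient tensor as sketched below; the finite-difference evaluation of the gradients on the Eulerian grid is omitted:

```python
def q_criterion(grad):
    """Q-criterion from a 3x3 velocity-gradient tensor grad[i][j] = du_i/dx_j:
    Q = 0.5 * (||Omega||^2 - ||S||^2), where S and Omega are the symmetric
    (strain-rate) and antisymmetric (rotation) parts of the gradient.
    Q > 0 marks regions where vorticity dominates strain."""
    q = 0.0
    for i in range(3):
        for j in range(3):
            s = 0.5 * (grad[i][j] + grad[j][i])   # strain-rate component
            w = 0.5 * (grad[i][j] - grad[j][i])   # rotation component
            q += 0.5 * (w * w - s * s)
    return q
```

Solid-body rotation gives Q > 0, while pure shear gives Q = 0, which is why positive iso-surfaces isolate the vortex ring from the surrounding strained fluid.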
The root-mean-square error (RMSe) and mean bias error (MBe) as a function of \(\zeta\) are displayed in Table 3. RMSe (Eq. 6) represents the standard deviation of errors with respect to the true vector field while MBe (Eq. 7) highlights systematic measurement errors. At a given vector point, these equations can be represented by:
$$\begin{aligned} RMSe = \sqrt{\frac{\sum _{t=1}^{N} (U_t - \hat{U})^2}{N}} \end{aligned}$$
(6)
$$\begin{aligned} MBe = \frac{\sum _{t=1}^{N} (U_t - \hat{U})}{N} \end{aligned}$$
(7)
where \(U_t\) denotes the measured velocity at time point t, N denotes the total number of time instances, and \(\hat{U}\) is the true velocity. At \(\zeta = 10.5\), the average reduction in RMSe provided by plenoptic-PTV compared to plenoptic-PIV is 65%, 67%, and 89% for the u, v, and w components, respectively. With decreasing \(\zeta\), the RMSe of plenoptic-PTV increases as conditions for tracking become increasingly unfavorable. MBe also worsens, gradually becoming significant with respect to the mean flow velocity at \(\zeta = 2.6\). The dominant velocity component, v (in the y-direction), is underestimated by an average of 14% of the mean v-velocity for both \(\zeta = 2.1\) and \(\zeta = 2.6\), a considerable jump from \(\zeta = 3.5\), which showed 2.1% underestimation. Compared to plenoptic-PIV, the lateral velocity errors of plenoptic-PTV are lower at \(\zeta = 5.3\) and \(\zeta = 10.5\), similar at the same \(\zeta\) (3.5), and higher as \(\zeta\) is further reduced; the velocity in the depth direction, a point of concern for most plenoptic experiments, nevertheless exhibits significant improvements in accuracy even at low \(\zeta\). The large reduction in error observed at higher \(\zeta\) suggests that while MART surpasses LFRB in performance toward higher particle densities, a relatively low displacement (if achievable) could allow the combination of LFRB/plenoptic-PTV to extend its working range in terms of seeding density as compared to MART/plenoptic-PIV. To further investigate the relationship between particle displacement and accuracy of the resulting velocity field, the absolute in-plane (u and v) and out-of-plane (w) errors are plotted against velocity in Figs. 11 and 12, respectively. In-plane errors for the PTV-generated vortex ring show a linear relationship with displacement, while the PIV case displays mostly constant errors across the range of displacements, with the measurement error mostly contained within 3 voxels.
Evidently, w velocities tend to be underestimated by both plenoptic-PIV and plenoptic-PTV. Although this is an expected outcome of applying the former technique, the lack of consideration for out-of-plane velocity in the tracking scheme makes systematic bias a curious result in the case of plenoptic-PTV. A probable explanation is that the Lagrangian to Eulerian velocity interpolation scheme produces a low-pass filtering effect similar to that seen in cross-correlation. In addition, the tracking scheme is more likely to identify slower moving particles, meaning the relative density of high-displacement particle tracks could drop as \(\zeta\) decreases. This issue could propagate into the neighborhood-based track filtering, causing large velocities to be misinterpreted as outliers. Further investigation is required to clarify the contribution of each of these sources of error.
While the vortex ring for PTV at \(\zeta = 3.5\) appears qualitatively better reconstructed and the average in-plane RMSe values are similar to those of PIV, Fig. 11 shows that higher in-plane displacements are captured more faithfully by PIV. A challenge that arises when applying particle tracking to these data pertains to the compounding effect of higher displacements, which are generally more difficult to track, occurring near the focal plane where local uncertainty is magnified. However, the w error reveals improved accuracy over plenoptic-PIV. At \(\zeta = 5.3\), PTV provides better performance across the complete range of displacements in both lateral and depth directions. Further improvements are seen at \(\zeta = 10.5\), where the majority of displacement error is within a single voxel for both in-plane and out-of-plane velocities. Results of the vortex ring analysis suggest that, to maintain lateral errors comparable to PIV while significantly improving accuracy in the depth component, experiments intending to employ the current implementation of plenoptic-PTV should be carried out under conditions of \(\zeta \ge 5.3\). The success of other authors in extending the performance of 3D PTV to handle lower spacing to displacement ratios, for example Fu et al (2016), who achieved performance similar to PIV at \(\zeta \ge 2.5\), suggests that future development of this plenoptic-PTV algorithm has the potential to provide appreciable improvements in vector field quality. Further work will also aim to decouple velocity from depth in order to determine the effectiveness of the algorithm's track filtering and smoothing procedures in mitigating high positional uncertainties at the focal plane.
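Equations 6 and 7 can be evaluated at a single vector point as follows:

```python
import math

def rmse_mbe(measured, truth):
    """RMSe (Eq. 6) and MBe (Eq. 7) at one vector point: `measured` is the
    list of velocities U_t over N time instances, `truth` the true velocity
    U_hat at that point."""
    n = len(measured)
    diffs = [u - truth for u in measured]
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    mbe = sum(diffs) / n
    return rmse, mbe
```

RMSe captures the scatter about the true value while MBe isolates systematic offsets; symmetric errors of equal magnitude thus yield a nonzero RMSe but a zero MBe.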
Table 3  RMSe and mean bias error for the vortex ring vector field generated by plenoptic-PTV at varying spacing to displacement ratios and by plenoptic-PIV

\(\zeta\)   | RMSe u (vox) | RMSe v (vox) | RMSe w (vox) | MBe u (vox) | MBe v (vox) | MBe w (vox)
10.5        | 0.049        | 0.051        | 0.106        | 8.85e-4     | -4.32e-4    | 5.25e-3
5.3         | 0.085        | 0.109        | 0.152        | 7.08e-4     | -2.31e-3    | 3.85e-3
3.5         | 0.134        | 0.184        | 0.208        | -6.63e-3    | 2.16e-2     | 2.71e-3
2.6         | 0.219        | 0.385        | 0.319        | -6.95e-4    | 1.92e-1     | 2.29e-2
2.1         | 0.373        | 0.617        | 0.460        | -1.18e-2    | 2.38e-1     | -2.26e-2
PIV         | 0.139        | 0.156        | 0.977        | -8.95e-3    | 1.00e-2     | 5.22e-2
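For reference, the two error metrics tabulated above can be computed per velocity component as follows. This is a minimal sketch with illustrative variable names, evaluated over all Eulerian grid nodes.

```python
import numpy as np

def rmse_and_mbe(estimated, truth):
    """Root-mean-square error and mean bias error of one velocity
    component (e.g. u, v, or w), in voxel units."""
    err = np.asarray(estimated, float) - np.asarray(truth, float)
    return float(np.sqrt(np.mean(err ** 2))), float(np.mean(err))
```

The MBe isolates systematic under- or over-estimation of a component, which the RMSe alone conflates with random scatter.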

4.2 Experimental validation

Flow over a rotating wing at an angle of attack of \(45^{\circ }\) was captured in an R3DV setting, and the images were processed using both plenoptic-PIV and plenoptic-PTV. Note that the wing is not clearly perceptible in the plenoptic images due to the use of a longpass filter; the figures in this section therefore depict only estimates of the wing's position. Figure 13 shows plots of the Q-criterion after the wing has rotated through an azimuth angle of \(8^{\circ }\) from its initial resting position. Both plots strongly indicate the presence of an LEV over the rotor. Noise appears more prominent in the PTV field, albeit predominantly around the edges of the volume, where data are scarce and the Lagrangian-to-Eulerian vector binning trends toward extrapolation rather than interpolation. Efforts are currently underway to improve data quality by imposing physical constraints on the velocity gridding scheme, thereby restricting the solution space to physically reasonable fields.
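The Q-criterion used for the vortex visualization in Fig. 13 is half the difference between the squared norms of the rotation-rate and strain-rate tensors. A minimal sketch on a uniform grid (the grid, spacings, and test field are illustrative assumptions, not the paper's data):

```python
import numpy as np

def q_criterion(u, v, w, dx, dy, dz):
    """Q = 0.5*(||Omega||^2 - ||S||^2) from a 3D/3C velocity field on a
    uniform grid; Q > 0 marks rotation-dominated regions such as an LEV
    core."""
    J = np.array([np.gradient(c, dx, dy, dz) for c in (u, v, w)])
    S = 0.5 * (J + J.transpose(1, 0, 2, 3, 4))   # strain-rate tensor
    O = 0.5 * (J - J.transpose(1, 0, 2, 3, 4))   # rotation-rate tensor
    return 0.5 * (np.sum(O ** 2, axis=(0, 1)) - np.sum(S ** 2, axis=(0, 1)))

# Sanity check on solid-body rotation u = -y, v = x, w = 0 (pure rotation,
# zero strain), for which Q = 1 at every node:
x = np.linspace(-1.0, 1.0, 9)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
d = x[1] - x[0]
Q = q_criterion(-Y, X, np.zeros_like(X), d, d, d)
```

Isosurfaces of Q at a small positive threshold then render the vortical structures shown in the figure.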
Figure 14 shows the surface contour plots of w at \(\psi \approx 8^\circ\) and \(\psi \approx 12^\circ\). All following comparisons between the plenoptic and validation datasets are made along spanwise depths corresponding approximately to the radius of gyration. A high degree of similarity is observed between the in-plane motion captured by plenoptic-PIV and plenoptic-PTV, with both capturing an LEV at \(\psi \approx 8^\circ\) that grows in size while advecting downstream as the wing progresses to \(\psi \approx 12^\circ\). Figure 14b, e shows that PTV revealed radial velocity between the LEV core and the wing surface during the evolution of the LEV. In contrast, plenoptic-PIV failed to yield any discernible component of spanwise velocity. Figure 14c, f shows 2D/3C vector fields obtained using stereo-PIV. It should be noted that there is an order of magnitude difference between the resolution of the stereo-PIV and plenoptic-based measurements, as the latter is limited by the dimensions of the microlens array (471\(\times\)362), resulting in a lower dynamic spatial range. In addition, the stereo-PIV experiments used a calibration procedure separate from that of plenoptic-PIV/PTV; the measurement plane was therefore manually shifted, using the wing's leading edge as a point of reference, to match the plenoptic data's coordinate system. While the locations of the radial velocity peaks with respect to the LEV center differ slightly (below the LEV in the stereo-PIV vector fields as opposed to aft of the LEV center in plenoptic-PTV), plenoptic-PTV w-velocities display much better agreement with the validation data than those of plenoptic-PIV. Note that the spanwise velocity is expected to be magnified below the vortex core due to the contribution of Coriolis effects (Jardin 2017). Comparison of the freestream velocity between stereo-PIV and plenoptic-PTV again suggests that the latter suffers from moderate levels of noise.
Figure 15 displays radial velocity profiles extracted along a line drawn roughly perpendicular to the chord at the approximate radius of gyration and intersecting the LEV. Stereo-PIV shows a sharp rise interrupted by a kink at both angles, whereby a brief plateau is observed directly above the wing surface, in the region of secondary vorticity that develops in the separated region, before rising to a global peak near the LEV. Plenoptic-PIV failed to recover a coherent radial jet at \(\psi \approx 8^\circ\) and shows only a weak parabolic profile at \(\psi \approx 12^\circ\), with w an order of magnitude smaller than the wingtip velocity, suggesting that experimental conditions magnify the underestimation issues already observed in synthetic plenoptic-PIV data. In contrast, plenoptic-PTV revealed a parabolic profile just below the vortex core, with a maximum velocity of 7.9 mm/s at \(\psi \approx 8^\circ\) and 14.8 mm/s at \(\psi \approx 12^\circ\), corresponding, respectively, to 33.3% and 62.5% of the wingtip velocity. These values show reasonable consistency with the peaks obtained from stereo-PIV (12.4 mm/s at \(\psi \approx 8^\circ\) and 16.5 mm/s at \(\psi \approx 12^\circ\)). In addition, referencing the estimated \(\zeta\) of 5.8 against the SVR analysis suggests that approximately 84–89% better accuracy can be expected compared to plenoptic-PIV if, albeit crudely, differences in magnification, noise, and experimental inaccuracies are neglected. However, likely due to differences in achievable vector resolution, PTV-obtained velocity profiles appear to attenuate some of the finer details close to the wing when compared to their stereo-PIV counterparts. Again, a non-window-based vector interpolation technique could resolve this discrepancy by providing higher fidelity results.
However, previous studies have reported that the magnitudes of the radial velocity around the LEV are on the order of the wingtip velocity (Birch and Dickinson 2001; Harbig et al 2013; Medina and Jones 2016; Wojcik and Buchholz 2014). Thus, plenoptic-PTV measurements are significantly closer to expectations than those obtained from plenoptic-PIV and it is evident that the former better mimics general trends observed in out-of-plane measurements from the validation data.
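The profile extraction described above amounts to sampling the gridded w field along an arbitrary line. A minimal bilinear version is sketched below; the field, endpoints, and sample count are illustrative assumptions rather than the experimental setup.

```python
import numpy as np

def line_profile(field, p0, p1, n=50):
    """Bilinearly sample a 2D field along the segment p0 -> p1 given in
    fractional (row, col) index coordinates, e.g. to extract a
    w-velocity profile along a line perpendicular to the chord."""
    t = np.linspace(0.0, 1.0, n)
    rows = (1 - t) * p0[0] + t * p1[0]
    cols = (1 - t) * p0[1] + t * p1[1]
    r0 = np.clip(np.floor(rows).astype(int), 0, field.shape[0] - 2)
    c0 = np.clip(np.floor(cols).astype(int), 0, field.shape[1] - 2)
    fr, fc = rows - r0, cols - c0
    return ((1 - fr) * (1 - fc) * field[r0, c0]
            + (1 - fr) * fc * field[r0, c0 + 1]
            + fr * (1 - fc) * field[r0 + 1, c0]
            + fr * fc * field[r0 + 1, c0 + 1])
```

Bilinear interpolation is exact for fields that vary linearly between nodes, which makes it a convenient baseline for profile extraction on coarse plenoptic grids.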
As a preliminary exploration of the effects of phase-averaging, both the stereo and plenoptic experiments were repeated ten times. The resulting PIV vector fields, shown in Fig. 16 with corresponding radial profiles in Fig. 17, were averaged across all instances, while PTV tracks were accumulated to increase track density prior to vector gridding. Plenoptic-PIV continues to exhibit heavily attenuated radial velocities, suggesting that the technique's acquisition of out-of-plane motion does not benefit from phase-averaging. A notable difference between the plenoptic-PTV and stereo-PIV data of Figs. 14 and 16 is the reduced 'patchiness' of the out-of-plane velocity contours in the latter figure, particularly toward the freestream. While this may indicate noise attenuation, the radial velocity profiles obtained from stereo-PIV in Fig. 17 reveal the loss of the spanwise velocity plateau close to the wing surface observed in the instantaneous data (Fig. 15). Examination of various repetitions of the experiment suggested that this flow feature exists across several runs, but its peak magnitude changes location along the chord. Thus, the number of repetitions used for phase-averaging here is likely insufficient to achieve statistical convergence. Indeed, the difficulty of obtaining a converged mean flow field is a primary motivation behind developing a technique capable of accurately providing time-resolved 3D/3C measurements within the rotating frame of reference. Nevertheless, the presence of radial velocity around the LEV is the primary persisting feature between repetitions of the PTV data, resulting in a distinct radial jet in the phase-averaged vector fields and confirming that the spanwise flow observed in the instantaneous measurements was faithful to the physical flow phenomena surrounding an LEV. In addition, Fig. 17 shows a closer match between the results of plenoptic-PTV and stereo-PIV, although, likely due to flow pattern differences between instances of capture, the peak radial velocity is slightly lower than in Fig. 15, with corresponding values of 7.1 mm/s and 11.5 mm/s at \(\psi \approx 8^\circ\) and \(\psi \approx 12^\circ\), respectively, for plenoptic-PTV, and 9.2 mm/s and 14.9 mm/s for stereo-PIV.
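At the vector field level, the phase-averaging used above can be sketched as a per-node mean over repeated runs, with NaNs marking nodes that lack a valid vector in a given run. This is a sketch only; the accumulation of PTV tracks prior to gridding is not shown.

```python
import numpy as np

def phase_average(runs):
    """Average instantaneous vector fields over repeated runs at the
    same phase; nodes missing a vector in some run (NaN) are averaged
    over the remaining runs only."""
    return np.nanmean(np.stack(runs), axis=0)
```

Averaging only over valid runs at each node is what reduces the 'patchiness' near the volume edges, at the cost of non-uniform effective sample counts.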
The experimental results discussed here substantiate the inferences made from the synthetic performance evaluation, particularly as it pertains to resolving depth velocities more accurately, thus increasing confidence in the applicability of plenoptic-PTV to R3DV as well as other low-magnification plenoptic flow diagnostic experiments with pertinent out-of-plane velocities.

5 Conclusions

A perspective-based particle tracking algorithm was developed to overcome the limitations associated with relatively large out-of-plane uncertainties inherent to plenoptic imaging. The method exploits the light field imaging principle of perspective view generation by solving particle correspondences through time using their 2D projections across multiple viewpoints. The 4BE-ETI method is employed for track initialization while Kalman filters are adopted to guide ensuing track progression. Uncertainties in particle positioning are subsequently filtered via fitting of tracks to B-splines. Further post-processing involves removal of spurious measurements, utilizing a neighborhood-based outlier identification scheme, and application of AGW for Gaussian-weighted interpolation of scattered velocities onto an Eulerian grid. Additionally, multiple enhancements were made to the LFRB technique first presented by Clifford et al (2019), including implementation of a peak evaluation scheme that adapts to the local seeding density, projection of rays into object space to account for distortions prior to bundling, limiting the search region for corresponding rays to reduce computational expense, iterative filtering of rays based on reprojection error to improve particle position estimation, and addition of a Gaussian-based particle removal technique.
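The B-spline track filtering step can be sketched with SciPy's smoothing splines, fitting each coordinate of a track against time. This is a stand-in sketch, not the paper's implementation; the smoothing factor s is an assumed free parameter, with s = 0 reproducing the input positions exactly and larger s attenuating depth-wise positional noise.

```python
import numpy as np
from scipy.interpolate import splev, splrep

def smooth_track(times, positions, s=0.0):
    """Fit each coordinate (x, y, z) of a particle track with a
    smoothing B-spline over time; larger s trades fidelity for
    smoothness."""
    positions = np.asarray(positions, float)
    return np.column_stack(
        [splev(times, splrep(times, positions[:, k], s=s))
         for k in range(positions.shape[1])])
```

In practice a larger s would be applied to the depth coordinate than to the lateral ones, reflecting the anisotropic uncertainty of plenoptic triangulation.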
The aforementioned developments to LFRB enhanced the limits of the plenoptic camera's 3D particle triangulation capabilities, yielding improvements in accuracy of 81.2% and 46.4% with respect to MART and an earlier implementation of LFRB, respectively. At modest seeding densities (\(\approx\) 0.05 \(pp\mu\)), particle field reconstructions generated using LFRB 2023 exhibit 28.5% higher accuracy than MART. Synthetic vortex ring experiments demonstrated the potential of plenoptic-PTV to improve vector field quality relative to plenoptic-PIV. With an average spacing to displacement ratio of 5.3, both in-plane and out-of-plane displacement errors were reduced throughout the range of velocities: the in-plane directions showed moderately lower average RMSe values, by 38.8% and 30.1%, while the out-of-plane RMSe dropped by 84.4% compared to plenoptic-PIV. The tracking scheme contributes to this error reduction through its ability to acquire a particle's temporal history in 3D space without allowing out-of-plane uncertainties to compromise temporal correspondence, paving the way for aggressive filtering that attenuates depth-wise noise while biasing tracks toward more physically reasonable motion. The authors presume that PIV techniques incorporating temporal information would remain inferior to tracking individual particles, as correlating motion through the temporal domain would, like dual-frame cross-correlation, continue to suffer from the averaging of random uncertainties and the consequent dampening of the w-velocity. Applying plenoptic-PTV to R3DV measurements of a rotating wing revealed a systematic root-to-tip velocity in agreement with observations made in prior literature. The velocity profile of this radial flow, captured at two different azimuth angles, exhibited much closer behavior and peak magnitude to profiles acquired from the stereo-PIV validation experiment than those obtained via plenoptic-PIV.
While previous work had reported plenoptic-PIV to suffer from 5\(\times\) worse accuracy in the out-of-plane direction as compared to the in-plane (Fahringer et al 2015), both synthetic and experimental results demonstrate plenoptic-PTV’s effectiveness in reducing depth uncertainty to the same order as the lateral uncertainty.
Further work would benefit from exploring the effects of magnification on the plenoptic-PIV/PTV performance comparison. Higher magnification experiments are expected to benefit less from transitioning to particle tracking: as uncertainty is reduced, noisy tracks trend toward smooth profiles while cross-correlation becomes better able to capture out-of-plane motion. Nevertheless, an in-depth analysis would provide users with further guidelines on the algorithm best suited to their experimental needs. Several aspects of the algorithm remain under development with the goal of extending the scheme's capabilities to lower spacing to displacement ratios, focusing on improvements to particle motion prediction, optimization of the tracking cost function, and post-processing of tracks. In addition, efforts are underway to incorporate physics-based constraints, such as incompressibility, to regularize the Lagrangian-to-Eulerian interpolation of the vector field while accounting for the plenoptic camera's tendency toward heavily anisotropic noise. In particular, physics-informed neural networks (PINNs) have proven effective with sparse input data (Clark Di Leoni et al 2023), potentially allowing plenoptic-PTV experiments to operate in the range of particle densities in which LFRB yields reasonable errors without compromising data resolution.

Acknowledgements

This research was sponsored by the Army Research Office and was accomplished under Grant Number W911NF-19-1-0052 and W911NF-19-1-0124 (DURIP) monitored by Dr. Jack Edwards. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The US Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.

Declarations

Conflict of interest

The authors report no competing interests or conflicts of interest relevant to the content of this article.

Ethics approval

Not applicable
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Literature
Bajpayee A, Techet A (2013) 3D particle tracking velocimetry (PTV) using high speed light field imaging. In: PIV13; 10th international symposium on particle image velocimetry, Delft, The Netherlands, July 1–3, 2013
Birch JM, Dickinson MH (2001) Spanwise flow and the attachment of the leading-edge vortex on insect wings. Nature 412(6848):729–733
Chen H, Sick V (2016) Volume-resolved gas velocity and spray measurements in engine applications. In: 12th international symposium on combustion diagnostics, Baden-Baden, Germany, May 11–12, pp 80–89
Clark Di Leoni P, Agarwal K, Zaki TA et al (2023) Reconstructing turbulent velocity and pressure fields from under-resolved noisy particle tracks using physics-informed neural networks. Exp Fluids 64(5):95
Clifford C, Tan Z, Hall E et al (2019) Particle matching and triangulation using light-field ray bundling. In: 13th international symposium on particle image velocimetry, Munich, Germany, 22–24 July 2019
Doh DH, Cho GR, Kim YH (2012) Development of a tomographic PTV. J Mech Sci Technol 26:3811–3819
Georgiev T, Intwala C (2006) Light field camera design for integral view photography. Adobe System Inc, Technical report
Liu H, Zhou W, Cai X et al (2019) Experimental research on 3D particle tracking velocimetry based on light field imaging
Malik N, Dracos T, Papantoniou D (1993) Particle tracking velocimetry in three-dimensional flows: Part II: particle tracking. Exp Fluids 15:279–294
Marxen M, Sullivan P, Loewen M et al (2000) Comparison of Gaussian particle center estimators and the achievable measurement density for particle tracking velocimetry. Exp Fluids 29(2):145–153
Nobach H, Honkanen M (2005) Two-dimensional Gaussian regression for sub-pixel displacement estimation in particle image velocimetry or particle position estimation in particle tracking velocimetry. Exp Fluids 38:511–515
Nobes DS, Chatterjee O, Setayeshgar A (2014) Plenoptic imaging for 3D \(\mu\)PTV investigations of micro-scale flows. In: 17th international symposium on applications of laser techniques to fluid mechanics, Lisbon, Portugal, July 7–10
Patel M, Leggett SE, Landauer AK et al (2018) Rapid, topology-based particle tracking for high-resolution measurements of large complex 3D motion fields. Sci Rep 8(1):1–14
Raffel M, Willert CE, Scarano F et al (2018) Particle image velocimetry: a practical guide. Springer, Berlin
Scarano F, Poelma C (2009) Three-dimensional vorticity patterns of cylinder wakes. Exp Fluids 47:69–83
Takehara K, Etoh T (1998) A study on particle identification in PTV particle mask correlation method. J Vis 1(3):313–323
Westerweel J, Scarano F (2005) Universal outlier detection for PIV data. Exp Fluids 39(6):1096–1100
Metadata
Title
Improving depth uncertainty in plenoptic camera-based velocimetry
Authors
Mahyar Moaven
Abbishek Gururaj
Vrishank Raghav
Brian Thurow
Publication date
01-04-2024
Publisher
Springer Berlin Heidelberg
Published in
Experiments in Fluids / Issue 4/2024
Print ISSN: 0723-4864
Electronic ISSN: 1432-1114
DOI
https://doi.org/10.1007/s00348-024-03780-6
