
Open Access 01.01.2020 | Research Article

Calibration of multiple cameras for large-scale experiments using a freely moving calibration target

Authors: K. Muller, C. K. Hemelrijk, J. Westerweel, D. S. W. Tam

Published in: Experiments in Fluids | Issue 1/2020


Abstract

Obtaining accurate experimental data from Lagrangian tracking and tomographic velocimetry requires an accurate camera calibration consistent over multiple views. Established calibration procedures are often challenging to implement when the length scale of the measurement volume exceeds that of a typical laboratory experiment. Here, we combine tools developed in computer vision and non-linear camera mappings used in experimental fluid mechanics, to successfully calibrate a four-camera setup that is imaging inside a large tank of dimensions \(\sim 10 \times 25 \times 6 \; \mathrm {m}^3\). The calibration procedure uses a planar checkerboard that is arbitrarily positioned at unknown locations and orientations. The method can be applied to any number of cameras. The parameters of the calibration yield direct estimates of the positions and orientations of the four cameras as well as the focal lengths of the lenses. These parameters are used to assess the quality of the calibration. The calibration allows us to perform accurate and consistent linear ray-tracing, which we use to triangulate and track fish inside the large tank. An open-source implementation of the calibration in Matlab is available.

Graphic abstract


1 Introduction

New studies in biophysics and fluid mechanics require the quantitative imaging of large-scale field experiments. Such studies include the large-scale Lagrangian tracking of bats and bird flocks (Theriault et al. 2014; Attanasi et al. 2015), super-large-scale particle image velocimetry measurements using natural snowfall (Toloui et al. 2014) and recent advancements in tomographic-PIV (Jux et al. 2018).
Obtaining reliable two- and three-dimensional imaging data in these large-field experiments is challenging and requires a camera calibration that is accurate down to the smallest physical length scale of interest. Non-linear polynomial camera mappings (Soloff et al. 1997; Wieneke 2005, 2008) are often used in laboratory experiments (Kühn et al. 2010), but their application at length scales beyond that of the laboratory is, in practice, limited. First, the size of the calibration target is limited, such that it only covers a small portion of the measurement volume. Second, without the conventional laboratory equipment that provides access to the measurement volume, the target cannot be positioned accurately, as required in conventional calibration procedures.
In the present work, we combine the pinhole camera model (Tsai 1987) with non-linear polynomial camera mappings used in experimental fluid mechanics (Soloff et al. 1997) to perform a multiple camera calibration over a large-scale measurement volume inside the tank of the aquarium located in the Rotterdam zoo. Our method integrates the use of the pinhole camera model with a non-linear camera mapping to correct for optical distortion across refractive interfaces (Belden 2013). Our approach uses the framework of projective geometry in computer vision (Hartley and Zisserman 2004) and applies advanced self-calibration techniques (Svoboda et al. 2005; Shen et al. 2008).
Here we apply the planar checkerboard calibration technique by Zhang (2000); see also Zhang (1998, 1999), Sturm and Maybank (1999), Menudet et al. (2008) and Bouguet (2015). First, this approach eliminates the need to accurately position the calibration target, as required in conventional calibration procedures: instead, the checkerboard calibration target is moved to arbitrary and unknown positions and orientations, here with the help of a team of divers. Second, by sequentially acquiring multiple calibration images while freely moving the calibration target, we achieve a camera calibration that spans length scales much larger than the calibration target itself. This approach yields an accurate calibration over a measurement volume with a characteristic length scale on the order of several tens of meters.
We process the camera calibration in steps (Heikkilä and Silvén 1997). First, we correct for optical distortions (Fryer and Brown 1986) by rectifying the curved lines of the checkerboard images (Prescott and McLean 1997; Devernay and Faugeras 2001). Second, we perform a calibration based on a single view for each camera following Zhang (2000). Finally, we combine the single views, find the positions and orientations of the cameras over multiple views (Geiger et al. 2012), and optimize the calibration for spatial accuracy and consistency between the different views.
The camera calibration yields accurate results. To assess the validity of the camera calibration, we compare the estimated effective focal length obtained from it against the true value of the focal length of the lenses. We quantify the spatial accuracy of the camera calibration by computing the skewness of optical rays associated with the multiple views. Our calibration allows us to use linear ray-tracing (Hedrick 2008) to track and triangulate multiple fish swimming over the entire visual depth of the tank. The method is versatile and can be implemented in field experiments over large length scales and for measurement volumes that are challenging to access experimentally.

2 Camera setup and calibration procedure

We image inside the large tank of the aquarium in the Rotterdam zoo in a measurement volume of dimensions \(\sim 10 \times 25 \times 6 \; \mathrm {m}^3\) (Fig. 1). We use four 5.5 megapixel sCMOS cameras with wide-angle lenses of focal length \(f_{\mathrm {lens}}=24 \; \mathrm {mm}\) (Nikkor AF-\(24 \; \mathrm {mm}\)) that cause significant variations in magnification over the large depth-of-field of \(\mathrm {DOF} \approx 25 \; \mathrm {m}\) (Adrian and Westerweel 2011).
The camera setup is positioned behind an acrylic window of thickness \(\sim 50 \ \mathrm {cm}\). Optical distortions in the image plane are due to refraction at the water (\(n_{\mathrm {water}} = 1.363\))/acrylic (\(n_{\mathrm {acryl}}=1.51\)) and the acrylic/air (\(n_{\mathrm {air}}=1.0003\)) interfaces (Sedlazeck and Koch 2012), where n is the refractive index. The optical access is limited to the acrylic window, which constrains the spacing between the cameras to \(\varDelta H \approx 1 \; \mathrm {m}\) in height and \(\varDelta W \approx 6 \; \mathrm {m}\) in width, and limits the relative angles between the cameras to between \(5{^\circ }\) and \(20{^\circ }\) (Fig. 1).
To calibrate the camera setup, we image a planar checkerboard calibration target of dimensions \(1.5 \times 1.8 \ \mathrm {m}^2\) with \(5 \times 6\) tiles (Geiger et al. 2012), each of area \(A_{\mathrm {tile}} = 30 \times 30 \; \mathrm {cm}^2\). This calibration target is moved within the aquarium by a team of divers, who swim with the checkerboard to arbitrary and unknown positions and orientations throughout the aquarium.

2.1 Image processing

The different images of the calibration target are processed to identify the curves and the nodes corresponding to the gridlines and intersections between the tiles of the checkerboard (Fig. 2). In our application, using a checkerboard calibration target is advantageous over using a pattern of dots. This is because the image gradient obtained from a checkerboard determines grid points more accurately and more robustly over the large depth-of-field of our experiment.
The calibration images are converted to the image gradient using a Savitzky–Golay image differentiation approach (Meer and Weiss 1992) to mark locations on the gridlines between the tiles. For each image, we then fit a set of polynomial curves \(\gamma _i(t)\) to the \(I=9\) gridlines between the tiles of the checkerboards using the local intensity values from the image gradient. These fitting curves are written as \(\gamma _i(t)=\sum _k {\mathbf {a}}^k_i t^{k-1}\), where \({\mathbf {a}}^k_i\) are two-dimensional vectors. Here the parameter t varies within the interval \(t \in [0 , 1]\) over the checkerboard image, such that \(\gamma _i(t=0)\) and \(\gamma _i(t=1)\) correspond to the beginning and end-points of the gridline of the checkerboard image, see the lines on Fig. 2. Finally, we find the \(J=20\) intersections between all the gridlines as a set of nodes \({\mathbf {x}}_{j}\) in the image plane of each camera. Here \({\mathbf {x}}_{j}=[x_j \ y_j ]^T\) with \((\bullet )^T\) the vector transpose, where the numbering \(j=1 \cdots J\) is consistent between the different camera views (Geiger et al. 2012) (Fig. 2).
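To illustrate this step, the following minimal Python sketch fits a parametric polynomial curve \(\gamma(t)\) through ordered points marked along one gridline and locates a node as the intersection of two fitted curves. This is an independent illustration with synthetic stand-in data, not the authors' Matlab implementation (available through Muller et al. 2019); the point arrays and polynomial order are assumptions for the example.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_gridline(pts, order=3):
    """Fit gamma(t) = sum_k a_k t^k through ordered points pts of shape (M, 2)."""
    t = np.linspace(0.0, 1.0, len(pts))           # simple chordwise parameter
    V = np.vander(t, order + 1, increasing=True)  # basis 1, t, t^2, ...
    a, *_ = np.linalg.lstsq(V, pts, rcond=None)   # (order + 1, 2) coefficients
    return a

def gamma(a, t):
    """Evaluate the fitted curve at parameter value(s) t."""
    V = np.vander(np.atleast_1d(t), len(a), increasing=True)
    return V @ a

def intersect(a_i, a_j):
    """Node x_j: parameters (t, s) minimizing |gamma_i(t) - gamma_j(s)|."""
    fun = lambda p: (gamma(a_i, p[0]) - gamma(a_j, p[1])).ravel()
    p = least_squares(fun, x0=[0.5, 0.5]).x
    return gamma(a_i, p[0]).ravel()

# Synthetic check: two slightly curved gridlines crossing near (0.5, 0.5).
t = np.linspace(0.0, 1.0, 50)
line_h = np.c_[t, 0.5 + 0.05 * (t - 0.5)**2]      # nearly horizontal gridline
line_v = np.c_[0.5 + 0.05 * (t - 0.5)**2, t]      # nearly vertical gridline
print(intersect(fit_gridline(line_h), fit_gridline(line_v)))   # ~ [0.5, 0.5]
```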

2.2 Distortion correction

Following the path of an optical ray for a single camera in Fig. 2, the linear path is refracted at the air/water interface (Belden 2013). This causes optical distortions in the image plane of each camera; we therefore rectify the image plane by dewarping the optical distortion. The coordinates \({\mathbf {x}}=[x \; y]^T\) in the image plane are mapped to distortion-corrected image coordinates \(\hat{{\mathbf {x}}}=[{\hat{x}} \; {\hat{y}}]^T\) by determining a distortion mapping \(\hat{{\mathbf {x}}}=m({\mathbf {x}})\). This mapping ensures that collinear points in the object domain are projected as collinear points in the dewarped image plane and, therefore, supports linear ray tracing. For moderate optical distortions, a polynomial distortion map (Soloff et al. 1997) is sufficient. In this study, we write the distortion map as
$$\begin{aligned} \hat{{\mathbf {x}}} = {\mathbf {x}} + \sum _k {\mathbf {c}}_k \phi ^k( {\mathbf {x}} ) = {\mathbf {x}} + {\mathbf {c}}_1 x^2 + {\mathbf {c}}_2 xy + {\mathbf {c}}_3 y^2 + {\mathbf {c}}_4 x^3 + {\mathbf {c}}_5 x^2y + {\mathbf {c}}_6 xy^2 + {\mathbf {c}}_7 y^3. \end{aligned}$$
(1)
The distortion map is determined by rectifying the curved gridlines in the N original calibration images. We follow Devernay and Faugeras (2001) and minimize the percentage of deflection along the gridlines. We consider the nodes \({\mathbf {x}}_{j}^n\) along a particular gridline \(\gamma _i^n(t)\) and their images \(\hat{{\mathbf {x}}}_{j}^n\) and \({\hat{\gamma }}_i^n(t)\) in the dewarped image plane through the map \(m({\mathbf {x}})\). We fit a straight line \({\hat{\ell }}_i^n\) through the nodes \(\hat{{\mathbf {x}}}_{j}^n\) and compute the point-line distance \(d(\hat{{\mathbf {x}}}_{j}^n,{\hat{\ell }}_{i}^n)\). The parameters \({\mathbf {c}}_k\) defining the distortion map are then determined by solving a minimization problem over all gridlines in all calibration images:
$$\begin{aligned} \min _{{\mathbf {c}}_k} \sum _{i,n} \sum _{\begin{array}{c} \text {nodes } j \\ \text {along } {\hat{\gamma }}_i^n \end{array}} \frac{d(\hat{{\mathbf {x}}}_{j}^n,{\hat{\ell }}_{i}^n)^2}{\left\| {\hat{\gamma }}_i^n(1)-{\hat{\gamma }}_i^n(0) \right\| ^2}. \end{aligned}$$
(2)
This minimization problem can be solved efficiently using a steepest descent algorithm and numeric integration techniques described in Boyd and Vandenberghe (2004). This approach directly extends to larger optical distortions requiring more elaborate distortion models (see Appendix A for the air/water interface and Supplementary Fig. 8). An example of a dewarped image can be found in Fig. 2.
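The sketch below illustrates, under simplifying assumptions, how the minimization of Eq. (2) can be set up with a generic non-linear least-squares solver instead of the steepest descent scheme referenced above. The `gridlines` input (lists of node coordinates per gridline, collected over all calibration images) is a hypothetical placeholder, and the straight line through the dewarped nodes is fitted by a singular value decomposition.

```python
import numpy as np
from scipy.optimize import least_squares

def dewarp(c, x):
    """Polynomial distortion map of Eq. (1); x is (M, 2), c holds 7 2-D vectors."""
    X, Y = x[:, 0], x[:, 1]
    basis = np.stack([X**2, X*Y, Y**2, X**3, X**2*Y, X*Y**2, Y**3], axis=1)
    return x + basis @ c.reshape(7, 2)

def residuals(c, gridlines):
    """Relative point-line deflections over all gridlines, cf. Eq. (2)."""
    out = []
    for nodes in gridlines:                      # nodes: (M, 2) on one gridline
        xh = dewarp(c, nodes)
        xc = xh - xh.mean(axis=0)
        _, _, Vt = np.linalg.svd(xc, full_matrices=False)
        n = Vt[-1]                               # normal of the best-fit line
        dist = xc @ n                            # signed point-line distances
        length = np.linalg.norm(xh[-1] - xh[0])  # dewarped gridline extent
        out.append(dist / length)
    return np.concatenate(out)

# gridlines = [...]  # per gridline and per image: the node coordinates x_j^n
# c = least_squares(residuals, np.zeros(14), args=(gridlines,)).x
```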

2.3 Camera calibration and projective geometry

We consider a physical point in the object domain \({\mathbf {X}}\) of coordinates \({\mathbf {X}}=[X \; Y \; Z]^T\) in a world coordinate system and its projected image \(\hat{{\mathbf {x}}} = [{\hat{x}} \; {\hat{y}}]^T\) in the dewarped image plane of a single camera. The calibration is defined by the mapping function F, such that \(\hat{{\mathbf {x}}} = F({\mathbf {X}})\). Our method uses the framework of projective geometry to express the mapping function F and implicitly assumes a pinhole camera model. In the following, we outline the main notations used in projective geometry; a more complete introduction can be found in Hartley and Zisserman (2004).
We make use of augmented vectors to represent points in both the image plane and in the object domain. The coordinates in the dewarped image plane \(\hat{{\mathbf {x}}}\) are augmented to the ray-tracing vector \(\tilde{{\mathbf {x}}}\) such that \(\tilde{{\mathbf {x}}} = [k{\hat{x}} \; k{\hat{y}} \; k]^T\), where k is a scaling parameter in the direction of the principal optical axis. The associated inverse function that projects \(\tilde{{\mathbf {x}}}\) back to \(\hat{{\mathbf {x}}}\) is defined as the projection \(p(\tilde{{\mathbf {x}}}) =[{\tilde{x}} \; {\tilde{y}}]^T/k=\hat{{\mathbf {x}}}\). Similarly, the world coordinates \({\mathbf {X}}\) are augmented to a homogeneous vector as \(\tilde{{\mathbf {X}}}=[X \; Y \; Z \; 1]^T\) (Hartley and Zisserman 2004). Using augmented vectors a geometric transformation, consisting of a rotation and a translation, is simply written as a matrix multiplication \([R \ {\mathbf {t}}] \tilde{{\mathbf {X}}}\), where R is a rotation matrix and \({\mathbf {t}}\) is a translation vector.
With these notations, the mapping function can be written in the following form that is widely used in projective geometry (Hartley and Zisserman 2004):
$$\begin{aligned} \hat{{\mathbf {x}}} = F({\mathbf {X}}) = p( K [R \ {\mathbf {t}}] \tilde{{\mathbf {X}}}). \end{aligned}$$
(3)
Here K is the \(3 \times 3\) camera calibration matrix and \([R \ {\mathbf {t}}]\) is a \(3 \times 4\) matrix, with R the \(3 \times 3\) rotation matrix, and \({\mathbf {t}}\) the \(3 \times 1\) translation vector. The matrix K in Eq. 3 has the following form:
$$\begin{aligned} K=\begin{bmatrix} \alpha _x & \quad s& \quad p_x \\ 0& \quad \alpha _y& \quad p_y \\ 0& \quad 0& \quad 1 \end{bmatrix}, \end{aligned}$$
(4)
where \(\alpha _x= f r_x\) and \(\alpha _y=f r_y\) are scale factors, with f the focal length of the lens in \(\mathrm {mm}\) and \(r_x\) and \(r_y\) the pixel pitch of the sCMOS sensor in \((\mathrm {px}/ \mathrm {mm})\). The parameter s is the pixel skew, characterizing the angle between the x and the y pixel axes, and \([p_x \ p_y]^T\) are the coordinates of the principal point at the intersection between the optical axis and the dewarped image plane (Hartley and Zisserman 2004). The elements of K are often referred to as the intrinsic camera parameters, representing the characteristic properties of the camera itself (Hartley and Zisserman 2004), while \([R \ {\mathbf {t}}]\) are referred to as the extrinsic parameters, representing the position of the camera with respect to the world coordinate system (Fig. 2).
Together, K, R and \({\mathbf {t}}\) define the mapping function F of Eq. 3 and have to be determined for each of the cameras separately. In the following, we use the superscript \(c=1 \cdots 4\) and the notations \(K^{\text {c}}\), \(R^{\text {c}}\) and \({\mathbf {t}}^{\text {c}}\) when we distinguish explicitly between the different cameras. We omit the superscript for clarity when no distinction between the cameras is needed; the details of the algorithms can be found in the Appendices.
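The following lines sketch the pinhole mapping of Eqs. (3)–(4) for a single view; the numerical values of K are hypothetical, chosen only to resemble the order of magnitude of Table 1.

```python
import numpy as np

def project(K, R, t, X):
    """F(X) = p(K [R t] X_tilde): world point X (3,) -> image point (2,)."""
    X_tilde = np.append(X, 1.0)                 # homogeneous world coordinates
    x_tilde = K @ np.column_stack([R, t]) @ X_tilde
    return x_tilde[:2] / x_tilde[2]             # projection p: divide by k

# Hypothetical intrinsics (focal scale, skew, principal point), cf. Table 1.
K = np.array([[5100.0,  -50.0, 1900.0],
              [   0.0, 5050.0, 1500.0],
              [   0.0,    0.0,    1.0]])
R, t = np.eye(3), np.zeros(3)                   # camera at the world origin
print(project(K, R, t, np.array([1.0, 2.0, 10.0])))
```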

2.4 Single-camera calibration

First, the camera matrix K is determined for each of the four separate cameras by calibrating a single camera using the method developed by Zhang (2000). We consider a local coordinate system \({\mathbf {X}}^o=[X^o \; Y^o \; Z^o]^T\) attached to the planar checkerboard in the object domain, where \(Z^o=0\) corresponds to the plane of the checkerboard. In this coordinate system, the J nodes at the intersections between the gridlines have known coordinates \({\mathbf {X}}^o_j=[X_j^o \ Y_j^o \ 0]^T\). The nodes \({\mathbf {X}}^o_j\) are mapped to their images \(\hat{{\mathbf {x}}}_j^n\) following the formalism used in Eq. 3. This transformation can be written as \(p( K [R^n \ {\mathbf {t}}^n] \tilde{{\mathbf {X}}}^o_j)\), where the rotation matrix \(R^n\) and translation vector \({\mathbf {t}}^n\) characterize the position of the checkerboard in the object domain. Geometrically, this corresponds to a rotation and translation of the checkerboard plane in the object domain, followed by a projection on the image plane. The camera calibration matrix K and the positioning of the checkerboard, characterized by \(R^n\) and \({\mathbf {t}}^n\), are determined following Zhang (2000) and are refined by minimizing the following functional:
$$\begin{aligned} \min _{K, R^n, {\mathbf {t}}^n} \sum _{j,n} \left\| \hat{{\mathbf {x}}}^n_j - p( K [R^n \ {\mathbf {t}}^n] \tilde{{\mathbf {X}}}_j^o ) \right\| ^2 . \end{aligned}$$
(5)
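The sketch below illustrates the refinement step of Eq. (5), assuming initial estimates of K and of the checkerboard poses \((R^n, {\mathbf {t}}^n)\) are already available from the closed-form step of Zhang (2000); it is not that closed-form estimation itself. Rotations are parameterized as rotation vectors, and the input arrays are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def unpack(p, n_views):
    """Parameter vector -> K and per-view poses; p = [ax, ay, s, px, py, ...]."""
    K = np.array([[p[0], p[2], p[3]],
                  [0.0,  p[1], p[4]],
                  [0.0,  0.0,  1.0]])
    poses = p[5:].reshape(n_views, 6)            # per view: rvec (3) + t (3)
    return K, poses

def residuals(p, X_board, x_obs):
    """x_obs[n]: (J, 2) observed nodes; X_board: (J, 3) with Z^o = 0."""
    K, poses = unpack(p, len(x_obs))
    out = []
    for n, obs in enumerate(x_obs):
        R = Rotation.from_rotvec(poses[n, :3]).as_matrix()
        t = poses[n, 3:]
        cam = (R @ X_board.T).T + t              # board nodes in camera frame
        proj = (K @ cam.T).T
        out.append((proj[:, :2] / proj[:, 2:3] - obs).ravel())
    return np.concatenate(out)

# p0 = initial guess from the closed-form Zhang estimate, then:
# sol = least_squares(residuals, p0, args=(X_board, x_obs), method='lm')
```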

2.5 Multiple-camera calibration

To complete the calibration over the multiple cameras, we determine the rotation \(R^{\text {c}}\) and the translation \({\mathbf {t}}^{\text {c}}\) representing the position of each of the cameras in the world coordinate system. \(R^{\text {c}}\) and \({\mathbf {t}}^{\text {c}}\) correspond to the extrinsic camera parameters described in Sect. 2.3, and a first estimate is deduced directly from the calibration of single cameras performed in the previous step. Selecting two different calibrated cameras, we use the Kabsch algorithm (Kabsch 1976) to estimate the relative positions of the two cameras by comparing the positions of the checkerboards, \(R^{n,{\text {c}}}\) and \({\mathbf {t}}^{n,{\text {c}}}\), for these two views. Considering all camera pairs, we determine the relative positions between all the views and deduce a first estimate for \(R^{\text {c}}\) and \({\mathbf {t}}^{\text {c}}\); see Appendix B for details. Next, for each of the N calibration images, we estimate the position \(R^n\) and \({\mathbf {t}}^n\) of the checkerboard in the world coordinate system. We do this by averaging the position estimates \(R^{n,{\text {c}}}\) and \({\mathbf {t}}^{n,{\text {c}}}\) obtained from the four separate single-camera calibrations.
In the last step, we compute the final values for \(K^{\text {c}}\), \(R^{\text {c}}\) and \({\mathbf {t}}^{\text {c}}\) by minimizing the reprojection errors from all cameras and calibration images. For this optimization step, we use as initial conditions, the camera matrices \(K^{\text {c}}\) obtained from Sect. 2.4, the positions \(R^{\text {c}}\) and \({\mathbf {t}}^{\text {c}}\) obtained from the Kabsch algorithm and the estimates of the checkerboard positions \(R^n\) and \({\mathbf {t}}^n\). We define the reprojection error \(\varepsilon ^{n,{\text {c}}}_j\) for each of the N calibration images and in each camera view as
$$\begin{aligned} \varepsilon ^{n,{\text {c}}}_j = \frac{1}{\sqrt{ \hat{{\mathcal {A}}}^{n,{\text {c}}}_j}} \left\| \hat{{\mathbf {x}}}^{n,{\text {c}}}_j - p\left( K^{\text {c}} [R^{\text {c}} \ {\mathbf {t}}^{\text {c}}] \begin{bmatrix} R^n&{\mathbf {t}}^n \\ {\mathbf {0}}^T&1 \end{bmatrix} \tilde{{\mathbf {X}}}_j^o \right) \right\| . \end{aligned}$$
(6)
The reprojection error \(\varepsilon ^{n,{\text {c}}}_j\) is normalized by \(\sqrt{\hat{{\mathcal {A}}}^{n,{\text {c}}}_j}\), where \(\hat{{\mathcal {A}}}^{n,{\text {c}}}_j\) is the area in pixels of a single tile of the checkerboard projection in the dewarped image plane (see Appendix C). Hence, \(\varepsilon ^{n,{\text {c}}}_j\) provides a measure of the error relative to the size of the checkerboard. This error is relatively independent of the location of the calibration target within the large depth of field and of the positions of the cameras and, therefore, independent of the apparent size of the checkerboard in the image plane. The final parameters of the calibration function F for each view are obtained from the minimization of the following summation:
$$\begin{aligned} \min _{K^{\text {c}}, R^{\text {c}}, {\mathbf {t}}^{\text {c}}, R^n, {\mathbf {t}}^n} \sum _{{\text {c}}}\sum _{j,n} \left( \varepsilon ^{n,{\text {c}}}_j \right) ^2. \end{aligned}$$
(7)
The resulting camera calibration is shown in Fig. 3.
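A sketch of the composed reprojection entering Eq. (6) is given below, for a single node; the projected tile area \(\hat{{\mathcal {A}}}\) is assumed to be precomputed as in Appendix C. Summing the squared errors over all cameras, images and nodes and passing them to a non-linear least-squares solver yields the final step of Eq. (7).

```python
import numpy as np

def reprojection_error(K_c, R_c, t_c, R_n, t_n, X_o, x_hat, A_tile):
    """Normalized error eps_j^{n,c} for one node X_o (3,) with Z^o = 0."""
    T_board = np.vstack([np.column_stack([R_n, t_n]), [0, 0, 0, 1]])  # 4 x 4
    X_world = T_board @ np.append(X_o, 1.0)       # board frame -> world frame
    x_proj = K_c @ np.column_stack([R_c, t_c]) @ X_world
    x_proj = x_proj[:2] / x_proj[2]               # projection p
    return np.linalg.norm(x_hat - x_proj) / np.sqrt(A_tile)
```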

3 Assessment of the calibration method

In the following, we evaluate the performance of the calibration. First, we report the extrinsic and intrinsic camera parameters in Table 1 and discuss their physical interpretation. Second, we assess the robustness and convergence of the method as a function of the number of calibration images used. Third, we study the spatial accuracy of the calibration and identify the sources of error.

3.1 Intrinsic and extrinsic camera parameters

First, we consider the numerical values of the extrinsic camera parameters obtained from a calibration (see Table 1). As discussed in Sect. 2.3, these parameters characterize the spatial position and orientation of each camera. The reconstructed positions of the cameras are in agreement with the experimental setup: cameras 4 and 1 are positioned above cameras 3 and 2, and cameras 4 and 3 are positioned on the left-hand side while cameras 1 and 2 are located on the right-hand side (as shown in Fig. 1). Furthermore, the relative distances between the cameras are in agreement with the experimental scene, as we deduce from Table 1 a horizontal distance between cameras of \(\varDelta W\approx 5.6 \; \mathrm {m}\) and a vertical distance of \(\varDelta H \approx 1.2 \; \mathrm {m}\). Likewise, the reconstructed camera orientations are consistent, with the cameras on the right-hand side oriented at a positive angle \(\alpha\) and the cameras on the left-hand side oriented at a comparable negative angle \(\alpha\).
In addition to reconstructing the positions of the cameras, the calibration procedure reconstructs the intrinsic camera properties, which we compare to the specifications of the instrumentation. The coefficients of the camera calibration matrix K of Eq. 4 are provided for each camera in Table 1. We focus on the values of the focal length of the lenses and deduce an effective focal length \(f_{\mathrm {eff}}\) directly from the coefficients as
$$\begin{aligned} f_{\mathrm {eff}}=\sqrt{\frac{\alpha _x \alpha _y {\tilde{J}} {\tilde{n}}^2}{ r^2 }}, \end{aligned}$$
(8)
where r is the resolution of the camera sensor in \(\mathrm {px/mm}\), which is known from the camera specifications, \({\tilde{J}}\) represents a correction factor for the image expansion due to optical distortion (see Appendix D) and \({\tilde{n}}=n_{\mathrm {air}} / n_{\mathrm {water}}\) corrects for the magnification due to refraction at the air/water interface. Our calibration yields values for the effective focal length of the four cameras of \(f_{\mathrm {eff}} = 23.73 \pm 0.82 \; \mathrm {mm}\). This reconstruction of the focal length lies within 1–2 \(\%\) of the actual focal length \(f_{\mathrm {lens}} = 24 \; \mathrm {mm}\) of the lenses that were used. Hence, we find that both the extrinsic and intrinsic camera parameters deduced from our calibration procedure are in agreement with the dimensions and characteristics of our experimental setup.
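As a worked illustration of Eq. (8), the following sketch reproduces the order of magnitude of \(f_{\mathrm {eff}}\) from the camera-1 values in Table 1. The pixel pitch r is an assumption here (a \(6.5\ \mu \mathrm{m}\) pixel, typical for a 5.5 megapixel sCMOS sensor) and is not taken from the calibration itself.

```python
import numpy as np

alpha_x, alpha_y = 5166.3, 5056.3     # scale factors from K (px), camera 1
J_tilde = 0.97                        # distortion-map expansion (Appendix D)
n_tilde = 1.0003 / 1.363              # n_air / n_water
r = 1.0 / 6.5e-3                      # px/mm for an assumed 6.5 um pixel

f_eff = np.sqrt(alpha_x * alpha_y * J_tilde * n_tilde**2 / r**2)
print(f"f_eff = {f_eff:.2f} mm")      # ~24 mm, close to the lens focal length
```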
Table 1  Numerical values of the calibration parameters

  Quantity (unit)                                             Camera 1      Camera 2      Camera 3      Camera 4
  \({\tilde{J}}\) \((-)\)                                     0.97          0.98          0.93          0.96
  \(\alpha _x\) \((\mathrm {px})\)                            5166.3        5245.8        4982.9        5116.0
  \(\alpha _y\) \((\mathrm {px})\)                            5056.3        5240.5        4941.3        5050.2
  \(s\) \((\mathrm {px})\)                                    − 48.5        − 63.5        − 189.4       − 249.5
  \(p_x\) \((\mathrm {px})\)                                  1888.8        1655.0        1419.6        1237.0
  \(p_y\) \((\mathrm {px})\)                                  1488.0        980.0         1530.5        1552.8
  \(X^{\text {c}}\) \((\mathrm {m})\)                         2.873         2.878         − 2.872       − 2.87848
  \(Y^{\text {c}}\) \((\mathrm {m})\)                         0.617         − 0.616       − 0.658       0.657
  \(Z^{\text {c}}\) \((\mathrm {m})\)                         0.010         − 0.010       0.009         − 0.009
  \(\alpha\) \(({^\circ})\)                                   9.02          14.00         − 13.04       − 11.51
  \(\beta\) \(({^\circ})\)                                    − 5.18        − 3.93        2.56          − 5.19
  \(\gamma\) \(({^\circ})\)                                   − 2.45        − 0.81        − 2.29        0.76
  \(f_{\mathrm {eff}}\) \((\mathrm {mm})\)                    23.91         24.66         22.67         23.68
  \(\varepsilon ^{n,{\text {c}}}_j\) \((\%)\)                 1.84 ± 1.57   1.85 ± 1.64   2.08 ± 1.72   1.95 ± 1.54
  \((\varepsilon ^{n,{\text {c}}}_j)^*\) \((\mathrm {px})\)   1.63 ± 1.49   1.72 ± 1.62   1.85 ± 1.59   1.84 ± 1.60
  N \((\#)\)                                                  176           153           197           186

From top to bottom: the expansion factor of the distortion mapping \({\tilde{J}}\), the intrinsic camera properties from the matrix K, the extrinsic camera positions \(X, \; Y\) and Z and orientations in pitch–yaw–roll angles \(\alpha\), \(\beta\) and \(\gamma\) (Figs. 1, 2), the effective focal lengths \(f_{\mathrm {eff}}\) deduced from K with Eq. 8, the reprojection errors \(\varepsilon _j^{n,{\text {c}}}\) in percentage and in equivalent pixel dimensions \((\bullet )^*\), and the total number of calibration images N used for each camera

3.2 Convergence and robustness

The camera calibration is obtained by minimizing the sum of the squared reprojection errors \(\varepsilon _j^{n,{\text {c}}}\) over all four cameras, all N calibration images and all nodes on the checkerboard (see Eq. 7). The camera calibration converges to low values of \(\varepsilon _j^{n,{\text {c}}}\) (see Table 1). Here, the average normalized reprojection error is on the order of \(\varepsilon _j^{n,{\text {c}}} \sim 2 \%\) of the size of a checkerboard tile, which corresponds to an error of less than \(1 \ \mathrm {cm}\).
The camera calibration requires a minimum of two non-coplanar checkerboard images (Zhang 2000). Increasing the number of calibration images increases the sampling of the measurement volume and, therefore, improves the reliability of the calibration. We further characterize the performance of the method as a function of the number of calibration images used, by randomly selecting different subsets of checkerboard images.
We first consider the effective focal length \(f_{\mathrm {eff}}\) of Eq. 8, deduced from the matrix K, to characterize the quality of the reconstructed positions of the checkerboards and the cameras. In Fig. 4, we select different subsets of calibration images ranging from \(N=2\) to \(N=50\) images and compute \(f_{\mathrm {eff}}\) from the associated calibration. Figure 4a represents the ratio between \(f_{\mathrm {eff}}\) and \(f_{\mathrm {lens}}\) as a function of N. With a low number of \(N=2\) to 10 calibration images, the ratio \(f_{\mathrm {eff}} / f_{\mathrm {lens}}\) is far from one and the focal length is under- or over-estimated by up to \(50\,\mathrm {\%}\), indicating an unreliable calibration. Increasing the number of calibration images to \(N=15\), the estimated focal length \(f_{\mathrm {eff}}\) approaches \(f_{\mathrm {lens}}\), which represents a clear improvement of the calibration. Increasing N beyond \(N=15\), the ratio \(f_{\mathrm {eff}} / f_{\mathrm {lens}}\) does not converge significantly further (Fig. 4a).
Second, we consider the reprojection errors \(\varepsilon ^{n,{\text {c}}}_j\) in Fig. 4b. For a low number of \(N=2\) to 10 calibration images, the average reprojection error is as high as \(\varepsilon ^{n,{\text {c}}}_j \sim 50\) to \(60 \ \%\). Increasing the number of calibration images from \(N=15\) to 50 yields a further decrease of the normalized reprojection error from \(\varepsilon ^{n,{\text {c}}}_j \sim 5\) to \(2 \ \%\) (Fig. 4b and inset). This shows that 15 calibration images are sufficient to achieve a valid calibration. Further increasing the number of calibration images improves the convergence of the camera calibration, while the ratio \(f_{\mathrm {eff}} / f_{\mathrm {lens}}\) remains at a value close to 1.

3.3 Spatial accuracy of the camera calibration

By inverting the camera matrix K, one can directly associate an optical ray in the object domain to a point in the dewarped image plane. For an ideal calibration, the four optical rays associated with the images of the same point on each of the four cameras should intersect at a unique location in the object domain. In practice, the four optical rays are skew lines and do not intersect at a single point. Here, we characterize the spatial accuracy of our camera calibration by estimating the skewness among the four optical rays.
For this, we use the nodes identified at gridline intersections on the N calibration images. We proceed by evaluating the four optical rays associated with each node of each calibration image. We then triangulate the location of each node by finding the point \({\mathbf {X}}_j^n\) in the object domain that minimizes the sum of the squared distances from the point to the four optical rays. We report for each node the skewness \(s_j^n\) as the average distance from the triangulated location \({\mathbf {X}}_j^n\) to the four optical rays (see Appendix E for more detail).
The calibration images were acquired over the entire depth of the tank and are used to characterize the spatial accuracy by reporting the skewness as a function of the position along the Z-axis of the world coordinate system (Fig. 5a). We find the skewness to be mostly uniform over the large depth of the measurement volume, \(Z=5{-}25 \ \mathrm {m}\). Our calibration yields a high spatial accuracy, characterized by an average skewness of less than \(1 \ \mathrm {cm}\). Only a slight increase in the skewness can be observed towards the back of the aquarium, although the average skewness still remains below \(1 \ \mathrm {cm}\). This small increase is due to the decrease in spatial resolution and the decrease in the angles between the optical paths from the different views.
The variations in skewness over the height and width of the tank are represented in Fig. 5b, c. Figure 5b is a map of the skewness in an XY-plane at the front of the aquarium, while Fig. 5c provides a similar map at the back of the aquarium. Both are qualitatively similar, and the skewness remains small over the depth of the tank. For reference, Fig. 5d, e shows the sampling density, i.e. the number of checkerboard images used at each location to compute the calibration. It is noteworthy that the center of the tank was sampled better than the sides and the bottom. We find that the skewness, and hence the spatial accuracy, is relatively constant over a large part of the measurement volume, but increases towards the edges of the tank. This is likely a result of the lower sampling away from the camera center, as well as unresolved optical distortions at the edges of the image plane.

4 Application to field experiments

We demonstrate the effectiveness of the calibration method by performing three-dimensional measurements in the large aquarium at the Rotterdam Zoo. We begin by focusing on an element which is easily identifiable on each camera view and show that we can accurately triangulate the position. We end our validation by tracking the three-dimensional position of fish of various sizes that are freely swimming over the depth of the tank.
Large predatory fish in the aquarium, such as sharks, swim through the entire aquarium. They provide a good target to evaluate the robustness of the camera calibration, as their sharply tipped fins provide easily recognizable and well-defined features. Figure 6 shows how accurately such features can be triangulated with our calibration method. We first identify the vertex of the right-hand side pectoral fin of a shark in each camera view. Similar to Sect. 3.3, we directly associate the vertices, identified in each of the four images, with four optical rays in the object domain. We triangulate the position of the vertex and find a skewness of only \(0.35 \ \mathrm {cm}\), which demonstrates the accuracy of the method. This small skewness is illustrated in Fig. 6, where, for each camera view, the optical rays associated with the other camera views are projected on the image plane onto the epipolar lines (Hartley and Zisserman 2004). The epipolar lines intersect nearly perfectly at the identified vertex of the shark fin. The inset in Fig. 6 presents a closeup view from which one can infer the reprojection error from the slight skew between the optical rays. The associated reprojection error is only \(1.11 \ \mathrm {px}\).
Further, we demonstrate that our calibration supports the simultaneous tracking of several fish over large distances by following a small group of six tuna (Fig. 7). By triangulation, we reconstruct the three-dimensional, time-resolved position of the group (Fig. 7b). Using an in-house automated tracking code, we track the six individuals as they swim away from the cameras over a large distance of \(7 \ \mathrm {m}\). Together with the triangulated shark fin, this experiment demonstrates the robustness and accuracy of the calibration and its potential use in large-scale field experiments. The calibration supports accurate triangulation over a large distance along the depth of field. This makes it of interest for further applications, such as the tracking of particles (Schanz et al. 2015), birds (Attanasi et al. 2015), insects (van der Vaart et al. 2019), fish and other animals, and the study of fluid motion using tomographic methods (Elsinga et al. 2006) in large-scale field experiments.

5 Conclusion

Here, we have described and characterized a versatile, accurate and robust calibration method, which supports the three-dimensional tracking and triangulation of multiple fish. Our method is of particular interest for large-scale field experiments, where spatial access to the measurement volume is limited and laboratory equipment to precisely position the target cannot be installed. The method does not require a large calibration target to be moved with known displacements. Rather, we use a freely moving checkerboard calibration target, which is much smaller than the measurement volume itself. The calibration mapping uses the framework of projective geometry, which assumes linear ray tracing. It combines a pinhole camera model for the linear ray tracing with a non-linear camera mapping commonly used in experimental fluid mechanics to correct for optical distortion. All the algorithms necessary for the implementation of the calibration method are described here, with details provided in the Appendices.
The calibration method has been implemented to obtain an accurate and consistent multiple-view camera calibration in the large-scale aquarium of the Rotterdam Zoo. Here, the calibration target was positioned arbitrarily by a team of divers. We have characterized the spatial accuracy of the calibration to be less than \(2 \%\) of the size of a checkerboard tile, corresponding to \(1 \ \mathrm {cm}\) over the entire measurement volume that spans several tens of meters. When correcting for the difference in refractive index of air and seawater, we find that our method reconstructs both the camera position and the intrinsic properties of the camera such as the focal length of the lenses. Selecting different subsets of calibration images in a quality assessment, we show that in our case 15 calibration images are sufficient to achieve a valid calibration. Increasing the number of checkerboard images and better sampling the measurement volume further improves the spatial accuracy. The resulting camera calibration allows accurate imaging and three-dimensional tracking over a large measurement volume, here with application to biological fluid mechanics and the tracking of fish.

Acknowledgements

We would like to thank the Rotterdam zoo for providing access to the large tank in the Oceanium. In particular, we would like to thank Mark de Boer for providing access to the facilities and Martijn van der Veer for coordinating the diving activities. This work was supported by the Netherlands Organisation for Scientific Research (NWO) with project number 824.15.021 and by the H2020 European Research Council (Grant agreement no. 716712).

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.

Availability

The implementation of the camera calibration method in Matlab and an accompanying data set is freely available through Muller et al. (2019).
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

A: Optical distortion across an interface

Refraction across an interface between two media of different refractive index is governed by Snell’s law:
$$\begin{aligned} \frac{\sin (\theta _2)}{\sin (\theta _1)}={\tilde{n}}, \end{aligned}$$
where \({\tilde{n}}=n_1/n_2\) with n the refractive index, \(\theta _1\) is the incident angle and \(\theta _2\) is the exit angle to the normal vector on the interface. For a flat interface, the image plane of a camera can be warped parallel to the interface by the linear image mapping H:
$$\begin{aligned} H=\begin{bmatrix} A&{\mathbf {p}} \\ {\mathbf {v}}^T&1 \end{bmatrix}, \end{aligned}$$
where A is a \(2 \times 2\) matrix that describes an affine image transformation that changes the aspect ratio and skew of the image, \({\mathbf {p}}\) centers the image at the principal ray, and \({\mathbf {v}}^T\) describes a perspective change (Hartley and Zisserman 2004). With these notations, the distortion mapping can be written as
$$\begin{aligned} \hat{{\mathbf {x}}}=\frac{A{\mathbf {x}}+{\mathbf {p}}}{\sqrt{\lambda \Vert A{\mathbf {x}}+{\mathbf {p}} \Vert _2^2+({\mathbf {v}}^T {\mathbf {x}} + 1)^2}}, \end{aligned}$$
where \(\lambda =(1-{\tilde{n}}^2)/f\), with f the focal length in pixel dimensions.
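For illustration, the mapping above can be evaluated as in the following sketch; the values of H (through A, \({\mathbf {p}}\) and \({\mathbf {v}}\)) and of the focal length are hypothetical.

```python
import numpy as np

def interface_dewarp(A, p, v, lam, x):
    """x_hat = (A x + p) / sqrt(lam |A x + p|^2 + (v^T x + 1)^2)."""
    num = A @ x + p
    den = np.sqrt(lam * np.dot(num, num) + (np.dot(v, x) + 1.0) ** 2)
    return num / den

n_tilde = 1.0003 / 1.363                  # n_air / n_water
A, p, v = np.eye(2), np.zeros(2), np.array([1e-4, 0.0])   # hypothetical H
lam = (1.0 - n_tilde**2) / 5100.0         # assumed focal length in px
print(interface_dewarp(A, p, v, lam, np.array([100.0, 50.0])))
```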

B: Relative camera positioning from calibrated views

We determine the rigid body motion from a view c to another view \(c'\) by first finding the best rotation using the Kabsch algorithm (Kabsch 1976). To this end, we compute the cross-covariance matrix \(A=\sum _{n,j} {\mathbf {X}}^{n,{\text {c}}'}_j({\mathbf {X}}^{n,{\text {c}}}_j)^T\) using the paired object coordinates \({\mathbf {X}}^{n,{\text {c}}}_j\) and \({\mathbf {X}}^{n,{\text {c}}'}_j\) from multiple checkerboards. We then compute the singular value decomposition of the cross-covariance matrix, \(A=USV^T\), and extract the best rotation matrix \(R^{{\text {c}},{\text {c}}'}\) as
$$\begin{aligned} R^{c,c'}=U \begin{bmatrix} 1&\quad 0&\quad 0 \\ 0& \quad 1& \quad 0 \\ 0& \quad 0& \quad \det {(UV^T)} \end{bmatrix} V^T . \end{aligned}$$
Knowing \(R^{{\text {c}},{\text {c}}'}\), we use the rigid body motion \({\mathbf {X}}^{n,{\text {c}}'}_j=R^{{\text {c}},{\text {c}}'}{\mathbf {X}}^{n,{\text {c}}}_j+{\mathbf {t}}^{{\text {c}},{\text {c}}'}\) to compute the translation vector \({\mathbf {t}}^{{\text {c}},{\text {c}}'}\) between the views.
From the relative positions between the views, we find a unique extrinsic camera positioning \(R^{\text {c}}\), \({\mathbf {t}}^{\text {c}}\) by taking the first view as reference and solving the minimization problem for the translation \({\mathbf {t}}^{\text {c}}\),
$$\begin{aligned} \begin{aligned}&\min _{{\mathbf {t}}^{\text {c}}} \sum _{{\text {c}} \ne {\text {c}}' ,{\text {c}}'}&\left\| R^{{\text {c}},{\text {c}}'}{\mathbf {t}}^{{\text {c}}'} - {\mathbf {t}}^{{\text {c}},{\text {c}}'} -{\mathbf {t}}^{{\text {c}}} \right\| ^2 \\&\ \text {subject to}&{\mathbf {t}}^1={\mathbf {0}} \end{aligned} \end{aligned}$$
and the rotation \(R^{\text {c}}\),
$$\begin{aligned} \begin{aligned}&\min _{R^{{\text {c}},{\text {c}}'}} \sum _{{\text {c}} \ne {\text {c}}' ,{\text {c}}'}&\left\| R^{{\text {c}},{\text {c}}'}R^{{\text {c}}'} - R^{\text {c}} \right\| _F^2 \\&\ \text {subject to}&R^1=I , \end{aligned} \end{aligned}$$
where \(\Vert \cdot \Vert _F\) is the Frobenius norm, which sums the squares of all matrix components, and I is the identity matrix. These linear least squares problems have a direct solution using methods described in Boyd and Vandenberghe (2004) and generalize to any number of views (Fig. 8).
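A compact sketch of the Kabsch step follows. Unlike the summation above, this version subtracts the point centroids before forming the cross-covariance matrix, which is the standard formulation, and then recovers the translation from the rigid-body relation; the input point sets are synthetic.

```python
import numpy as np

def kabsch(X_c, X_cp):
    """Rigid motion X_cp = R @ X_c + t from paired 3-D points of shape (M, 3)."""
    Xc0, Xp0 = X_c - X_c.mean(0), X_cp - X_cp.mean(0)   # remove centroids
    U, _, Vt = np.linalg.svd(Xp0.T @ Xc0)               # cross-covariance SVD
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])      # enforce det(R) = +1
    R = U @ D @ Vt
    t = X_cp.mean(0) - R @ X_c.mean(0)                  # rigid-body relation
    return R, t

# Synthetic check: points rotated by 90 degrees about z and shifted.
rng = np.random.default_rng(1)
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
P = rng.normal(size=(20, 3))
R, t = kabsch(P, P @ R_true.T + t_true)
print(np.allclose(R, R_true), np.allclose(t, t_true))   # True True
```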

C: Projected area of a planar object

A planar object with area \({\mathcal {A}}_{\mathrm {plane}}\) and local plane coordinates \({\mathbf {X}} =[X \; Y \; 0]^T\) is mapped to the dewarped image plane by \(\hat{{\mathbf {x}}}=p( K [R \ {\mathbf {t}}] \tilde{{\mathbf {X}}})\). The projected area in the dewarped image plane \({\mathcal {A}}_{\mathrm {dewarped}}\) can be computed by integrating the determinant of the Jacobian of the projection map:
$$\begin{aligned} {\mathcal {A}}_{\mathrm {dewarped}}=\int _{{\mathcal {A}}_{\mathrm {plane}}} \vert \nabla p(X,Y) \vert \, \mathrm {d}X \, \mathrm {d}Y . \end{aligned}$$
This integral can be evaluated using standard numeric integration techniques.

D: Magnification of the distortion map

An image is dewarped according to the distortion map \(\hat{{\mathbf {x}}}=m({\mathbf {x}})\). Similar to Appendix C, the area deformation of the distortion map can be computed by integrating the determinant of the Jacobian:
$$\begin{aligned} {\mathcal {A}}_{\mathrm {dewarped}}=\int _{{\mathcal {A}}_{\mathrm {image}}} \vert \nabla {\mathbf {m}}({\mathbf {x}}) \vert {{\text {d}}x {\text {d}}y} . \end{aligned}$$
This integral can be used to correct the light intensity per pixel area for the varying magnification of the distortion map. The average area expansion of the map is found by integration over the complete image:
$$\begin{aligned} {\tilde{J}}=\frac{1}{{\mathcal {A}}_{\mathrm {image}}} \int _{{\mathcal {A}}_{\mathrm {image}}} \vert \nabla {\mathbf {m}}({\mathbf {x}}) \vert {{\text {d}}x {\text {d}}y} . \end{aligned}$$
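The average expansion \({\tilde{J}}\) can be estimated numerically as in the following sketch, here with a midpoint rule on a coarse pixel grid and central finite differences for the Jacobian of the polynomial map of Eq. (1); the grid spacing and difference step are arbitrary choices for the example.

```python
import numpy as np

def dewarp(c, x):
    """Polynomial distortion map of Eq. (1); x is (M, 2)."""
    X, Y = x[:, 0], x[:, 1]
    B = np.stack([X**2, X*Y, Y**2, X**3, X**2*Y, X*Y**2, Y**3], axis=1)
    return x + B @ c.reshape(7, 2)

def mean_expansion(c, width, height, step=16, h=1e-3):
    """Average Jacobian determinant of the map over the image (midpoint rule)."""
    xs = np.arange(step / 2, width, step)
    ys = np.arange(step / 2, height, step)
    X, Y = np.meshgrid(xs, ys)
    pts = np.stack([X.ravel(), Y.ravel()], axis=1)
    dx = (dewarp(c, pts + [h, 0.0]) - dewarp(c, pts - [h, 0.0])) / (2 * h)
    dy = (dewarp(c, pts + [0.0, h]) - dewarp(c, pts - [0.0, h])) / (2 * h)
    detJ = dx[:, 0] * dy[:, 1] - dx[:, 1] * dy[:, 0]    # |grad m| per sample
    return detJ.mean()

print(mean_expansion(np.zeros(14), 2560, 2160))   # identity map: J_tilde = 1.0
```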

E: Point triangulation and skewness

A point \({\mathbf {X}}\) in the object domain is triangulated by minimizing the point-line distance to the optical rays from the different cameras. The optical ray associated with each view is computed by inverting the camera calibration matrix, \(K^{\text {c}} \backslash \tilde{{\mathbf {x}}}^{\text {c}} k^{\text {c}}\), where the backslash denotes a linear solve, \(k^{\text {c}}\) scales the depth of field and \(\tilde{{\mathbf {x}}}=[{\hat{x}} \ {\hat{y}} \ 1]^T\). The object location is then triangulated by the linear least squares problem:
$$\begin{aligned} \min _{{\mathbf {X}},k^{\text {c}}} \sum _{{\text {c}}} \left\| K^{\text {c}} \backslash \tilde{{\mathbf {x}}}^{\text {c}} k^{\text {c}} - [R^{\text {c}} \ {\mathbf {t}}^{\text {c}} ] \tilde{{\mathbf {X}}} \right\| ^2. \end{aligned}$$
We then compute the skewness s as the average point–line distance between the optical rays and the object location by
$$\begin{aligned} s = \frac{1}{C} \sum _{\text {c}}^C \left\| K^{\text {c}} \backslash \tilde{{\mathbf {x}}}^{\text {c}} k^{\text {c}} - [R^{\text {c}} \ {\mathbf {t}}^{\text {c}} ] \tilde{{\mathbf {X}}} \right\| . \end{aligned}$$
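The triangulation and the skewness can be computed jointly from one linear least-squares system, as in the following sketch; the input lists of camera matrices, poses and dewarped image points are assumed given.

```python
import numpy as np

def triangulate(Ks, Rs, ts, x_hats):
    r"""Solve min over X, k^c of sum_c |K^c \ x~^c k^c - (R^c X + t^c)|^2."""
    C = len(Ks)
    M = np.zeros((3 * C, 3 + C))              # unknowns: X (3), one k per view
    b = np.zeros(3 * C)
    for c in range(C):
        ray = np.linalg.solve(Ks[c], np.append(x_hats[c], 1.0))  # K^c \ x~^c
        rows = slice(3 * c, 3 * c + 3)
        M[rows, :3] = -Rs[c]                  # ... - R^c X
        M[rows, 3 + c] = ray                  # k^c (K^c \ x~^c) ...
        b[rows] = ts[c]                       # ... = t^c
    sol, *_ = np.linalg.lstsq(M, b, rcond=None)
    X, res = sol[:3], (M @ sol - b).reshape(C, 3)
    s = np.linalg.norm(res, axis=1).mean()    # skewness: mean residual distance
    return X, s
```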
References
Adrian RJ, Westerweel J (2011) Particle image velocimetry, 1st edn. Cambridge University Press, Cambridge (ISBN: 978-0-521-44008-0)
Boyd S, Vandenberghe L (2004) Convex optimization, 2nd edn. Cambridge University Press, Cambridge (ISBN: 9780521833783)
Fryer JG, Brown DC (1986) Lens distortion for close-range photogrammetry. Photogramm Eng Remote Sens 52(1):51–58
Hartley RI, Zisserman A (2004) Multiple view geometry in computer vision, 2nd edn. Cambridge University Press, Cambridge (ISBN: 0521540518)
Schanz D, Schröder A, Gesemann S, Wieneke B (2015) 'Shake the box': Lagrangian particle tracking in densely seeded flows at high spatial resolution. In: International symposium on turbulence and shear flow phenomena
Sedlazeck A, Koch R (2012) Perspective and non-perspective camera models in underwater imaging—overview and error analysis. In: Dellaert F, Frahm JM, Pollefeys M, Leal-Taixé L, Rosenhahn B (eds) Outdoor and large-scale real-world scene analysis, vol 37. Springer, Berlin, pp 212–242
Soloff SM, Adrian RJ, Liu ZC (1997) Distortion compensation for generalized stereoscopic particle image velocimetry. Meas Sci Technol 8(12):1441–1454
Metadata
Title: Calibration of multiple cameras for large-scale experiments using a freely moving calibration target
Authors: K. Muller, C. K. Hemelrijk, J. Westerweel, D. S. W. Tam
Publication date: 01.01.2020
Publisher: Springer Berlin Heidelberg
Published in: Experiments in Fluids / Issue 1/2020
Print ISSN: 0723-4864
Electronic ISSN: 1432-1114
DOI: https://doi.org/10.1007/s00348-019-2833-z
