Published in: International Journal of Computer Vision, Issue 1/2018

Open Access 20.08.2017

Baseline and Triangulation Geometry in a Standard Plenoptic Camera

Authors: Christopher Hahne, Amar Aggoun, Vladan Velisavljevic, Susanne Fiebig, Matthias Pesch

Abstract

In this paper, we demonstrate light field triangulation to determine depth distances and baselines in a plenoptic camera. Advances in micro lenses and image sensors have enabled plenoptic cameras to capture a scene from different viewpoints with sufficient spatial resolution. While object distances can be inferred from disparities in a stereo viewpoint pair using triangulation, this concept remains ambiguous when applied in the case of plenoptic cameras. We present a geometrical light field model allowing the triangulation to be applied to a plenoptic camera in order to predict object distances or specify baselines as desired. It is shown that distance estimates from our novel method match those of real objects placed in front of the camera. Additional benchmark tests with an optical design software further validate the model’s accuracy with deviations of less than ±0.33% for several main lens types and focus settings. A variety of applications in the automotive and robotics field can benefit from this estimation model.
Notes
Communicated by Yasuhiro Mukaigawa.
This research was supported in part by the EU as Project 3D VIVANT under EU-FP7 ICT-2010-248420.

1 Introduction

Computer vision has long strived to recreate human visual perception. Wheatstone’s fundamental observations (Wheatstone 1838) state that a set of only two adjacent cameras suffices to imitate human binocular vision. Using these two images in conjunction with a stereo display technique, e.g. stereoscopic glasses (Huang et al. 2015), allows depth to be reproduced as perceived by human eyes. With regard to the location in object space, however, such a stereo vision system offers much more freedom than human perception, since the distance between the cameras, called the baseline, may vary. This flexibility makes it possible to adapt camera stereoscopy to particular depth scenarios. For example, triangulation is used in stellar parallax to measure the distance to stars (Hirshfeld 2001). What applies to the macroscopic universe may also prove useful for a microscope.
However, miniaturising multiple stereo setups to the level required by microscopes poses a problem for hardware fabrication, since lens diameters restrict the baseline gaps between cameras. As an alternative, a Micro Lens Array (MLA) may be placed in front of the image sensor of an otherwise conventional microscope (Levoy et al. 2006; Broxton et al. 2013), a setup generally known as a light field camera. The obvious attempt to regard the micro lens pitch as the baseline proves impractical, as optical parameters of the objective lens affect the light field’s geometry (Hahne et al. 2014a, b).
The light field camera, also known as the plenoptic camera, entered the field of computer vision when Adelson and Wang (1992) published an article that coined the term plenoptic, derived from Latin and Greek and meaning “full view”. The authors were the first to computationally generate a depth map by solving the stereo correspondence problem on footage from a plenoptic camera and concluded that its baseline is confined to the main lens’ aperture size. Although Adelson and Wang could not provide methods to acquire quantitative baseline measures, they predicted the baseline to be relatively small. When Levoy and Hanrahan (1996) proposed a concise 4-D light field notation, each ray in the light field could be represented by merely four coordinates (u, v, s, t) obtained from the ray’s intersections with two two-dimensional (2-D) planes placed behind one another. In a plenoptic camera, these sampling planes may be represented by the MLA and the image sensor. The maximum directional light field resolution is captured when the micro lenses are focused to infinity (Ng 2006), which is accomplished by placing the MLA stationary at one focal length in front of the sensor. This plenoptic camera type has been made commercially available by Lytro Inc. (2012) and is capable of synthetically refocusing images (Ng et al. 2005; Fiss et al. 2014; Hahne et al. 2016).
Research has shown that spatial and directional resolution can be traded off by shifting the sensor away from the MLA’s focal plane, which entails different image synthesis approaches (Lumsdaine and Georgiev 2008; Georgiev et al. 2006). To distinguish between these optical setups, Lytro’s camera was later named the Standard Plenoptic Camera (SPC) in a publication by Perwass and Wietzke (2012), who devised a more complex MLA featuring different micro lens types. The spatio-angular trade-off in a plenoptic camera is determined by the diameter, focal length, image position and packing of the micro lenses, as well as by the sensor pixel pitch, which makes it part of the optical hardware design.
Over the years, several studies have provided different methods to acquire disparity maps from an SPC (Heber and Pock 2014; Bok et al. 2014; Jeon et al. 2015; Tao et al. 2017). To the best of our knowledge, however, researchers have not dealt with the estimation of an object’s distance using triangulation on the basis of disparity maps obtained from a light field camera. One reason might be that baselines are required, which are not obvious in the case of plenoptic cameras, as the optics involved is more complex than in conventional stereoscopy. Attempts to estimate a plenoptic camera’s baseline were initially addressed in publications by our research group (Hahne et al. 2014a, b), which provided validation through simulation only. Moreover, main lens pupil positions were ignored in that work, yielding large deviations when estimating the distance to refocused image planes obtained from an SPC (Hahne et al. 2016). It is thus expected that our previous triangulation scheme (Hahne et al. 2014a, b) entails errors, which is subject to investigation in the experiments presented here. A more recent study by Jeon et al. (2015) also proposed a baseline estimation method, but without giving details on the optical groundwork and without validation.
In this paper, we propose a refined optics-geometrical model for light field triangulation and estimate object distances captured by an SPC. Our plenoptic model is the first to pinpoint virtual cameras along the entrance pupil of the objective lens. Verification is accomplished through real images from a custom-built SPC and a ray tracing simulator (Zemax 2011) for a quantitative deviation assessment. A top-level overview of the processing pipeline for experimental validation is given in Fig. 1. By doing so, we obtain much more accurate baseline and object distance results than by our previous method (Hahne et al. 2014a) and Jeon et al. (2015). The proposed concept will prove to be valuable in fields where stereo vision is traditionally used.
This paper is organised in the following way. Section 2 briefly reviews the binocular vision concept by means of its geometry in order to recall stereo triangulation. This is followed by a step-wise development of an SPC ray model in Sect. 3, where the extraction of viewpoint images from a raw SPC capture is also demonstrated. Experimental work is presented in Sect. 4, which aims to assess the claims made in Sect. 3 by measuring baseline and tilt angle from a disparity map analysis and a ray tracing simulation (Zemax 2011). Results are summarised and discussed in Sect. 5.

2 Stereoscopic Triangulation

2.1 Coplanar Stereo Cameras

The SPC can be seen as a complex derivative of a stereo vision system. The stereo triangulation concept is presented hereafter to serve as a groundwork.
Figure 2 illustrates a stereoscopic camera setup where sensors are coplanar. The depicted setup may be parameterised by the spacing of the cameras’ axes, denoted as B for baseline, the cameras’ image distance b and the optical centres \(O_L\), \(O_R\) for each camera, respectively. As seen in the diagram, an object point is projected onto both camera sensors indicated by orange dots. With regard to corresponding image centres, the position of the image point in the left camera clearly differs from that in the right. This phenomenon is known as parallax and results in a relative displacement of respective image points from different viewpoints. To measure this displacement, the horizontal disparity \(\Delta x\) is introduced given by \(\Delta x = x_R - x_L\), where \(x_R\) and \(x_L\) denote horizontal distances from each projected image point to the optical image centre. Nowadays, image detectors are composed of discrete photosensitive cells making it possible to locate and measure \(\Delta x\). The disparity computation is a well studied task (Marr and Poggio 1976; Yang et al. 1993; Bobick and Intille 1999) and is often referred to as solving the correspondence problem. Algorithmic solutions to this are applied to a set of points in the image rather than a single one and thus yield a map of \(\Delta x\) values, which indicate the depth of a captured scene.
An object point’s depth distance Z can be directly fetched from parameters in Fig. 2. As highlighted with a dark tone of grey, \(\Delta x\) may represent the base of any acute scalene triangle with b as its height. Another triangle spanned by the base B and height Z is a scaled version of it and shown in light grey. This relationship relies on the method of similar triangles and can be written as an equality of ratios
$$\begin{aligned} \frac{Z}{B} = \frac{b}{\Delta x} . \end{aligned}$$
(1)
To infer the depth distance Z, Eq. (1) may be rearranged to
$$\begin{aligned} Z = \frac{b \times B}{\Delta x}. \end{aligned}$$
(2)
As seen from these equations, it is feasible to retrieve information about the depth location Z. Likewise, for a constant \(\Delta x\), decreasing the baseline B shrinks the object distance Z. In cases where the depth range lies at a far distance, it is thus recommended to aim for a large baseline. Note that this relationship and the corresponding mathematical statements only hold for cases where the optical axes of \(O_L, O_R\) are aligned in parallel.

2.2 Tilted Stereo Cameras

Reasonable scenarios exist in which a camera’s optical axis is tilted with respect to the other. In such a case, the principle of similar triangles does not apply in the same manner as in Eq. (1).
Taking the left camera as the orientation reference, the right lens \(O_R\) is tilted as shown in Fig. 3. In this case, perspective image rectification is commonly employed to correct for non-coplanar stereo vision setups (Burger and Burge 2009). Iocchi (1998) concludes that if the rotation occurs around the y-axis, both optical axes lie in the xz plane and intersect at a point \(Z_0\), whereas the image planes of both cameras are still regarded as parallel. In traditional stereo vision this introduces deviations, so that Iocchi’s (1998) method serves as a first-order approximation for small rotation angles in the absence of image processing. As demonstrated in Sect. 3.2, this approach is, however, suitable for our plenoptic triangulation model, where the imaginary sensor planes of the virtual cameras are coplanar whilst their optical axes may be non-parallel. Let \(\varPhi \) be the rotation angle; the laws of trigonometry then allow us to write
$$\begin{aligned} Z_0=\frac{B}{\tan (\varPhi )} \end{aligned}$$
(3)
and
$$\begin{aligned} Z = \frac{b \times B}{\Delta x + \frac{b \times B}{Z_0}} \end{aligned}$$
(4)
which may be shortened to
$$\begin{aligned} Z = \frac{b \times B}{\Delta x + b \times \tan (\varPhi )} \, \end{aligned}$$
(5)
after substituting for \(Z_0\). This approximation suffices to estimate the depth Z for small rotation angles \(\varPhi \) in stereoscopic systems without the need for image rectification.
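
As a brief illustration of Eqs. (2) and (5), the following sketch (in Python, with helper and variable names chosen here for convenience) evaluates the depth of an object from its disparity; a zero tilt angle recovers the coplanar case.

```python
import math

def stereo_depth(b, B, dx, phi=0.0):
    """Depth Z via Eq. (5); phi = 0 reduces to the coplanar case of Eq. (2).

    b   -- image distance of the cameras
    B   -- baseline between the optical centres (same length unit as b)
    dx  -- horizontal disparity measured on the sensor (same unit as b)
    phi -- relative rotation angle of the right camera in radians
    """
    return (b * B) / (dx + b * math.tan(phi))

# Coplanar case: b = 50 mm, B = 100 mm, dx = 0.5 mm  ->  Z = 10,000 mm
print(stereo_depth(50.0, 100.0, 0.5))
# A small tilt moves the zero-disparity plane from infinity to Z_0 = B / tan(phi)
print(stereo_depth(50.0, 100.0, 0.0, phi=1e-3))   # ~100,000 mm = Z_0
```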

3 SPC Ray Model

To conceptualise a light field ray model for an SPC, we start tracing rays from the sensor side towards object space. For simplification, we consider chief rays only and follow their path from each sensor pixel centre in the micro image domain u to the optical centre of its corresponding micro lens \(s_j\) with lens index j. In an SPC, the spacing between the MLA and the image sensor plane amounts to the micro lens focal length \(f_s\). Figure 4 visualises chief rays travelling through a micro lens and the objective lens, indicating Micro Image Centres (MICs). With the aid of ray geometry, an MIC is found by a chief ray connecting the optical centre of a micro lens with that of the main lens. MICs play a key role in realigning a light field from an SPC and are locally obtained by \(c={(M - 1)}/{2}\), where M denotes the one-dimensional (1-D) micro image resolution, which is assumed to be consistent across micro images. Discrete micro image points in the horizontal direction are then indexed by \({c+i}\), where \({i \in [-c,c]}\), such that 1-D micro image samples are given as \(u_{c+i,j}\).
In earlier publications (Hahne et al. 2014a, b), it was assumed that MICs lie on the optical axes of corresponding micro lenses. However, it has been argued that this assumption would only be true if the distance between objective lens and MLA were infinitely large (Dansereau 2014). Due to the finite separation, MICs are displaced from their micro lens optical axes. A more accurate approach in estimating MIC positions is to model chief rays in a way that they connect optical centres of micro and main lenses (Dansereau et al. 2013). In Fig. 4b we further refine this hypothesis by regarding the centre of an exit pupil \(A'\) to be the origin from which MIC chief rays arise. Detecting MICs correctly is essential for our geometrical light ray model because MICs serve as reference points in the viewpoint image synthesis.
Figure 5 depicts our more advanced model that combines statements made about light rays’ paths in an SPC. For clarity, the main lens U is depicted as a thin lens meaning that the exit pupil centre coincides with the optical centre. However, the distinction is maintained in the following.

3.1 Viewpoint Extraction

It has been shown in Adelson and Wang (1992), Ng (2006), Dansereau (2014), Bok et al. (2014) that extracting viewpoints from an SPC can be attained by collecting all pixels sharing the same respective micro image position. To comply with provided notations, a 1-D sub-aperture image \(E_{i}\left[ s_j\right] \) with viewpoint index i is computed with
$$\begin{aligned} E_{i}\left[ s_j\right] = E_{f_s}\left[ s_j, \, u_{c+i}\right] \end{aligned}$$
(6)
where u and c have been omitted in the subscript of \(E_{i}\) since i is a sufficient index for sub-aperture images in the 1-D row. Equation (6) implies that the effective viewpoint resolution equals the number of micro lenses. Figure 6 depicts the reordering process producing 2-D sub-aperture images \(E_{(i,g)}\) by means of index variables \(\left[ s_j, \, t_h\right] \) and \(\left[ u_{c+i} \, , \, v_{c+g}\right] \) for spatial and directional domains, respectively. As can be seen from colour-highlighted pixels, samples at a specific micro image position correspond to the respective viewpoint location in a camera array.
Since raw SPC captures do not naturally feature the \(E_{f_s}\left[ s_j, \, u_{c+i}\right] \) index notation, it is convenient to define an index translation formula, considering the light field photograph to be of two regular sensor dimensions \(\left[ x_k, \, y_l\right] \) as taken with a conventional sensor. In the horizontal dimension indices are converted by
$$\begin{aligned} k =j \times M+c+i, \end{aligned}$$
(7)
which means that \(\left[ x_k\right] \) is formed by
$$\begin{aligned} \left[ x_k\right] = \left[ x_{j \times M+c+i}\right] = \left[ s_j, \, u_{c+i}\right] , \end{aligned}$$
(8)
bearing in mind that M represents the 1-D micro image resolution. Similarly, the vertical index translation may be
$$\begin{aligned} l = h \times M+c+g \end{aligned}$$
(9)
and therefore
$$\begin{aligned} \left[ y_l\right] = \left[ y_{h \times M+c+g}\right] = \left[ t_h, \, v_{c+g}\right] . \end{aligned}$$
(10)
These definitions comply with Fig. 6 and enable us to apply our 4-D light field notation \(\left[ s_j, \, u_{c+i}, \, t_h, \, v_{c+g}\right] \) to conventionally 2-D sampled representations \(\left[ x_k, \, y_l\right] \), where k and l start counting from index 0. To apply the proposed ray model and image processing, the captured light field has to be calibrated and rectified such that the centroid of each micro image coincides with the centre of a central pixel. This requires an image interpolation with sub-pixel precision, which was first pointed out by Cho et al. (2013) and confirmed by Dansereau et al. (2013).
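
The index translation in Eqs. (6)–(10) amounts to a strided reordering of the rectified raw capture. The sketch below is a minimal illustration under the assumption of square \(M \times M\) micro images on a calibrated, rectified capture; the function name and array-shape convention are ours.

```python
import numpy as np

def extract_viewpoint(raw, M, i, g):
    """Sub-aperture image E_(i,g) from a rectified SPC raw capture.

    raw  -- 2-D array whose rows/columns follow [y_l, x_k] with
            k = j*M + c + i and l = h*M + c + g (Eqs. (7)-(10)),
            i.e. each micro image occupies an M x M pixel block.
    M    -- 1-D micro image resolution (pixels per micro lens)
    i, g -- horizontal/vertical viewpoint indices in [-c, c], c = (M-1)/2
    """
    c = (M - 1) // 2
    # picking one pixel per micro image with stride M collects all samples
    # that share the same micro image position (u_{c+i}, v_{c+g})
    return raw[c + g::M, c + i::M]

# Example: 4 x 4 micro lenses with M = 5 pixel micro images
raw = np.arange(20 * 20).reshape(20, 20)
E_central = extract_viewpoint(raw, M=5, i=0, g=0)   # central viewpoint, shape (4, 4)
E_shifted = extract_viewpoint(raw, M=5, i=2, g=0)   # viewpoint shifted in u
print(E_central.shape, E_shifted.shape)
```

Each returned array has as many pixels as there are micro lenses, in line with the effective viewpoint resolution implied by Eq. (6).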

3.2 Virtual Camera Array

In the previous section, it was shown how to render multi-view images from SPC photographs by means of the proposed ray model. Because a 4-D plenoptic camera image can be reorganised into a set of multi-view images as if taken with an array of cameras, each of these images is supposed to have the optical centre of a so-called virtual camera at a distinct location. The localisation of these centres is, however, not obvious. This problem was first recognised and addressed in publications by our research group (Hahne et al. 2014a, b), which, however, lacked experimental verification. As a starting point, we deploy ray functions that have proved viable for pinpointing refocused SPC image planes (Hahne et al. 2016) and further refine the model by finding intersections along the entrance pupil. Once theoretical positions of virtual cameras are derived, we examine in which way the well-established concept of stereo triangulation (see Sect. 2) applies to the proposed SPC ray model.
In order to geometrically describe rays in the light field, we first define the height of optical centres \(s_j\) in the MLA by
$$\begin{aligned} s_j = (j-o) \times p_{M} \end{aligned}$$
(11)
with \(o = (J-1) /2\) as the index of the central micro lens where J is the overall number of micro lenses in the horizontal direction. Geometrical MIC positions are denoted as \(u_{c,j}\) and can be found by tracing main lens chief rays travelling through the optical centre of each micro lens. This is calculated by
$$\begin{aligned} u_{c,j} = \frac{s_j}{d_{A'}} \times f_s + s_j, \end{aligned}$$
(12)
where \(f_s\) is the micro lens focal length and \(d_{A'}\) is the distance from MLA to exit pupil of the main lens, which is illustrated in Fig. 4b. Micro image sampling positions that lie next to MICs can be acquired by a corresponding multiple i of the pixel pitch \(p_p\) as given by
$$\begin{aligned} u_{c+i,j} = u_{c,j} + i \times p_p. \end{aligned}$$
(13)
Chief ray slopes \(m_{c+i,j}\) that impinge at micro image positions \(u_{c+i,j}\) can be acquired by
$$\begin{aligned} m_{c+i,j} = \frac{s_j - u_{c+i,j}}{f_s}. \end{aligned}$$
(14)
Let \(b_U\) be the objective’s image distance, then a chief ray’s intersection at the refractive main lens plane \(U_{i,j}\) is given by
$$\begin{aligned} U_{i,j} = m_{c+i,j} \times b_U + s_j, \end{aligned}$$
(15)
where c has been left out in the subscript of \(U_{i,j}\) as it is a constant and will be omitted in following ray functions for simplicity. The spacing between principal planes of an objective lens will be taken into account at a later stage.
Since the main lens works as a refracting element, chief rays possess different slopes in object space, which can be calculated as follows
$$\begin{aligned} F_{i,j}= m_{c+i,j} \times f_U, \end{aligned}$$
(16)
with a chief ray passing through a point \(F_{i,j}\) along the main lens focal plane F by means of its image-side slope \(m_{c+i,j}\) and the main lens focal length \(f_U\). Consequently, the chief ray slope \(q_{i,j}\) of that beam in object space is given by
$$\begin{aligned} q_{i,j} = \frac{F_{i,j} - U_{i,j}}{f_U} \end{aligned}$$
(17)
as it depends on the intersections at refractive main lens plane U, focal plane \(F_U\) and the chief ray’s travelling distance, which is \(f_U\) in this particular case. With reference to preliminary remarks, an object ray’s path may be provided as a linear function \(\widehat{f}_{i,j}\) of the depth z, which is written as
$$\begin{aligned} \widehat{f}_{i,j}(z)&= q_{i,j} \times z+U_{i,j},\quad z \in \left[ U,\infty \right) . \end{aligned}$$
(18)
As the name suggests, sub-aperture images are created at the main lens’ aperture. To investigate ray positions at the aperture, it is worth introducing the aperture’s geometrical equivalents to the proposed model, which have not been considered in our previous publications (Hahne et al. 2014a). An obvious attempt would be to locate a baseline \(B_{A'}\) at the exit pupil, which is found by
$$\begin{aligned} B_{A'} = m_{c+i,j} \times d_{A'}, \end{aligned}$$
(19)
where \(m_{c+i,j}\) is obtained from Eq. (14). Practical applications of an image-side baseline \(B_{A'}\) are unclear at this stage.
However, the baseline at the entrance pupil \(A''\) is a much more valuable parameter when determining an object distance via triangulation in an SPC. Figure 7 offers a closer look at our light field ray model by also showing principal planes \(H_{1U}\) and \(H_{2U}\). There, it can be seen that all rays having i in common (e.g. blue rays) geometrically converge to the entrance pupil \(A''\) and diverge from the exit pupil \(A'\). Intersecting chief rays at the entrance pupil can be seen as indicating object-side-related positions of virtual cameras \(A''_{i}\).
The calculation of virtual camera positions \(A''_{i}\) is provided in the following. By taking object space ray functions \(\widehat{f}_{i,j}(z)\) from Eq. (18) for two rays with different j but same i and setting them equal as given by
$$\begin{aligned} q_{i,o}\times z + U_{i,o} = q_{i,o+1}\times z + U_{i,o+1}, \, \, z \in \left( -\infty , \infty \right) , \end{aligned}$$
(20)
we can solve the equation system, which yields the distance \(\overline{A''H_{1U}}\) from the entrance pupil \(A''\) to the object-side principal plane \(H_{1U}\) (see Fig. 7). Recall that the index of the central micro lens \(s_j\) is given by \(o={(J-1)}/{2}\), with o serving as the image centre offset.
Table 1  Micro lens specifications for \(\lambda =550\) nm

MLA   | \(f_s\) (mm) | \(p_M\) (\(\upmu \)m) | \(t_s\) (mm) | \(n(\lambda )\) | \(R_{s1}\) | \(R_{s2}\)   | \(\overline{H_{1s}H_{2s}}\) (mm)
(I.)  | 1.25         | 125                   | 1.1          | 1.5626          | 0.70325    | \(-\infty \) | 0.396
(II.) | 2.75         | 125                   | 1.1          | 1.5626          | 1.54715    | \(-\infty \) | 0.396
Table 2  Main lens parameters

              | Image distance \(b_U\) (mm)             | Exit pupil position \(d_{A'}\) (mm)
Focus \(d_f\) | \(f_{193}\) | \(f_{90}\) | \(f_{197}\)  | \(f_{193}\) | \(f_{90}\) | \(f_{197}\)
\(\infty \)   | 193.2935    | 90.4036    | 197.1264     | 111.0324    | 85.1198    | 100.5000
4 m           | –           | –          | 208.3930     | –           | –          | 111.7666
3 m           | 207.3134    | 93.3043    | –            | 125.0523    | 88.0205    | –
1.5 m         | 225.8852    | 96.6224    | –            | 143.6241    | 91.3386    | –

Principal plane separation \(\overline{H_{1U}H_{2U}}\) (mm): \(f_{193}\): −65.5563, \(f_{90}\): −1.2273, \(f_{197}\): 147.4618
The object-side-related position of \(A''_i\) can be acquired by
$$\begin{aligned} A''_i = q_{i,o} \times \overline{A''H_{1U}} + U_{i,o}. \end{aligned}$$
(21)
With this, a baseline \(B_{G}\) that spans from one \(A''_i\) to another by gap G can be obtained as follows
$$\begin{aligned} B_{G} = A''_{i} + A''_{i+G}. \end{aligned}$$
(22)
For example, a baseline \(B_1\) ranging from \(A''_{0}\) to \(A''_{1}\) is identical to that from \(A''_{-1}\) to \(A''_{0}\). This relies on the principle that virtual cameras are separated by a consistent width. To apply the triangulation concept, rays are virtually extended towards the image space by
$$\begin{aligned} N_{i,j} = -q_{i,j} \times b_N + A''_{i}, \end{aligned}$$
(23)
where \(b_N\) is an arbitrary scalar which can be thought of as a virtual image distance and \(N_{i,j}\) as a spatial position at the virtual image plane of a corresponding sub-aperture. The scalable variable \(b_N\) linearly affects a virtual pixel pitch \(p_N\), which is found by
$$\begin{aligned} p_N = \big |N_{i,o} - N_{i,o+1}\big |. \end{aligned}$$
(24)
Setting \(b_U=f_U\) aligns optical axes \(z'_i\) of virtual cameras to be parallel to the main optical axis \(z_U\) (see Fig. 7). For all other cases where \(b_U\ne f_U\) (e.g. Fig. 8), the rotation angle \(\varPhi _i\) of a virtual optical axis \(z'_i\) is obtained by
$$\begin{aligned} \varPhi _i = \arctan {\left( q_{i,o}\right) }. \end{aligned}$$
(25)
The relative tilt angle \(\varPhi _G\) from one camera to another can be calculated with
$$\begin{aligned} \varPhi _{G} = \varPhi _{i} + \varPhi _{i+G}, \end{aligned}$$
(26)
which completes the characterisation of virtual cameras.
Figure 8 visualises chief rays’ paths in the light field when focusing the objective lens such that \(b_U>f_U\). In this case, \(z'_i\) intersects with \(z_U\) at the plane at which the objective lens is focusing. Objects placed at this plane possess a disparity \(\Delta x = 0\) and thus are expected to be located at the same relative 2-D position in each sub-aperture image. As a consequence, objects placed behind the \(\Delta x = 0\) plane expose negative disparity.
Establishing the triangulation in an SPC allows object distances to be retrieved just as in a stereoscopic camera system. On the basis of Eq. (5), a depth distance \(Z_{G,\Delta x}\) of an object with certain disparity \(\Delta x\) is obtained by
$$\begin{aligned} Z_{G,\Delta x} = \frac{b_N \times B_{G}}{\Delta x \times p_N + b_N \times \tan \left( \varPhi _G\right) } \end{aligned}$$
(27)
and can be shortened to
$$\begin{aligned} Z_{G,\Delta x} = \frac{b_N \times B_{G}}{\Delta x \times p_N}, \quad \text {if} \, \, \varPhi _G = 0 \end{aligned}$$
(28)
which is only the case where \(b_U=f_U\). One may notice that Eq. (28) is an adapted version of the well-known triangulation equation given in Eq. (2).
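
To make the chain of Eqs. (11)–(21) and (25) concrete, the following sketch traces two chief rays sharing the same viewpoint index i through the central micro lens pair, intersects them in object space to locate the entrance pupil, and returns the virtual camera position and tilt. It is a simplified paraxial illustration only; the function names are ours, and the numerical inputs are taken from Tables 1 and 2 for main lens \(f_{197}\) with MLA (II.) at infinity focus.

```python
import math

def virtual_camera(i, params):
    """Object-side position A''_i and tilt Phi_i of virtual camera i.

    Follows Eqs. (11)-(21) and (25) for the central micro lens pair
    j = o, o+1 (s_o = 0); all lengths in mm.  `params` holds f_s, p_M,
    p_p (Table 1) and f_U, b_U, d_A (exit pupil distance, Table 2).
    """
    f_s, p_M, p_p = params['f_s'], params['p_M'], params['p_p']
    f_U, b_U, d_A = params['f_U'], params['b_U'], params['d_A']

    def ray(i, s_j):                     # chief ray of micro image sample (i, j)
        u_c = s_j / d_A * f_s + s_j      # micro image centre, Eq. (12)
        u = u_c + i * p_p                # sampling position, Eq. (13)
        m = (s_j - u) / f_s              # image-side slope, Eq. (14)
        U = m * b_U + s_j                # intersection at main lens, Eq. (15)
        q = (m * f_U - U) / f_U          # object-side slope, Eqs. (16)-(17)
        return U, q

    U0, q0 = ray(i, 0.0)                 # central micro lens j = o
    U1, q1 = ray(i, p_M)                 # neighbouring micro lens j = o + 1
    z_pupil = (U1 - U0) / (q0 - q1)      # entrance pupil distance, Eq. (20)
    A_i = q0 * z_pupil + U0              # virtual camera position, Eq. (21)
    return A_i, math.atan(q0)            # tilt Phi_i, Eq. (25)

# f_197 with MLA (II.) focused at infinity (b_U = f_U), cf. Tables 1 and 2
par = dict(f_s=2.75, p_M=0.125, p_p=0.009, f_U=197.1264, b_U=197.1264, d_A=100.5)
A0, phi0 = virtual_camera(0, par)
A4, phi4 = virtual_camera(4, par)
B4 = abs(A0 + A4)                        # baseline of gap G = 4, Eq. (22)
print(round(B4, 4), math.degrees(phi0 + phi4))   # ~2.5806 mm and 0.0 deg
```

For this setting the spacing of adjacent virtual cameras evaluates to roughly 0.645 mm, so the gap-4 baseline lands near the 2.5806 mm predicted in Table 3, and the tilt angles vanish as expected for \(b_U=f_U\); Eqs. (23)–(28) then turn these quantities into object distances.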

4 Validation

We deploy a custom-made plenoptic camera containing a full frame sensor with 4008 \(\times \) 2672 active image resolution and \(p_p~=~9~\mu \)m pixel pitch. Photos of our camera are depicted in Fig. 9. Details on the assembly and optical calibration of an SPC can be found in Hahne’s thesis (2016). Lens and MLA specifications are provided hereafter.

4.1 Lens Specification

Experiments are conducted with two different micro lens designs, denoted as MLA (I.) and (II.), which are specified in Table 1. The input parameters relevant to the triangulation are \(f_s\) and \(p_M\). Besides these, Table 1 provides the lens thickness \(t_s\), refractive index n, radii of curvature \(R_{s1}\), \(R_{s2}\) and principal plane distance \(\overline{H_{1s}H_{2s}}\). The number of micro lenses in our MLA amounts to 281 \(\times \) 188 in the horizontal and vertical dimensions, respectively. These values allow the micro lenses to be modelled in an optical design software.
Table 3  Baseline results \(B_G\) with infinity focus \((b_U=f_U)\)

(a) \(B_4\) from Fig. 10b, d
\(\Delta x\) | \(Z_{G,\Delta x}\) (cm) | Measured \(B_4\) (mm)
2   | 203 | 2.5806
3   | 136 | 2.5806
3.5 | 116 | 2.5806
4   | 102 | 2.5806

(b) \(B_8\) from Fig. 10c
\(\Delta x\) | \(Z_{G,\Delta x}\) (cm) | Measured \(B_8\) (mm)
4 | 203 | 5.1611
6 | 136 | 5.1611
7 | 116 | 5.1611
8 | 102 | 5.1611

(c) Comparison of predicted and measured \(B_G\) where \(d_f \rightarrow \infty \)
                        | \(B_G\) | Predicted \(B_G\) (mm) | Avg. measured \(B_G\) (mm) | Deviation \(ERR_{B_G}\) (%)
Proposed                | \(B_4\) | 2.5806                 | 2.5806                     | 0.0000
                        | \(B_8\) | 5.1611                 | 5.1611                     | 0.0000
Hahne et al. (2014a, b) | \(B_4\) | 2.5806                 | 12.0566                    | −367.2090
                        | \(B_8\) | 5.1611                 | 24.1133                    | −367.2090
It is well known that the focus ring of today’s objective lenses moves a few lens groups whilst others remain static, which, in consequence, changes the lens system’s cardinal points. To prevent this and simplify the experimental setup, we only shift the plenoptic sensor away from the main lens to vary its image distance \(b_U\), keeping the focus ring at infinity. In doing so, we ensure that the cardinal points remain at the same relative positions. However, the available space in our customised camera constrains the sensor’s shift range to an overall focus distance of \(d_f \approx \) 4 m, where \(d_f\) is the distance from the MLA’s front vertex to the plane on which the main lens is focused. For this reason, we examine two focus settings \((d_f \rightarrow \infty ~\text {and}~d_f \approx ~4~\text {m})\) in the experiment. To acquire the main lens image distance \(b_U\), we employ the thin lens equation and solve for \(b_U\) as given by
$$\begin{aligned} b_U = \left( \frac{1}{f_U} - \frac{1}{a_U}\right) ^{-1}, \end{aligned}$$
(29)
with \(a_U=d_f - b_U - \overline{H_{1U}H_{2U}}\) as the object distance. After substituting for \(a_U\), however, it can be seen that \(b_U\) acts as input and output parameter at the same time, which turns out to be a typical chicken-and-egg problem. To treat this, we define the initial image distance to be the focal length (\(b_U:=f_U\)) and substitute the resulting \(b_U\) for the input variable afterwards. This procedure is iterated until both values coincide. Objective lenses are denoted as \(f_{193}\), \(f_{90}\) and \(f_{197}\), with index numbers representing their focal lengths in millimetres. The lens designs for \(f_{193}\) and \(f_{90}\) were taken from Caldwell (2000) and Yanagisawa (1990), whilst \(f_{197}\) was characterised experimentally using the technique provided by TRIOPTICS (2015). Table 2 lists the calculated image, exit pupil and principal plane distances for the main lenses. It is noteworthy that all parameters are provided with respect to a 550 nm wavelength. Precise focal lengths \(f_U\) are found in the image distance column at the infinity focus row.
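
The fixed-point iteration described above can be written as a short loop; the sketch below is a minimal illustration (function name and tolerance are our own choices), evaluated for \(f_{193}\) focused at \(d_f=1.5\) m with the principal plane separation from Table 2.

```python
def image_distance(f_U, d_f, H1H2, tol=1e-9, max_iter=100):
    """Main lens image distance b_U for a focus distance d_f, Eq. (29).

    The object distance a_U = d_f - b_U - H1H2 itself depends on b_U,
    so b_U is found by fixed-point iteration starting from b_U := f_U.
    All quantities in mm; H1H2 is the principal plane separation.
    """
    b_U = f_U
    for _ in range(max_iter):
        a_U = d_f - b_U - H1H2
        b_new = 1.0 / (1.0 / f_U - 1.0 / a_U)
        if abs(b_new - b_U) < tol:
            break
        b_U = b_new
    return b_U

# f_193 focused at d_f = 1.5 m with H1U-H2U = -65.5563 mm (Table 2)
print(round(image_distance(193.2935, 1500.0, -65.5563), 3))  # ~225.885, cf. 225.8852 in Table 2
```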

4.2 Experiments

To verify the claims made about SPC triangulation, experiments are conducted as follows. Baselines and tilt angles are estimated from Eqs. (22) and (26) using the parameters given in Tables 1 and 2. From these, we compute object distances via Eq. (27) for each disparity and place real objects at the calculated distances. Experimental validation is achieved by comparing predicted baselines with those obtained from disparity measurements. The extraction of a disparity map from an SPC requires at least two sub-aperture images, which are obtained using Eq. (6). Disparity maps are calculated by block matching with the Sum of Absolute Differences (SAD) method using an available implementation (Abbeloos 2010, 2012). To measure baselines, Eq. (27) has to be rearranged such that
$$\begin{aligned} B_{G} = \frac{Z_{G,\Delta x} \times \left( \Delta x \times p_N + b_N \times \tan \left( \varPhi _G\right) \right) }{b_N}. \end{aligned}$$
(30)
This formula can also be written as
$$\begin{aligned} \varPhi _G = \arctan \left( \frac{\frac{B_{G} \times b_N}{Z_{G,\Delta x}} - \Delta x \times p_N}{b_N}\right) , \end{aligned}$$
(31)
which yields a relative tilt angle \(\varPhi _G\) in radians that can be converted to degrees by multiplication by \(180/\pi \).
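
The measurement side of the experiment simply inverts Eq. (27). A minimal sketch follows, with function names of our own choosing; the example figures assume the \(f_{197}\)/MLA (II.) infinity-focus setting, for which \(b_N=1\) mm yields a virtual pixel pitch of roughly \(p_N \approx 6.341\times 10^{-4}\) mm via Eqs. (23)–(24), and an object close to the predicted \(\Delta x = 2\) depth plane of Table 3a.

```python
import math

def measure_baseline(Z, dx, p_N, b_N, phi_G=0.0):
    """Baseline B_G from a known object distance Z and disparity dx, Eq. (30)."""
    return Z * (dx * p_N + b_N * math.tan(phi_G)) / b_N

def measure_tilt(Z, dx, p_N, b_N, B_G):
    """Relative tilt angle Phi_G in radians from Z, dx and B_G, Eq. (31)."""
    return math.atan((B_G * b_N / Z - dx * p_N) / b_N)

# Object at the predicted dx = 2 plane (~2.035 m), infinity focus, G = 4:
B_4 = measure_baseline(Z=2035.0, dx=2, p_N=6.341e-4, b_N=1.0)
print(round(B_4, 4))                                              # ~2.581 mm, cf. B_4 in Table 3
print(math.degrees(measure_tilt(2035.0, 2, 6.341e-4, 1.0, B_4)))  # ~0 deg, as expected for b_U = f_U
```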
Stereo triangulation experiments are conducted such that \(B_4\) and \(B_8\), as well as \(\varPhi _4\) and \(\varPhi _8\), are predicted based on main lens \(f_{197}\) and MLA (II.) with the \(d_f \rightarrow \infty \) and \(d_f \approx 4\) m focus settings. Real objects were placed at selected depth distances \(Z_{G,\Delta x}\) calculated from this setup.
An exemplary sub-aperture image \(E_{(i,g)}\) with infinity focus setting and the related disparity maps are shown in Fig. 10. A sub-pixel precise disparity measurement has been applied to Fig. 10b, d as the action figure lies between integer disparities. Disparities in Fig. 10b, d are nearly identical since both viewpoint pairs are separated by \(G=4\), albeit placed at different horizontal positions. This justifies the claim that the spacing between adjacent virtual cameras is consistent. Besides, it is also apparent that objects at far distances expose lower disparity values and vice versa. Comparing Fig. 10b, c shows that a successive increase in the baseline \(B_G\) implies a growth in the object’s disparity values, an observation also found in traditional computer stereo vision.
Table 3 lists baseline measurements and the corresponding deviations with respect to the predicted baseline. This table is quite revealing in several ways. First, the most striking result is that there is no significant difference between baseline predictions and measurements using the model proposed in this paper. The reason for a 0% deviation is that objects are placed at the centre of the predicted depth planes \(Z_{G,\Delta x}\). An experiment conducted with random object positions would yield non-zero errors that do not reflect the model’s accuracy, but rather our SPC’s capability to resolve depth, which depends on the MLA and sensor specification. Hence, such an experiment is only meaningful when evaluating the camera’s depth resolution. A more revealing percentage error would be obtained with a larger number of disparities, which in turn requires the baseline to be extended. These parameters have already been maximised in our experimental setup, making it difficult to refine depth further. To obtain quantitative error results, Sect. 4.3 therefore benchmarks the proposed SPC triangulation with the aid of a simulation tool (Zemax 2011).
A second observation is that our previous methods (Hahne et al. 2014a, b) yield identical baseline estimates, but fail experimental validation, exhibiting significantly large errors in the triangulation. This is due to the fact that our previous model ignored the pupil positions of the main lens, such that virtual cameras were seen to be lined up on its front focal plane instead of its entrance pupil. Baseline estimates calculated according to a definition provided by Jeon et al. (2015) deviate even further from our results, with \(B_4 = 290.7293\) mm and \(B_8 = 581.4586\) mm. As the authors disregard the optical centre positions of the sub-aperture images, it is impossible to obtain distances via triangulation and assess their results using percentage errors.
Table 4  Tilt angle results \(\varPhi _G\) with 4 m focus (\(b_U>f_U\))

(a) \(\varPhi _4\) from Fig. 11b, d
\(\Delta x\) | \(Z_{G,\Delta x}\) (cm) | Measured \(\varPhi _4~({}^{\circ }\))
0 | 384 | 0.0429
1 | 218 | 0.0429
2 | 152 | 0.0429
4 | 95  | 0.0429

(b) \(\varPhi _8\) from Fig. 11c
\(\Delta x\) | \(Z_{G,\Delta x}\) (cm) | Measured \(\varPhi _8 ({}^{\circ })\)
0 | 384 | 0.0857
2 | 218 | 0.0857
4 | 152 | 0.0857
8 | 95  | 0.0857

(c) Comparison of predicted and measured \(\varPhi _G\) where \(d_f\approx 4\) m
                        | \(\varPhi _G\) | Predicted \(\varPhi _G\hbox { }(^{\circ })\) | Avg. measured \(\varPhi _G\hbox { }(^{\circ })\) | Deviation \(ERR_{\varPhi _G}\) (%)
Proposed                | \(\varPhi _4\) | 0.0429                                       | 0.0429                                           | 0.0000
                        | \(\varPhi _8\) | 0.0857                                       | 0.0857                                           | 0.0000
Hahne et al. (2014a, b) | \(\varPhi _4\) | 0.0429                                       | −0.3427                                          | 899.3410
                        | \(\varPhi _8\) | 0.0857                                       | −0.6852                                          | 899.2393
Whenever \(d_f \rightarrow \infty \), virtual camera tilt angles in our model are assumed to be \(\varPhi _G=0^{\circ }\). Accurate baseline measurements inevitably confirm predicted tilt angles as measured baselines would deviate otherwise. To ensure this is the case, a second SPC triangulation experiment is carried out with \(d_f \approx 4\) m, yielding images shown in Fig. 11.
Disparity maps in Fig. 11b, d give further indication that the spacing between adjacent virtual cameras is consistent. Results in Table 4 demonstrate that tilt angle predictions match measurements. It is further shown that virtual cameras are rotated by small angles of less than a degree. Nevertheless, these tilt angles are non-negligible as they are large enough to shift the \(\Delta x=0\) disparity plane from infinity to \(d_f \approx 4\) m, which can be seen in Fig. 11.
Generally, Tables 3 and 4 suggest that the adapted stereo triangulation concept proves to be viable in an SPC without measurable deviations if objects are placed at predicted distances. A maximum baseline is achieved with a short MLA focal length \(f_s\), large micro lens pitch \(p_M\), long main lens focal length \(f_U\) and a sufficiently large entrance pupil diameter.
A baseline approximation of the first-generation Lytro camera may be achieved with the aid of the metadata (*.json file) attached to each light field photograph, as it contains information about the micro lens focal length \(f_s=0.025\) mm, pixel pitch \(p_p~\approx ~0.0014\) mm and micro lens pitch \(p_M~\approx ~0.0139\) mm, yielding \(M=9.9286\) samples per micro image. The built-in zoom lens provides a variable focal length in the range of \(f_U = 6.45\)–51.4 mm (43–341 mm as 35 mm-equivalent) (Ellison 2014). It is unclear whether the source refers to the main lens only or to the entire optical system including the MLA. From this, hypothetical baseline estimates for the first-generation Lytro camera are calculated via Eqs. (20)–(22) and given in Table 5 (a simplified calculation sketch follows the table).
Table 5  Baseline estimates of Lytro’s 1st generation camera

\(f_s\) (mm) | \(f_U\) (mm) | \(B_1\) (mm) | \(B_{8}\) (mm)
0.025 | 6.45 | 0.3612 | 2.8896
0.025 | 51.4 | 2.8784 | 23.0272
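
Since the metadata exposes only \(f_s\), \(p_p\) and \(f_U\), the figures in Table 5 can be reproduced under an additional infinity-focus assumption (\(b_U=f_U\)), for which the object-side slope \(q_{i,o}\) vanishes and the central virtual cameras reduce to \(A''_i = -i \, p_p f_U / f_s\), i.e. adjacent virtual cameras are spaced by \(p_p f_U/f_s\). The sketch below evaluates this simplified closed form (the helper name is ours); it is an approximation rather than the full pupil-aware model, which is why such metadata-only estimates remain hypothetical.

```python
def spc_baseline(G, f_U, f_s, p_p):
    """Baseline B_G (mm) of an SPC focused at infinity (b_U = f_U).

    With b_U = f_U the object-side slope q_{i,o} is zero, so the virtual
    camera of viewpoint i sits at A''_i = -i * p_p * f_U / f_s and the
    spacing of adjacent virtual cameras is p_p * f_U / f_s.
    """
    return G * p_p * f_U / f_s

# First-generation Lytro metadata: f_s = 0.025 mm, p_p ~ 0.0014 mm
for f_U in (6.45, 51.4):                 # wide and tele end of the zoom range
    print(f_U, round(spc_baseline(1, f_U, 0.025, 0.0014), 4),
          round(spc_baseline(8, f_U, 0.025, 0.0014), 4))
# 6.45 mm -> B_1 = 0.3612, B_8 = 2.8896;  51.4 mm -> B_1 = 2.8784, B_8 = 23.0272
```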
Table 6  Baseline and tilt angle simulation with \(G=6\) and \(i=0\); each row lists predicted values, simulated values and the percentage deviations

\(d_f\) | \(f_U\) | \(f_s\) | Pred. \(\overline{V_{1U}A''}\) (mm) | Pred. \(B_{G}\) (mm) | Pred. \(\varPhi _i\) (\(^{\circ }\)) | Sim. \(\overline{V_{1U}A''}\) (mm) | Sim. \(B_{G}\) (mm) | Sim. \(\varPhi _i\) (\(^{\circ }\)) | \(ERR_{\overline{V_{1U}A''}}\) (%) | \(ERR_{B_{G}}\) (%) | \(ERR_{\varPhi _i}\) (%)
Inf   | \(f_{193}\) | (II.) | 240.2113 | 3.7956  | 0.0000  | 240.1483 | 3.7949  | 0.0000  | 0.0262 | 0.0184 | –
Inf   | \(f_{90}\)  | (II.) | 27.4627  | 1.7752  | 0.0000  | 27.4081  | 1.7748  | 0.0001  | 0.1988 | 0.0225 | –
Inf   | \(f_{193}\) | (I.)  | 240.2113 | 8.3503  | 0.0000  | 239.3988 | 8.3450  | 0.0000  | 0.3382 | 0.0635 | –
3 m   | \(f_{193}\) | (II.) | 240.2113 | 4.2748  | −0.0816 | 239.8612 | 4.2738  | −0.0816 | 0.1457 | 0.0234 | 0.0000
3 m   | \(f_{90}\)  | (II.) | 27.4627  | 1.8357  | −0.0361 | 27.3309  | 1.8352  | −0.0360 | 0.4799 | 0.0272 | 0.2770
3 m   | \(f_{193}\) | (I.)  | 240.2113 | 9.4047  | −0.1795 | 238.9043 | 9.3964  | −0.1795 | 0.5441 | 0.0883 | 0.0000
1.5 m | \(f_{193}\) | (II.) | 240.2113 | 4.9097  | −0.1897 | 239.6932 | 4.9078  | −0.1897 | 0.2157 | 0.0387 | 0.0000
1.5 m | \(f_{90}\)  | (II.) | 27.4627  | 1.9049  | −0.0774 | 27.2150  | 1.9042  | −0.0773 | 0.9020 | 0.0367 | 0.1292
1.5 m | \(f_{193}\) | (I.)  | 240.2113 | 10.8014 | −0.4173 | 238.1212 | 10.7866 | −0.4173 | 0.8701 | 0.1370 | 0.0000
Disparity analysis of perspective Lytro images should lead to baseline measures \(B_G\) similar to those of the prediction. However, verification is impossible as the settings of the camera’s automatic zoom lens (current principal planes and pupil locations) are undisclosed. Reliable measurement of these would require disassembly of the main lens, which is impractical in the case of present-day Lytro cameras as their main lenses cannot be unmounted.

4.3 Simulation

To obtain quantitative measures, this section investigates the positioning of the virtual camera array by modelling a plenoptic camera in an optics simulation software (Zemax 2011). Table 6 presents a comparison of predicted and simulated virtual camera positions as well as their baseline \(B_{G}\) and relative tilt angle \(\varPhi _G\). Thereby, the distance from the objective’s front vertex \(V_{1U}\) to the entrance pupil \(A''\) is given by
$$\begin{aligned} \overline{V_{1U}A''} = \overline{V_{1U}H_{1U}} + \overline{A''H_{1U}} \end{aligned}$$
(32)
bearing in mind that \(\overline{A''H_{1U}}\) is the distance from the entrance pupil \(A''\) to the object-side principal plane \(H_{1U}\) and \(\overline{V_{1U}H_{1U}}\) separates the front vertex \(V_{1U}\) from its object-side principal plane \(H_{1U}\). Simulated \(\overline{V_{1U}A''}\) values are obtained by virtually extending the object-space rays with slopes \(q_{i,j}\) back towards the sensor, letting these elongated rays ignore all lenses, and finding the intersection of \(q_{i,j}\) and \(q_{i,j+1}\).
Observations in Table 6 indicate that the baseline grows with
  • larger main lens focal length \(f_U\)
  • shorter micro lens focal length \(f_s\)
  • decreasing focusing distance \(d_f\) \((a_U)\)
given that the entrance pupil diameter is large enough to accommodate the baseline. Besides, it is shown that tilt angle rotations become larger with decreasing \(d_f\). Baselines have been estimated accurately with errors below 0.1% on average, except for one example. The key problem causing the largest error is that MLA (I.) features a shorter focal length \(f_s\) than MLA (II.), which produces steeper light ray slopes \(m_{c+i,j}\) and hence more severe aberration effects. Tilt angle errors remain below 0.3%, although results deviate by only \(0.001^{\circ }\) for \(f_{90}\) and are even non-existent for \(f_{193}\). However, entrance pupil location errors of up to about 1% are larger than in any other simulated validation. One reason for these inaccuracies is that the entrance pupil \(A''\) is an imaginary vertical plane, which in reality may exhibit a non-linear shape around the optical axis.
An experiment assessing the relationship between disparity \(\Delta x\) and distance \(Z_{G,\Delta x}\) using different objective lenses is presented in Table 7. From this, it can be concluded that denser depth sampling is achieved with a larger main lens focal length \(f_U\). Moreover, it is seen that a tilt in the virtual cameras yields a negative disparity \(\Delta x\) for objects further away than \(d_f\), a phenomenon that also applies to tilted cameras in stereoscopy. The reason why \(d_f \approx Z_{G,\Delta x}\) when \(\Delta x=0\), rather than being exactly equal, is that \(Z_{G,\Delta x}\) reflects the separation between the ray intersection and the entrance pupil \(A''\), which lies close to the sensor, whereas \(d_f\) is the spacing between the ray intersection and the MLA’s front vertex. Overall, it can be stated that distance estimates based on stereo triangulation behave similarly to those obtained from geometrical optics, with errors of up to ±0.33%.
Table 7  Disparity simulation and distance with \(G=6\) and \(i=0\); each cell lists predicted \(Z_{6,\Delta x}\) (mm) / simulated \(Z_{6,\Delta x}\) (mm) / deviation \(ERR_{Z_{6,\Delta x}}\) (%)

\(d_f\) | \(\Delta x\) | \(f_{193}\) & (II.) | \(f_{90}\) & (II.) | \(f_{193}\) & (I.)
Inf   | 0  | \(\infty \) / \(\infty \) / –     | \(\infty \) / \(\infty \) / –     | \(\infty \) / \(\infty \) / –
Inf   | 1  | 978.2150 / 978.2797 / −0.0066     | 213.9790 / 213.9573 / 0.0101      | 2152.0729 / 2151.2840 / 0.0367
Inf   | 2  | 489.1075 / 489.1026 / 0.0010      | 106.9895 / 106.9431 / 0.0434      | 1076.0365 / 1075.1177 / 0.0854
3 m   | 0  | 3001.4530 / 3000.8133 / 0.0213    | 2913.5460 / 2923.2193 / −0.3320   | 3001.4530 / 2999.3120 / 0.0713
3 m   | 1  | 877.9068 / 877.4653 / 0.0503      | 212.1505 / 212.0285 / 0.0575      | 1429.6116 / 1427.8084 / 0.1261
3 m   | 2  | 514.1456 / 513.8952 / 0.0487      | 110.0831 / 109.9610 / 0.1109      | 938.2541 / 937.1572 / 0.1169
1.5 m | −1 | 15770.8729 / 15764.1482 / 0.0426  | –                                 | 2521.0686 / 2517.6509 / 0.1356
1.5 m | 0  | 1482.8768 / 1482.3969 / 0.0324    | 1410.2257 / 1412.2221 / −0.1416   | 1482.8768 / 1481.1620 / 0.1156
1.5 m | 1  | 778.0154 / 777.8168 / 0.0255      | 209.7424 / 209.5320 / 0.1003      | 1050.3402 / 1049.3327 / 0.0959
1.5 m | 2  | 527.3487 / 527.0279 / 0.0608      | 113.2965 / 113.0602 / 0.2086      | 813.1535 / 811.8298 / 0.1628

5 Discussion and Conclusions

In essence, this paper presented the first systematic study of how to successfully apply the triangulation concept to a Standard Plenoptic Camera (SPC). It has been shown that an SPC projects an array of virtual cameras along its entrance pupil, which can be seen as an equivalent to a multi-view camera system. The proposed geometry of the SPC’s light field thereby suggests that the entrance pupil diameter constrains the maximum baseline. This backs up and further refines an observation made by Adelson and Wang (1992), who considered the aperture size to be the baseline limit. Our customised SPC merely offers baselines in the millimetre range, which results in relatively small stereo vision setups. Due to this, depth sampling planes move towards the camera, which will prove useful for close-range applications such as microscopy. It is also expected that multiple viewpoints taken with small baselines mitigate the occlusion problem.
The presented work has provided the first experimental baseline and distance results based on disparity maps obtained by a plenoptic camera. Predictions of our geometrical model match the experimental measurements without significant deviation. An additional benchmark test of the proposed model against an optical simulation software has revealed errors of up to ±0.33% for baseline and distance estimates under different lens settings, which supports the model’s accuracy. Deviations are due to the imperfections of objective lenses. More specifically, prediction inaccuracies may be caused by all sorts of aberrations that result in a non-geometrical behaviour of a lens. By compensating for this through enhanced image calibration, we believe it is possible to lower the measured deviation.
The major contribution of the proposed ray model is that it allows any SPC to be used as an object distance estimator. A broad range of applications in which stereoscopy has traditionally been employed can benefit from this solution. This includes endoscopes or microscopes that require very close depth ranges, the automotive industry where tracking objects in road traffic is a key task, and the robotics industry with robots in space or automatic vacuum cleaners at home. Besides this, plenoptic triangulation may be used for quality assurance purposes in the large field of machine vision. The model further assists in the prototyping stage of plenoptic photo and video cameras as it allows the baseline to be adjusted as desired.
Further research may investigate how triangulation applies to other types of plenoptic cameras, such as the focused plenoptic camera or coded-aperture camera. More broadly, research is also required to benchmark a typical plenoptic camera’s depth resolution against that of competitive depth sensing techniques like stereoscopy, time of flight and light sectioning.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
Abbeloos, W. (2010). Real-time stereo vision. Master’s thesis, Karel de Grote-Hogeschool University College.
Adelson, E. H., & Wang, J. Y. (1992). Single lens stereo with a plenoptic camera. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2), 99–106.
Bok, Y., Jeon, H. G., & Kweon, I. S. (2014). Geometric calibration of micro-lens-based light-field cameras using line features. In Proceedings of European conference on computer vision (ECCV).
Burger, W., & Burge, M. J. (2009). Principles of digital image processing: Core algorithms (1st ed.). Berlin: Springer.
Cho, D., Lee, M., Kim, S., & Tai, Y. W. (2013). Modeling the calibration pipeline of the Lytro camera for high quality light-field image reconstruction. In IEEE international conference on computer vision (ICCV) (pp. 3280–3287). doi:10.1109/ICCV.2013.407.
Dansereau, D. G. (2014). Plenoptic signal processing for robust vision in field robotics. PhD thesis, University of Sydney.
Dansereau, D. G., Pizarro, O., & Williams, S. B. (2013). Decoding, calibration and rectification for lenselet-based plenoptic cameras. In IEEE international conference on computer vision and pattern recognition (CVPR) (pp. 1027–1034).
Fiss, J., Curless, B., & Szeliski, R. (2014). Refocusing plenoptic images using depth-adaptive splatting. In 2014 IEEE international conference on computational photography (ICCP) (pp. 1–9). IEEE.
Georgiev, T., Zheng, K. C., Curless, B., Salesin, D., Nayar, S., & Intwala, C. (2006). Spatio-angular resolution tradeoff in integral photography. In Proceedings of Eurographics symposium on rendering.
Hahne, C. (2016). The standard plenoptic camera-applications of a geometrical light field model. PhD thesis, University of Bedfordshire.
Hahne, C., Aggoun, A., Haxha, S., Velisavljevic, V., & Fernández, J. C. J. (2014a). Baseline of virtual cameras acquired by a standard plenoptic camera setup. In 3DTV-conference: The true vision-capture, transmission and display of 3D video (3DTV-CON) (pp. 1–3). doi:10.1109/3DTV.2014.6874734.
Heber, S., & Pock, T. (2014). Shape from light field meets robust PCA. In Computer vision—ECCV 2014—13th European conference, Zurich, Switzerland, September 6–12, 2014, Proceedings, Part VI (pp. 751–767). doi:10.1007/978-3-319-10599-4_48.
Hirshfeld, A. W. (2001). Parallax: The race to measure the cosmos.
Huang, F. C., Chen, K., & Wetzstein, G. (2015). The light field stereoscope: Immersive computer graphics via factored near-eye light field displays with focus cues. ACM Transactions on Graphics, 34(4), 60:1–60:12. doi:10.1145/2766922.
Jeon, H. G., Park, J., Choe, G., Park, J., Bok, Y., Tai, Y. W., et al. (2015). Accurate depth map estimation from a lenslet light field camera. In Proceedings of international conference on computer vision and pattern recognition (CVPR).
Levoy, M., & Hanrahan, P. (1996). Light field rendering. Tech. rep., Stanford University.
Lumsdaine, A., & Georgiev, T. (2008). Full resolution lightfield rendering. Tech. rep., Adobe Systems Inc.
Marr, D., & Poggio, T. (1976). Cooperative computation of stereo disparity. Tech. rep., Massachusetts Institute of Technology, Cambridge, MA, USA.
Ng, R. (2006). Digital light field photography. PhD thesis, Stanford University.
Ng, R., Levoy, M., Brèdif, M., Duval, G., Horowitz, M., & Hanrahan, P. (2005). Light field photography with a hand-held plenoptic camera. Tech. Rep. CTSR 2005-02, Stanford University.
Perwass, C., & Wietzke, L. (2012). Single lens 3D-camera with extended depth-of-field. In Human vision and electronic imaging XVII, Proceedings of the SPIE (Vol. 8291). doi:10.1117/12.909882.
Tao, M. W., Srinivasan, P. P., Hadap, S., Rusinkiewicz, S., Malik, J., & Ramamoorthi, R. (2017). Shape estimation from shading, defocus, and correspondence using light-field angular coherence. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(3), 546–560. doi:10.1109/TPAMI.2016.2554121.
TRIOPTICS. (2015). MTF measurement and further parameters.
Wheatstone, C. (1838). On some remarkable, and hitherto unobserved, phenomena of binocular vision. Philosophical Transactions, 128, 371–394.
Yang, Y., Yuille, A. L., & Lu, J. (1993). Local, global, and multilevel stereo matching. In IEEE international conference on computer vision and pattern recognition (CVPR) (pp. 274–279). doi:10.1109/CVPR.1993.340969.