
High-accuracy calibration of line-structured light vision sensor by correction of image deviation

Open Access

Abstract

The structured-light vision sensor calibration approach based on a planar target fails to generate reliable and accurate results in outdoor complex lighting environments, owing to inaccurate localization of feature points. To address this issue, a novel high-accuracy calibration method that corrects image deviation is proposed for line-structured light vision sensors in this paper. A mathematical solution for the stripe point location uncertainty is derived, and the location uncertainty of both target feature points and stripe points is established. The location deviation of all points is then computed through large-scale nonlinear multi-step optimization under the constraints of these uncertainties. After compensation, the planar target based calibration method is adopted to solve the light plane equation. Both simulative and physical experiments have been carried out to evaluate the performance of the proposed method. The results show that the proposed method is robust under large feature point localization deviations and achieves the same measurement accuracy as the planar target calibration method under ideal imaging conditions. This reduces the requirements on the calibration environment and has important practical engineering application value.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Structured light vision sensors are important devices for acquiring three-dimensional data, with advantages such as large range, non-contact operation, high speed, and high accuracy. These sensors are widely used in various measurement fields [1–10], including large-scale surface acquisition [1–4], 3D inspection [5], microscopy [6], inner surface reconstruction [7], and online wear measurement [8–10]. Many online dynamic measurement applications, such as wheel size [8], pantograph wear [9], and rail wear [10] measurement, play an important role in maintaining the safe operation of railway transportation. In vision measurement, high-precision calibration determines high-precision measurement, and high-precision calibration in turn requires high-quality imaging. However, most of the online measurements mentioned above are performed in outdoor industrial environments. In some complicated environments, the quality of the calibration images cannot be ensured by engineering means: for an on-line wheel size inspection system installed on a railway main line, with short calibration time and a complex, variable lighting environment, it is impossible to obtain images as ideal as those taken indoors. Sensor calibration in the on-site environment is easily affected by layout, lighting, image noise, and so on. Especially with limited calibration space and no covering, meeting ideal imaging conditions is difficult, leading to low calibration accuracy and restricting measurement accuracy.

Generally, the structured light vision sensor calibration process [11] includes camera intrinsic calibration and light plane calibration. Many camera intrinsic calibration methods have been proposed, such as those based on 3D targets [12], 2D targets [13], 1D targets [14], and grid spheres [15]. The Zhang method [13] is a flexible 2D technique based on a planar target and is widely used for on-site high-precision calibration. Starting from initial results obtained with the Zhang method, Liu et al. [16] treated the deviations of perspective projection, lens distortion, and out-of-focus radius as the location uncertainty of feature points and optimized the intrinsic and extrinsic parameters simultaneously with the target feature point coordinates, finally obtaining optimized camera intrinsic parameters. Correspondingly, Liu et al. [17] established an uncertainty model for each image feature point. A nonlinear optimization method was used to solve the location deviation of each feature point under the homography constraint between the target plane and the image plane. The coordinates of the feature points were then compensated, and a second calibration was carried out with the Zhang method [13], thereby improving calibration accuracy.

Many approaches for light plane calibration are also available. Dewar [18] proposed a method based on stretched thin wires, in which the light plane intersects the wires to generate bright spots; because of the uneven brightness or highlights of the spots, the correspondence between the physical bright spots and those in the images is difficult to establish. Therefore, the number of calibration points is small, and the calibration accuracy is low. Huynh et al. [19] applied cross-ratio (CR) invariant theory: with at least three collinear points of known coordinates on a three-dimensional target, the intersection points of the structured light stripe and the target point lines are calculated through the CR. This method obtains calibration points on the light plane with high accuracy and is suitable for on-site calibration. However, it requires a high-accuracy three-dimensional target with at least two mutually perpendicular planes; high-quality images are difficult to obtain owing to light shielding between the planes, the number of calibration points is limited, and the feasibility is low. Wei et al. [20] proposed a calibration method based on a 1D target: from the 3D coordinates of the intersection points of the light plane with the 1D target, the light plane equation is obtained by fitting multiple intersections. In recent years, spatial geometry has been utilized as a secondary constraint for light plane calibration in complex outdoor environments, with the camera equipped with an optical filter. Liu et al. [21] proposed a structured light calibration method based on a ball. This method extracts the contour of the ball to obtain the orientation of the ball target in the camera coordinate system; combining this contour with the one produced by the light plane on the ball yields the light plane equation, with no constraint on the target placement angle. However, the accurate extraction of the target contour is vulnerable to background or external interference. Liu et al. [22] also calibrated the light plane parameters with the camera equipped with a filter in a complex environment, using a double parallel-cylinder target. On the basis of the correspondence between the image plane and the parallel ellipse planes produced by the intersection of the light plane with the two cylinders, the plane equation is optimized by taking the cylindrical minor-axis length as a constraint. To improve calibration accuracy and expand engineering applications, a calibration method based on a planar metal target inlaid with LED lights was proposed, named the planar method [11], in which Plücker's equation represents the stripe lines. Through CR invariance, the calibration points on the light plane are obtained; their 3D coordinates are acquired from repeated movements of the target, and the light plane equation is fitted. Compared with the methods [18–20] that use fewer calibration points, this method [11] effectively improves calibration accuracy, and it is widely used for high-accuracy measurement [8–10] owing to its low cost, flexibility, and high accuracy. However, when calibrated directly under complex outdoor lighting conditions, it still cannot eliminate the error caused by the feature points' location deviation; thus, high-accuracy calibration is not achieved.

Based on the above analysis, current calibration methods are suited to indoor conditions, and there is no high-precision calibration method for structured light vision sensors that tolerates the low-quality imaging of outdoor complex environments. Traditional planar target based calibration methods simply take the extracted target feature points and stripe points as the true coordinates for calculation. However, many unavoidable factors arise in outdoor environments, such as complex lighting, laser quality, roughness of the target surface, target placement angle, poor focus, and image noise, which bias the extraction of the feature and stripe point centers; we name this bias image deviation. If the on-site calibration only ensures the imaging quality of the target feature points, the stripes easily dim or thicken; if only the stripe imaging is considered, the target feature points tend to be out of focus or over-exposed. These two requirements are contradictory, so the best imaging quality cannot be satisfied for both at the same time. To address this issue, high-accuracy calibration of a line-structured light vision sensor through the correction of image deviation is proposed in this paper. First, a mathematical solution for the stripe point location uncertainty is put forward, and the uncertainty of the target feature points and the stripe points is established. The location deviation of each point is solved through large-scale nonlinear multi-step optimization under the constraints of these uncertainties. After compensation, the planar target calibration method [11] is adopted to obtain the linear solution of the light plane equation. Finally, the maximum likelihood solution of the light plane equation is obtained through nonlinear optimization.

The remainder of this paper is organized as follows. The algorithm principle and implementation details are described in Section 2. Section 3 presents the experiments. Finally, the conclusion is drawn in Section 4.

2. Algorithm principle

The line-structured light vision sensor consists of a camera and a linear laser projector. The spatial distribution is shown in Fig. 1. $O_c x_c y_c z_c$ is the camera coordinate frame (CCF), $O_t x_t y_t z_t$ the target coordinate frame (TCF), $\pi_t$ the target plane, and $\pi$ the laser plane. $L_i$ is the intersection line of $\pi_t$ and $\pi$, and $l_i$ is the image of $L_i$ in the image plane. $Q_{1j}, Q_{2j}, Q_{3j}$ denote three luminous points on the same column of the target. $Q_j^l$ is the intersection of the line through this column of target points with $L_i$ and is used to solve the light plane equation below. $p_{1j}, p_{2j}, p_{3j}$ are the image points corresponding to $Q_{1j}, Q_{2j}, Q_{3j}$, respectively, and are defined as the target feature points. $p_j^l$ is the image point corresponding to $Q_j^l$ and is defined as a stripe point. $H_i$ is the homography mapping matrix from the target to the image plane, and $T_1$ represents the transformation matrix from the light plane to the CCF.

Fig. 1 Calibration schematic of structured light vision sensor.

This study presents a high-accuracy and robust line-structured light vision sensor calibration method that corrects image deviation. The specific implementation steps are as follows.

Step 1: The camera intrinsic matrix K and distortion coefficients k are solved through the Zhang method [13].

Step 2: The planar target with light-emitting feature points is moved and intersected with the laser plane, thereby enabling the feature and stripe points to be imaged at the same time.

Step 3: The multi-scale method is adopted to extract the feature point center multiple times, and the localization uncertainty $\sigma_u, \sigma_v$ of each target feature point is solved.

Step 4: Using the cross-ratio invariance property, the intersection point of each target feature point line with the stripe is solved as the stripe calibration point (a cross-ratio sketch is given after Step 6). The location uncertainty model of the stripe point is established, and the location uncertainty $\sigma_u^l, \sigma_v^l$ of each stripe point is solved.

Step 5: The target feature point and stripe point uncertainties obtained in Steps 3 and 4 are taken as constraints. The minimum distance between the forward projection of the stripe points and the target, together with the homography mapping between the target and the feature points, are established as objective functions. The localization deviations $\Delta u_{ij}, \Delta v_{ij}$ of the target feature points and $\Delta u_{ik}^l, \Delta v_{ik}^l$ of the stripe points are then obtained using large-scale nonlinear multi-step optimization.

Step 6: After compensating the target feature points and the stripe points with the location deviations obtained in Step 5, the light plane equation is computed using the planar method [11], and the maximum likelihood solution is obtained through nonlinear optimization.
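Steps 4–6 rely on cross-ratio invariance to recover, on the target plane, the point where the stripe crosses each column of feature points. The sketch below is a minimal Python illustration of that recovery for one column; the function and variable names are ours, not from the paper, and the target points are assumed to be given in 2D target-plane coordinates.

```python
import numpy as np

def param_along_line(p0, p1, q):
    """Signed parameter t of q on the line through p0 and p1 (q ~ p0 + t*(p1 - p0))."""
    d = p1 - p0
    return float(np.dot(q - p0, d) / np.dot(d, d))

def cross_ratio(a, b, c, d):
    """Cross ratio (a, b; c, d) of four collinear points given by scalar parameters."""
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

def stripe_point_on_target(Q1, Q2, Q3, p1, p2, p3, pl):
    """Recover the target-plane point where the stripe crosses the column line
    Q1-Q2-Q3, using invariance of the cross ratio under perspective projection.
    Q1..Q3: 2D target-plane coords of three collinear feature points;
    p1..p3: their (undistorted) image coords; pl: imaged stripe intersection.
    Points on each line are assumed (near-)collinear."""
    # Cross ratio measured in the image, with p1 at t=0 and p2 at t=1.
    a, b = 0.0, 1.0
    c = param_along_line(p1, p2, p3)
    d = param_along_line(p1, p2, pl)
    cr = cross_ratio(a, b, c, d)
    # Same parameterization on the target line; solve cr = CR(0, 1; C, D) for D.
    C = param_along_line(Q1, Q2, Q3)
    D = (cr * (C - b) * a - (C - a) * b) / (cr * (C - b) - (C - a))
    return Q1 + D * (Q2 - Q1)

# Example with an identity "projection": the stripe point is recovered exactly.
Q1, Q2, Q3 = np.array([0.0, 0.0]), np.array([12.0, 0.0]), np.array([24.0, 0.0])
print(stripe_point_on_target(Q1, Q2, Q3, Q1, Q2, Q3, np.array([18.0, 0.0])))  # [18. 0.]
```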

2.1 Image uncertainty model

In the calibration of a structured light vision sensor based on a planar target, the accurate extraction of feature points and stripe points is critical to guaranteeing the accuracy of subsequent calibration. This section mainly analyzes the uncertainty model and its solution for the feature and stripe points. The metal light-emitting planar target is taken as an example for illustration. Figure 2 shows the location uncertainty diagram of the target feature and stripe points.

Fig. 2 Location uncertainty diagram of target feature points and stripe points. (a) Line-structured light calibration image; (b) Target feature point; (c) Stripe point; (d) Grayscale distribution of target feature point; (e) Grayscale distribution of stripe point; (f) Location uncertainty of target feature point; (g) Location uncertainty of stripe point.

2.1.1 Uncertainty of target feature points

In this paper, the uncertainties of target feature points and stripe points are analyzed, and the components of the feature point location uncertainty are obtained. Unless otherwise stated, the target feature points and stripe points are referred to collectively as feature points. Usually, to ensure calibration accuracy, we attempt to place the target roughly perpendicular to the camera's optical axis, which is prone to inducing feature point over-exposure and to submerging the points in image noise, especially in outdoor complex lighting environments. Calculating the target feature points' location uncertainty by analyzing the image noise is then invalid [23]. Therefore, we use a multi-scale method to obtain the initial coordinates of the target feature points and estimate the uncertainty through a statistical approach; the coordinate corresponding to the best scale is selected as the initial value of the feature point center. We select $m$ different Gaussian convolution kernels to process feature point $j$ in image $i$, forming the point set $P_m$. As shown in Fig. 2(f), the red cross points lie in the feature point region; the average coordinate $\tilde{p}_{ij} = [\tilde{u}_{ij}, \tilde{v}_{ij}, 1]^T$ of $P_m$ is computed, and the standard deviations $\sigma_u$ and $\sigma_v$ in the $u$ and $v$ directions are taken as the location uncertainty of the target feature point.
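The multi-scale statistic above can be sketched as follows, assuming a blob-like LED feature whose center at each scale is taken as the gray-weighted centroid of the smoothed patch (the paper does not specify the center detector, so the centroid is a placeholder):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def feature_center(patch):
    """Gray-weighted centroid of a feature-point patch (placeholder detector)."""
    v, u = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    w = patch.astype(np.float64)
    return np.array([(u * w).sum(), (v * w).sum()]) / w.sum()

def multiscale_uncertainty(patch, sigmas=(0.8, 1.2, 1.6, 2.0, 2.4)):
    """Extract the center at m Gaussian scales (the point set P_m); return the
    mean center as the initial value and the per-axis standard deviations
    (sigma_u, sigma_v) as the location uncertainty of the feature point."""
    centers = np.array([feature_center(gaussian_filter(patch, s)) for s in sigmas])
    sigma_u, sigma_v = centers.std(axis=0, ddof=1)
    return centers.mean(axis=0), sigma_u, sigma_v
```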

2.1.2 Solution to uncertainty of stripe points

On-site calibration is easily affected by laser quality, power, target surface material, and external light, which cause the stripe to be uneven in width and brightness and induce stripe center extraction errors. In [23], the uncertainty of a vertical stripe point in the horizontal direction is provided. In this study, the uncertainty of stripe points along an arbitrary direction is solved.

$g_u$, $g_v$, $g_{uu}$, $g_{uv}$, $g_{vv}$ are set as the partial derivatives of the image convolved with Gaussian kernels. The normal vector $n(u,v)$ of the stripe point is derived by calculating the Hessian matrix, and $(u_0, v_0)$ are the coordinates of the stripe point. The stripe normal direction is represented by $(n_u, n_v)$ with $\|(n_u, n_v)\| = 1$, and the stripe (tangential) direction is orthogonal to it. Therefore, the cross-sectional gray level of the stripe can be expressed with a Taylor expansion as

$$f[(t n_u + u_0),(t n_v + v_0)] = g(u_0,v_0) + t n_u g_u(u_0,v_0) + t n_v g_v(u_0,v_0) + \tfrac{1}{2} t^2 n_u^2 g_{uu}(u_0,v_0) + t^2 n_u n_v g_{uv}(u_0,v_0) + \tfrac{1}{2} t^2 n_v^2 g_{vv}(u_0,v_0). \tag{1}$$

At the stripe center point, $\partial f[(t n_u + u_0),(t n_v + v_0)]/\partial t = 0$. Then,

$$t = -\frac{n_u g_u + n_v g_v}{n_u^2 g_{uu} + 2 n_u n_v g_{uv} + n_v^2 g_{vv}}. \tag{2}$$

Therefore, the gray-level maximum gives the coordinates of the stripe center $(p_u, p_v) = (t n_u + u_0,\; t n_v + v_0)$. As shown in Fig. 2(e), taking the stripe center of the noise-free image as the origin $(0,0)$, the coordinate system $o_{rc}$ is set up with axes along $(n_u, n_v)$ and $(\bar{n}_u, \bar{n}_v)$. The gray curve along the normal direction is $h(c,r)$, so the uncertainty of $(p_u, p_v)$ is converted into the analysis of the profile $h(c, r=0)$.
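Equations (1) and (2) translate directly into a Steger-style sub-pixel extractor. The sketch below, under the assumption that scipy's Gaussian-derivative filters stand in for the paper's convolutions, computes the Hessian at a candidate pixel, takes its dominant eigenvector as the normal $(n_u, n_v)$, and applies the offset of Eq. (2); the names are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def stripe_center_subpixel(img, u0, v0, sigma=1.5):
    """Steger-style sub-pixel stripe center at integer pixel (u0, v0)."""
    img = img.astype(np.float64)
    # Partial derivatives by convolution with Gaussian derivative kernels
    # (axis 0 = rows = v, axis 1 = columns = u).
    gu  = gaussian_filter(img, sigma, order=(0, 1))
    gv  = gaussian_filter(img, sigma, order=(1, 0))
    guu = gaussian_filter(img, sigma, order=(0, 2))
    gvv = gaussian_filter(img, sigma, order=(2, 0))
    guv = gaussian_filter(img, sigma, order=(1, 1))
    H = np.array([[guu[v0, u0], guv[v0, u0]],
                  [guv[v0, u0], gvv[v0, u0]]])
    # Eigenvector of the largest-magnitude eigenvalue = stripe normal (n_u, n_v).
    w, V = np.linalg.eigh(H)
    nu, nv = V[:, np.argmax(np.abs(w))]
    # Eq. (2): offset along the normal where the directional derivative vanishes.
    t = -(nu * gu[v0, u0] + nv * gv[v0, u0]) / (
        nu**2 * guu[v0, u0] + 2 * nu * nv * guv[v0, u0] + nv**2 * gvv[v0, u0])
    return u0 + t * nu, v0 + t * nv
```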

Here $h(c,r) = I(c,r) + N(c,r)$, where $I(c,r)$ is the ideal image and $N(c,r)$ is image noise with zero mean and variance $\sigma_N^2$. According to C. Steger [24], the first derivative of $h$ at $(c_0, r_0)$ is zero:

$$\dot{F}_{\hat{h}}(c_0, r_0) = \dot{F}_{\hat{I}}(c_0, r_0) + \dot{F}_{\hat{N}}(c_0, r_0) = 0, \tag{3}$$
where $\hat{h}$, $\hat{I}$, $\hat{N}$ are the actual gray level, the ideal image, and the image noise convolved with a Gaussian kernel of variance $\sigma_g^2$, respectively, and $\dot{F}_{\hat{h}}$, $\dot{F}_{\hat{I}}$, $\dot{F}_{\hat{N}}$, $\ddot{F}_{\hat{h}}$, $\ddot{F}_{\hat{I}}$, $\ddot{F}_{\hat{N}}$ denote the corresponding first- and second-order derivatives. The Taylor expansion of $\dot{F}_{\hat{h}}(c_0, 0)$ at $(0,0)$ is

$$\dot{F}_{\hat{h}}(c_0, r=0) = \dot{F}_{\hat{I}}(0,0) + \dot{F}_{\hat{N}}(0,0) + \left(\ddot{F}_{\hat{I}}(0,0) + \ddot{F}_{\hat{N}}(0,0)\right) c_0 + O(c_0^2) = 0. \tag{4}$$

The ideal image can be expressed as $I(c,0) = M e^{-\frac{c^2}{2\sigma_w^2}}$, where $M$ represents the maximum gray value and $\sigma_w$ the Gaussian width of the stripe cross section. Thus, $\dot{F}_{\hat{I}}(0,0) = 0$ and $\ddot{F}_{\hat{N}}(0,0) \ll \ddot{F}_{\hat{I}}(0,0)$. The center point $c_0$ can be expressed as

c0F˙N^(0,0)/F¨I^(0,0).

According to [23,24], the variance $\hat{\sigma}_N^2$ corresponding to $\dot{F}_{\hat{N}}(0,0)$ is

$$\hat{\sigma}_N^2 = \frac{\sigma_N^2}{8\pi\sigma_g^4}. \tag{6}$$

From Eqs. (5) and (6), the location variance $\sigma_{c_0}^2$ of $c_0$ is

$$\sigma_{c_0}^2 = \frac{\hat{\sigma}_N^2}{\ddot{F}_{\hat{I}}(0,0)^2}. \tag{7}$$

According to [23], we obtain

$$\ddot{F}_{\hat{I}}(0,0) = -\frac{M\sigma_w}{(\sigma_g^2 + \sigma_w^2)^{3/2}}. \tag{8}$$

Substituting Eq. (8) into Eq. (7), $\sigma_{c_0}^2$ can be described as

$$\sigma_{c_0}^2 = \frac{(\sigma_w^2 + \sigma_g^2)^3}{8\pi\sigma_g^4\sigma_w^2}\left(\frac{\sigma_N^2}{M^2}\right). \tag{9}$$

Among them, $\sigma_w$ is fitted from the gray values along the normal direction $n_c$ with the MATLAB function fittype, where the prototype of the fitting function is $F_l = A e^{-\frac{(x-\mu)^2}{2\sigma_w^2}}$. When $\sigma_{c_0}^2$ is decomposed into the coordinate system $o_{uv}$, the uncertainty components $\sigma_u^l$ and $\sigma_v^l$ of the stripe point are obtained. However, in highly reflective cases, the stripe images exhibit over-exposure, and Eq. (9) for solving the stripe point uncertainty becomes invalid. In such cases, the multi-scale statistical method is used instead: $m$ different Gaussian convolution kernels are substituted into the Steger algorithm [24] to obtain the stripe center point list $P_m^l$, and the standard deviations $\sigma_u^l$ and $\sigma_v^l$ are taken as the location uncertainty of the stripe point.
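Equation (9) and the Gaussian profile fit can be sketched as follows; scipy.optimize.curve_fit replaces the MATLAB fittype call, and decomposing $\sigma_{c_0}$ onto the $u$ and $v$ axes through the unit normal is our reading of the text, not a formula stated in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_sigma_w(t, gray):
    """Fit A*exp(-(t-mu)^2/(2*sigma_w^2)) to the cross-sectional gray profile
    sampled along the stripe normal; returns the stripe width sigma_w."""
    gauss = lambda t, A, mu, s: A * np.exp(-(t - mu) ** 2 / (2.0 * s ** 2))
    p0 = (gray.max(), t[np.argmax(gray)], 1.0)   # initial guess from the data
    (A, mu, s), _ = curve_fit(gauss, t, gray, p0=p0)
    return abs(s)

def stripe_point_uncertainty(sigma_w, sigma_g, sigma_N, M, n):
    """Eq. (9): variance of the stripe center along the unit normal n = (n_u, n_v),
    then projected onto the u and v image axes (assumed decomposition)."""
    var_c0 = (sigma_w**2 + sigma_g**2) ** 3 / (8.0 * np.pi * sigma_g**4 * sigma_w**2) \
             * (sigma_N**2 / M**2)
    sigma_c0 = np.sqrt(var_c0)
    return abs(n[0]) * sigma_c0, abs(n[1]) * sigma_c0   # (sigma_u^l, sigma_v^l)
```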

2.2 Feature points relocation

On the basis of the analysis of the location uncertainty of feature points in Sections 2.1.1 and 2.1.2, we exploit the mapping between the feature points and the target in space: a target image point back-projected to the target plane should lie at minimum distance from its corresponding space target point; likewise, a stripe point back-projected to the target plane should yield a 3D light plane point at minimum distance from the target plane. These conditions are formulated as objective functions under the constraints of the feature point uncertainties, and the location deviation of the feature points is obtained using large-scale nonlinear multi-step optimization.

2.2.1 Perspective model of planar target

The undistorted, distorted, and actual imaging point coordinates are set as $p_u = [u_u, v_u, 1]^T$, $p_d = [u_d, v_d, 1]^T$, and $p_n = [u_n, v_n, 1]^T$, respectively. In the imaging optical path, the imaging of a target point $Q = [x, y, 1]^T$ can be decomposed into three processes: perspective projection, lens distortion, and image noise superposition. The perspective projection can be represented by

$$\rho p_u = K[r_1 \; r_2 \; t]\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = H_{3\times3}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}, \tag{10}$$
where $H_{3\times3}$ is the homography matrix between the target and image planes, $\rho$ is a constant coefficient, $K$ is the camera intrinsic matrix, $(u_0, v_0)$ is the principal point, and $\gamma$ is the skew factor of the image coordinate axes $u$ and $v$. Moreover, $r_1$ and $r_2$ are the first two columns of the rotation matrix, and $t$ is the translation vector. The lens distortion model can be expressed as

$$\begin{aligned} u_d &= u_u + (u_u - u_0)(k_1 r^2 + k_2 r^4) \\ v_d &= v_u + (v_u - v_0)(k_1 r^2 + k_2 r^4). \end{aligned} \tag{11}$$

Among them, $r = \sqrt{x_n^2 + y_n^2}$, $k_1, k_2$ are the first two radial distortion coefficients, and $[x_n, y_n]^T$ represents the normalized image coordinates. According to practical experience, the first two radial terms are sufficient for describing the lens distortion. The image deviation $\Delta u, \Delta v$ is induced by image noise, and the mapping from the distorted point to the actual imaging point can be expressed as

$$p_n = \begin{bmatrix} u_n \\ v_n \\ 1 \end{bmatrix} = \begin{bmatrix} u_d + \Delta u \\ v_d + \Delta v \\ 1 \end{bmatrix}. \tag{12}$$

Under the $i$-th position of the target, the $j$-th point in the TCF is denoted $Q_j = [x_j, y_j, 1]^T$ and its image coordinate $p_{ij} = [\hat{u}_{ij}, \hat{v}_{ij}, 1]^T$. $p_{u(ij)}$ is solved from $p_{ij}$ through Eqs. (11) and (12). With $p_{u(ij)}$ and $Q_j$, we calculate $H_i$ using Eq. (10). The initial values of $\Delta u_{ij}, \Delta v_{ij}$ in Eq. (12) are zero. Accordingly, $\Delta u_{ik}^l, \Delta v_{ik}^l$ are the location deviations of the stripe points and have the same compensation characteristics.
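The forward imaging model of Eqs. (10)–(12) can be sketched as below: the routine projects a target point through the homography, applies the two-term radial distortion, and superimposes a candidate deviation. The function name and the use of $K^{-1}$ to obtain normalized coordinates for $r$ are our assumptions.

```python
import numpy as np

def project_target_point(H, K, k1, k2, Q, delta=(0.0, 0.0)):
    """Forward model of Eqs. (10)-(12). H: 3x3 target-to-image homography,
    K: camera intrinsic matrix, Q: homogeneous target point [x, y, 1]."""
    # Eq. (10): undistorted projection.
    pu = H @ Q
    pu = pu / pu[2]
    uu, vu = pu[0], pu[1]
    # Normalized coordinates for the distortion radius r.
    xn, yn, _ = np.linalg.inv(K) @ pu
    r2 = xn**2 + yn**2
    # Eq. (11): two-term radial distortion about the principal point (u0, v0).
    u0, v0 = K[0, 2], K[1, 2]
    ud = uu + (uu - u0) * (k1 * r2 + k2 * r2**2)
    vd = vu + (vu - v0) * (k1 * r2 + k2 * r2**2)
    # Eq. (12): superimposed image deviation (du, dv).
    du, dv = delta
    return np.array([ud + du, vd + dv, 1.0])
```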

2.2.2 Calculation of location error

With the mapping matrix $H_i$ from the target plane at position $i$ to the image plane, the projection point $p_{n(ij)}$ corresponding to target point $Q_j$ is obtained using Eq. (10). The first objective function is established from the minimum distance of $p_{ij}$ to $p_{n(ij)}$, together with the statistical centers of the image points:

$$e_1 = \min\left\{\sum_{i=1}^{M}\sum_{j=1}^{N} D(p_{ij}, p_{n(ij)}) + \left|\sum_{i=1}^{M}\sum_{j=1}^{N} p_{ij} - \sum_{i=1}^{M}\sum_{j=1}^{N} p_{n(ij)}\right|\right\}, \tag{13}$$
where $D(p_{ij}, p_{n(ij)})$ represents the distance function, $M$ is the number of target placements, and $N$ is the number of target points.

The homogeneous coordinates $\tilde{Q}_{ij} = [\tilde{x}_{ij}, \tilde{y}_{ij}, 1]^T$ are calculated by back-projecting $p_{ij}$ to the TCF through $H_i^{-1}$. The second objective function is established by minimizing the distance between $Q_j$ and $\tilde{Q}_{ij}$, together with the statistical centers of the projected points:

$$e_2 = \min\left\{\sum_{i=1}^{M}\sum_{j=1}^{N} \mathrm{Dist}(Q_j, \tilde{Q}_{ij}) + \left|\sum_{j=1}^{N} Q_j - \sum_{i=1}^{M}\sum_{j=1}^{N} \tilde{Q}_{ij}\right|\right\}. \tag{14}$$
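For a single target position, the back-projection and the $e_2$ term of Eq. (14) can be sketched as follows (Eq. (14) itself sums over all $M$ positions); the array shapes and names are ours.

```python
import numpy as np

def back_project_to_target(H, p):
    """Back-project undistorted image points (Nx2) to the target coordinate
    frame through the inverse homography, as used by the e2 term."""
    ph = np.column_stack([p, np.ones(len(p))])   # homogeneous coordinates
    Q = ph @ np.linalg.inv(H).T
    return Q[:, :2] / Q[:, 2:3]                  # back to inhomogeneous

def e2_single_position(Q_target, Q_back):
    """Point-distance sum plus the statistical-center term of Eq. (14),
    for matched Nx2 arrays of target points and back-projected points."""
    dist_term = np.linalg.norm(Q_target - Q_back, axis=1).sum()
    center_term = np.linalg.norm(Q_target.sum(axis=0) - Q_back.sum(axis=0))
    return dist_term + center_term
```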

The cross ratio, an invariant of projective geometry, is used together with the target's rigid constraints to optimize the image points; such constraints are widely used, for example, in camera intrinsic calibration. In this work, the image coordinates and the target coordinates are used to establish the cross ratio. The third objective function is as follows:

$$e_3 = \min\left\{\sum_{i=1}^{N}\left\{\sum_{j=1}^{K}\left|CR_h(Q_j, Q_{j+1}; Q_{j+2}, Q_{j+3}) - CR_h(p_{u(ij)}, p_{u(i(j+1))}; p_{u(i(j+2))}, p_{u(i(j+3))})\right| + \sum_{j=1}^{L}\left|CR_v(Q_j, Q_{j+1}; Q_{j+2}, Q_{j+3}) - CR_v(p_{u(ij)}, p_{u(i(j+1))}; p_{u(i(j+2))}, p_{u(i(j+3))})\right|\right\}\right\}, \tag{15}$$
where $CR_h$ and $CR_v$ represent the horizontal and vertical cross ratios, respectively. Although many such constraints are possible, for computational efficiency we select groups of four points arbitrarily while ensuring that the selected points evenly cover the entire image plane.

The projection of the stripe point $p_{ij}^l$ intersects the target at the space 3D point $\hat{Q}_{ij}$, and its distance from the target plane fitted with $\tilde{Q}_{ij}$ should be minimal. The fourth objective function is established as

$$e_4 = \min\left(\sum_{i=1}^{M}\sum_{j=1}^{N} G(\hat{Q}_{ij}, F_i)\right), \tag{16}$$
where $F_i$ is the plane equation fitted to the points $\tilde{Q}_{ij}$ at position $i$, and $G(\hat{Q}_{ij}, F_i)$ is the point-to-plane distance.

At the same time, the $k$-th column of target points $\tilde{Q}_{ij}$ is fitted as a line $l_k^t$, and the points $\hat{Q}_{ij}$ as a line $\tilde{l}_i$. The intersection point of $l_k^t$ and $\tilde{l}_i$ is $\tilde{c}_i$, so the fifth objective function, minimizing the distance between $\hat{Q}_{ij}$ and $\tilde{c}_i$, is described as

$$e_5 = \min\left(\sum_{i=1}^{M}\sum_{j=1}^{N} \mathrm{Dist}(\hat{Q}_{ij}, \tilde{c}_i)\right). \tag{17}$$

By combining the five objective functions, Eqs. (13)–(17), we obtain

$$E(a) = \underbrace{\underbrace{e_1 + e_2 + e_3}_{(1)} + \underbrace{e_4 + e_5}_{(2)}}_{(3)}, \tag{18}$$

where $a = (\Delta u_{ij}, \Delta v_{ij}, \Delta u_{ik}^l, \Delta v_{ik}^l)$, and the optimization constraints are appended as

$$\begin{cases} -U_{\Delta u_{ij}} \le \Delta u_{ij} \le U_{\Delta u_{ij}} \\ -U_{\Delta v_{ij}} \le \Delta v_{ij} \le U_{\Delta v_{ij}} \\ -U_{\Delta u_{ik}^l} \le \Delta u_{ik}^l \le U_{\Delta u_{ik}^l} \\ -U_{\Delta v_{ik}^l} \le \Delta v_{ik}^l \le U_{\Delta v_{ik}^l} \end{cases} \tag{19}$$

Among them, $U_{\Delta u_{ij}} = n\sigma_{u(ij)}$, $U_{\Delta v_{ij}} = n\sigma_{v(ij)}$, $U_{\Delta u_{ik}^l} = n\sigma_{u(ik)}^l$, and $U_{\Delta v_{ik}^l} = n\sigma_{v(ik)}^l$, where $\sigma_{u(ij)}$ and $\sigma_{v(ij)}$ are the location uncertainties of the target feature point at position $i$, point $j$, and $\sigma_{u(ik)}^l$ and $\sigma_{v(ik)}^l$ those of the stripe point at position $i$, point $k$. In addition, $n$ is a non-zero scale factor, set to 9 from experience. In Eq. (18), the first three objective functions constrain and optimize the location deviation of the target feature points; the remaining two do so for the stripe points. Without constraints, the objective function cannot fully converge to the optimal value and easily falls into a local minimum. With the location uncertainty calculated in Section 2.1 as a constraint, the five objective functions converge well and avoid local minima. In practice, because the target or stripe points are unevenly distributed in the actual image, especially under over-exposure or noise interference, we adopt multi-step optimization in the physical experiment to improve convergence speed: steps (1) and (2) in Eq. (18) are performed separately first, and step (3) then completes the overall optimization, as sketched below.
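A minimal sketch of one bound-constrained step, assuming scipy.optimize.least_squares as the solver: the bounds argument encodes the box constraints of Eq. (19), and only an e1-type residual is shown for brevity. Here 'project' is a placeholder closure that re-estimates the homography from the corrected points and forward-projects the known target grid (Eqs. (10)–(12)); all names are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def e1_residuals(delta, p_meas, project):
    """e1-type residuals: difference between deviation-corrected image points
    and their forward projections under the re-fitted homography."""
    corrected = p_meas - delta.reshape(-1, 2)   # subtract candidate deviations
    return (corrected - project(corrected)).ravel()

def solve_deviation(p_meas, project, sigma_u, sigma_v, n=9):
    """Bound-constrained step with Eq. (19) box constraints:
    |du_ij| <= n*sigma_u(ij), |dv_ij| <= n*sigma_v(ij), with n = 9."""
    ub = np.stack([n * sigma_u, n * sigma_v], axis=1).ravel()
    ub = np.maximum(ub, 1e-9)                   # keep the box strictly feasible
    res = least_squares(e1_residuals, x0=np.zeros_like(ub),
                        bounds=(-ub, ub), args=(p_meas, project))
    return res.x.reshape(-1, 2)                 # per-point (du, dv)
```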

2.3 Light plane function

On the basis of the nonlinear optimization in Section 2.2, the location deviation of each feature point is obtained. After compensation, accurate feature point locations are established. The stripe points are mapped into 3D space through the homography matrix determined by the target and image planes. The RANSAC method [25] is adopted to remove outliers, and the initial value of the light plane is solved using the least squares method. Finally, the maximum likelihood solution of the light plane is obtained through nonlinear optimization.

$Q_{ij}$ is set as a target point and $\tilde{p}_{ij}$ as its corresponding undistorted image point. $\tilde{H}_i$ is the homography matrix between the target and image planes. $\tilde{p}_{ij}^l$ and $Q_{ij}^l$ are the stripe point in the image and in the TCF, respectively, and $\hat{Q}_{ik}^l$ is the intersection point of the light and target planes in the CCF. Then, the image point and the target point satisfy $s\tilde{p}_{ij} = \tilde{H}_i Q_{ij}$, where $s$ is a non-zero coefficient. We can thus obtain

$$Q_{ik}^l = s\tilde{H}_i^{-1}\tilde{p}_{ik}^l, \tag{20}$$
where $\tilde{H}_i$ is decomposed into the rotation matrix $\tilde{R}_i$ and translation vector $\tilde{t}_i$. Then,

$$\hat{Q}_{ik}^l = \tilde{R}_i Q_{ik}^l + \tilde{t}_i. \tag{21}$$

The light plane equation is taken as $ax + by + cz + d = 0$ with $a^2 + b^2 + c^2 = 1$. The stripe plane points $\hat{Q}_{ik}^l = [x_{ik}, y_{ik}, z_{ik}]^T$ in the CCF are then substituted into the least squares fit under the RANSAC constraint to obtain the initial solution $\hat{a}, \hat{b}, \hat{c}, \hat{d}$; the maximum likelihood solution of $\pi$ is obtained through nonlinear optimization. The initial fit is sketched below.
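The initialization can be sketched as a RANSAC plane fit followed by a least-squares refit on the inliers, with the unit-norm constraint $a^2 + b^2 + c^2 = 1$ enforced through the SVD of the centered points; the threshold, iteration count, and names are illustrative, not from the paper.

```python
import numpy as np

def fit_plane_lsq(P):
    """Least-squares plane a*x + b*y + c*z + d = 0 with a^2+b^2+c^2 = 1:
    the unit normal is the smallest right singular vector of the centered points."""
    centroid = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - centroid)
    n = Vt[-1]
    return np.append(n, -n @ centroid)          # (a, b, c, d)

def ransac_plane(P, thresh=0.1, iters=500, rng=np.random.default_rng(0)):
    """Remove coarse points before the final fit: sample 3-point planes,
    keep the largest consensus set, and refit on its inliers."""
    best_inliers = None
    for _ in range(iters):
        abcd = fit_plane_lsq(P[rng.choice(len(P), 3, replace=False)])
        dist = np.abs(P @ abcd[:3] + abcd[3])   # valid since the normal is unit
        inliers = dist < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_plane_lsq(P[best_inliers]), best_inliers
```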

3. Experiment and evaluation

To verify the effectiveness of our proposed method, we conducted simulation and physical experiments. The simulation experiment mainly verifies the performance of our method and the pre-planar method [11] under large image noise. The physical experiment compares the accuracy of our method with the pre-planar method [11] in calibration and measurement, showing that our approach is suitable for calibration under outdoor complex lighting conditions.

3.1 Simulation

In the simulation, we mainly analyze the influence of target feature point and stripe point errors on calibration accuracy to guide practical application. The configuration parameters are as follows: focal length $f_x = f_y = 2181$, principal point $u_0 = 1024$ and $v_0 = 544$, and the first two distortion coefficients $k_1 = 0.45$ and $k_2 = 0.25$. The target has $10 \times 10$ dots at 10 mm intervals, and the light plane function is $0.3241x + 0.9252y - 0.1976z + 213.4142 = 0$. To evaluate the influence of the various factors on the calibration results, we add Gaussian white noise with a standard deviation of up to 5.0 pixels, larger than the 1.0 pixel typical of conventional settings. The root mean square errors (RMSEs) of the light plane parameters with respect to the true values are used to evaluate the performance of the proposed method and the pre-planar method.
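The evaluation protocol can be sketched as a Monte-Carlo loop, assuming a placeholder calibrate() routine that returns $(a, b, c, d)$ from (noisy) image points:

```python
import numpy as np

rng = np.random.default_rng(1)
plane_true = np.array([0.3241, 0.9252, -0.1976, 213.4142])

def rmse_at_noise_level(calibrate, image_pts, noise_std, trials=100):
    """Per-parameter RMSE at one noise level: perturb the image points with
    zero-mean Gaussian noise and re-run the (placeholder) calibration."""
    results = np.array([
        calibrate(image_pts + rng.normal(0.0, noise_std, image_pts.shape))
        for _ in range(trials)
    ])
    return np.sqrt(((results - plane_true) ** 2).mean(axis=0))
```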

3.1.1 Influence of target feature point error on calibration

On-site environments with complex lighting conditions do not allow, or cannot guarantee, clear imaging of the target: poor focus, blur, tilting, reflection, and image noise lead to extraction errors of the feature points and induce calibration error. Therefore, this section focuses on the influence of the target feature point location error on calibration accuracy. In the simulation, Gaussian white noise with a location error of 0.0–5.0 pixels at 0.5-pixel intervals is added to the target feature points, while the stripe points carry a 0.5-pixel location error. Each noise level is independently repeated 100 times, and the RMSE results for each light plane parameter of the pre-planar method and the proposed method are shown in Fig. 3.

Fig. 3 Analysis of calibration accuracy under different target feature points' noise levels. (a) RMS error of a with different target feature points' noise; (b) RMS error of b with different target feature points' noise; (c) RMS error of c with different target feature points' noise; (d) RMS error of d with different target feature points' noise.

As shown in Fig. 3, as the localization deviation of the target feature points increases, the RMSEs of the two methods increase synchronously. The RMSE of the pre-planar method increases linearly with the noise, highlighting its sensitivity to feature point error. In the proposed method, after optimization and re-calibration, the calibration error shows only a moderate increase; the error of the proposed method is ten times smaller than that of the pre-planar method. Thus, our method is robust in complex environments and can effectively resist the calibration error caused by target feature point error, especially under large image noise.

3.1.2 Influence of stripe point error on calibration

On the basis of the analysis in Section 3.1.1, this section mainly discusses the stripe point location error. In the simulation experiment, Gaussian white noise with a location error of 0.0–5.0 pixels at 0.5-pixel intervals is added to the stripe points, while the target feature points carry a 0.5-pixel location error. Each noise level is independently repeated 100 times, and the RMSE results for each light plane parameter of the pre-planar method and the proposed method are shown in Fig. 4.

Fig. 4 Analysis of calibration accuracy under different stripe points' noise levels. (a) RMS error of a with different stripe points' noise; (b) RMS error of b with different stripe points' noise; (c) RMS error of c with different stripe points' noise; (d) RMS error of d with different stripe points' noise.

As shown in Fig. 4, the pre-planar method's error increases linearly with the noise and is thus sensitive to stripe point error. The proposed method, by contrast, solves the location error of each feature point, so its calibration error grows far more slowly; its RMSEs are eight times smaller than those of the pre-planar method. Thus, the proposed method can achieve high-accuracy calibration even with large stripe point error.

3.2 Real experiment

To evaluate the validity of the proposed method, we compare the effectiveness of the proposed method and the pre-planar method [11]. The camera in the structured light sensor is an AVT NIR GT2000 with a Schneider 12 mm lens, image resolution of 2048×1088 pixels, and field of view of 430 mm × 230 mm, working with an 808 nm high-power laser projector. Figure 5 shows the on-site structured light vision sensor and the calibration target. The metal target is embedded with a 7×8 array of LED lights at 12 mm intervals, with a machining precision of 0.02 mm.

Fig. 5 On-site structured light vision sensor and calibration target. (a) Line-structured light vision sensor applied in wheel size measurement system installed in outdoor railway. (b) Metal plane target embedded with LED.

Figure 6 shows the ideal structured light calibration images captured under a shelter. The target feature points and the stripes are relatively clear, with uniform brightness and width and a high signal-to-noise ratio, so high-accuracy calibration is easy to realize. By contrast, the emitting points in Fig. 7 (structured light calibration images captured under complex outdoor lighting conditions) are out of focus or have low brightness because of external stray light interference and metal target reflection. These phenomena seriously affect extraction accuracy, thereby degrading calibration accuracy.

Fig. 6 Ideal images captured in shelter.

Fig. 7 Structured light calibration image captured outdoors under complex lighting conditions. Three kinds of situations are present: the first row is the uneven width of stripe, the second row is the low brightness of the target feature points, and the third row is the target reflective condition.

Figure 8 shows the feature point location deviation correction results under complex lighting conditions, including strong reflection and over-exposure, stripes and feature points intersecting each other, low and uneven brightness, stray light interference, and high-brightness backgrounds. The green cross points are pre-correction feature points and the red cross points the corrected ones; green asterisks are pre-correction stripe points and red asterisks the corrected ones. As seen in Fig. 8, even for feature points with large deviations, the proposed method can still correct them to precise coordinates. The precise coordinates are the extraction coordinates of the target points and stripes under ideal conditions (no external interference), whose location accuracy is close to the true value. To illustrate the effectiveness of the proposed method, we collect ideal images and external-interference images at the same target position, covering four common imaging situations (over-exposure, high reflection, point-stripe interference, and low brightness), and take the coordinates from the ideal images as the true values. Comparing the extraction accuracy of the proposed method and the previous method [11] with respect to the true values, the statistical results in Tables 1 and 2 show that the proposed method converges closer to the true values than the previous method.

Fig. 8 Feature points location deviation correction results under complex lighting conditions. Green cross points are pre-correction feature points, red cross points are corrected ones. Green asterisk points are pre-correction stripe points, red asterisk points are corrected ones. (a) Correction results of target feature points under reflective conditions; (b) Correction results of target and stripe feature points under strong reflective and over-exposure conditions; (c) Correction results of target and stripe feature points where stripe and feature points intersect each other; (d) Correction results of target and stripe feature points under low and uneven brightness; (e) Correction results of target and stripe feature points under stripe over-exposure and stray light interference; (f) Correction results of target and stripe feature points under high brightness background.

Table 1. Deviation of Target Feature Points Before and After Correction

Table 2. Deviation of Stripe Points Before and After Correction

Section 3.2 includes the calibration results’ repeatability analysis and the outdoor physical experiment, where the calibration results are applied in the on-line wheel size measurement system. The calibration methods are divided into four categories: the pre-planar method with ideal images (Calibration with Ideal Images; CII), the pre-planar method under complex outdoor lighting conditions (Calibration with Non-ideal Images; CNII), the proposed method with ideal images (Calibration with Ideal Images Based on Deviation Correction; CIIDC), and the proposed method under complex outdoor lighting conditions (Calibration with Non-ideal Images Based on Deviation Correction; CNIIDC).

3.2.1 Measurement accuracy evaluation

The intrinsic parameters of the camera are calibrated through the Zhang method [13]. The results are $f_x = 2315.8488$, $f_y = 2312.4282$, $u_0 = 1004.8597$, $v_0 = 555.3317$, $k_1 = 0.1267$, and $k_2 = 0.1204$. Meanwhile, the light plane is solved by the different methods in turn, including CII, CNII, CIIDC, and CNIIDC. The results are listed in Table 3.

Table 3. Line-Structured Light Vision Sensor Parameters Calibrated by Different Methods

The LED planar target is placed at four positions arbitrarily, and the intersection points of the light stripe with the horizontal grid lines of the LED planar target are calculated. At each position, the four distances between the first intersection point and the rest are taken as the measured distances. Following the cross-ratio invariance principle, the local coordinates of these intersection points in the LED planar target frame are solved, and their distances are regarded as the gauged distances. The RMS errors between the measured and gauged distances are obtained for the different methods. As shown in Table 4, the RMS error of CII is 0.0475 mm, while that of CNII reaches 0.2282 mm. The RMS error of CIIDC (CII handled by the proposed method) is 0.0381 mm, a slight improvement over CII, which demonstrates that our method can improve measurement accuracy even under ideal imaging conditions. Analogously, the RMS error of CNIIDC is 0.1036 mm, considerably smaller than that of CNII (0.2282 mm).

Table 4. Distance Result of Calibration Precision via Different Methods (mm)

Moreover, to further evaluate the measurement accuracy, we utilize the calibrated line-structured light vision sensor to measure the radius of a cylinder, as shown in Fig. 9. The radius of the cylinder is 30 mm, and the machining accuracy is 0.02 mm.

Fig. 9 Two-cylinder target used for measurement accuracy evaluation.

The cylinder target is placed at nine positions arbitrarily, and the four methods are employed to perform the same measurement with the same stripe images, differing only in the light plane functions. The measured values and errors are listed in Table 5. The RMS error of CIIDC is 0.0607 mm, an improvement over CII (0.0718 mm). Thus, the proposed method can improve measurement accuracy, albeit not remarkably, under ideal imaging conditions. Comparing CNII and CNIIDC, a large improvement in error is observed, from 0.3664 mm to 0.0725 mm. Under complex outdoor lighting conditions, our approach shows a clear advantage and can achieve the same measurement accuracy as under ideal imaging conditions.

Table 5. Cylinder Radius Result of Calibration Accuracy via Different Methods (mm)

3.2.2 Repeatability analysis

The computational uncertainty of the calibration parameters indicates the robustness of a calibration method. To verify the robustness of the proposed method, a structured light vision sensor was used to capture 20 ideal images under a shelter and 20 non-ideal images under outdoor lighting conditions. For the CII and CIIDC calibration methods, we arbitrarily take 10 of the 20 ideal images, repeat the calculation 20 times, and compute the distribution of each parameter. Similarly, for the CNII and CNIIDC methods, we take 10 of the non-ideal images and repeat the calculation 20 times to analyze the distribution of the parameters. Figure 10 shows the uncertainty distribution of each parameter for the different calibration methods.

Fig. 10 Distribution of parameters with four calibration methods. (a) The distribution of light plane parameter a with four calibration methods; (b) The distribution of light plane parameter b with four calibration methods; (c) The distribution of light plane parameter c with four calibration methods; (d) The distribution of light plane parameter d with four calibration methods.

As shown in Fig. 10, for CII, which uses ideal images, the distribution is concentrated, with good stability and high accuracy; CIIDC shows the same concentrated distribution. The CNII method uses images taken outdoors under complex lighting and is calibrated directly by the pre-planar method [11]; owing to lighting interference, the feature points have large deviations, so the calibration results are more dispersed, with low accuracy and greater deviation from the true value. By contrast, the CNIIDC method, using the same outdoor images under complex lighting conditions, obtains the positioning deviation of the feature points through optimization; after compensation, more precise light plane parameters are obtained, effectively reducing the positioning deviation of the image feature points. Its calibration results are accordingly more concentrated in distribution and close to the true value.

3.2.3 Practical application

To verify the measurement accuracy of the proposed method in practical application, we adopt CII, CNII, CIIDC, and CNIIDC for on-site structured-light calibration and wheel size measurement on a railway line. The outdoor calibration and measuring conditions are shown in Fig. 11. We choose two important parameters, namely tread wear and flange thickness, as the analysis objects. We use the results measured by CII as the true values of the geometric parameters because the geometric dimensions of the moving wheels cannot be obtained directly.

Fig. 11 Online dynamic wheel size measurement in outdoor railway line.

The accurate acquisition of the wheel cross profile is a prerequisite for accurate measurement of wheel size. The wheel cross profiles obtained with the different calibration methods are presented in Fig. 12. The profiles from CII, CIIDC, and CNIIDC are close to each other, while the profile reconstructed by CNII shows a large deviation.

Fig. 12 Wheel cross profiles with different calibration methods. (a) Wheel cross profile of freight train. (b) Wheel cross profile of passenger train.

To further evaluate the accuracy of the different methods in actual measurement, we select an on-line dynamic wheel size measurement system installed at Gedian station, Wuhan, China, and take train No. 0941084, passing at 13:10 on February 16, 2018, as the measured object. Of the 218 wheel-sets in total, the 6 locomotive wheel-sets are removed, leaving 212, which are measured using CII, CNII, and CNIIDC. The curves of tread wear and flange thickness based on the different methods are shown in Fig. 13.

Fig. 13 Curves of online dynamic wheel size measurement results. (a) Curves of tread wear measurement results; (b) Curves of flange thickness measurement results.

As shown in Fig. 13(a), the tread wear curves of CNIIDC and CII are close to each other, while CNII shows a large deviation. From Fig. 13(b), the results of CNIIDC are close to those of CII in the flange thickness curve, whereas the results of CNII deviate significantly from CII. Taking the results from CII as the true values, statistics of the online dynamic wheel parameter measurements are computed for the CNIIDC and CNII methods. The RMS error in tread wear between CNIIDC and CII is 0.27 mm, while that of CNII is 1.14 mm. Meanwhile, the RMS error of CNIIDC in flange thickness is 0.40 mm, while that of CNII is 2.78 mm. In summary, the proposed method applied to the online wheel size measurement system achieves the accuracy of CII, and the error meets the requirements of most railway inspection departments.

4. Conclusion

In this paper, a new line-structured light vision sensor calibration method based on correcting image deviation is proposed. This method properly reduces the location deviation of feature points and is suitable for on-site high-accuracy calibration under complex lighting conditions. A mathematical solution for the stripe point location is derived, and the location uncertainty of target feature points and stripe points is established. Given the rigid homography mapping between the target and image planes and the requirement that the distance between the stripe points back-projected to space and the target is minimal, the location deviation of each point is solved through large-scale nonlinear multi-step optimization. After compensation, the high-accuracy light plane equation is solved through the traditional planar target calibration method [11]. Under complex outdoor lighting, the proposed approach achieves the same measurement accuracy that the traditional planar target based calibration method attains under ideal imaging conditions. The proposed method has also been applied to an online dynamic wheel size measurement system, and the results show that it improves the calibration accuracy of the structured-light sensor and significantly enhances measurement accuracy, which has important engineering value.

Funding

National Natural Science Foundation of China (NSFC) (51575033); Co-Construction Project of Scientific Research and Postgraduate Cultivation - Achievements Transformation and Industrialization Program - On-line dynamic detection system of train pantograph-catenary operating conditions, provided by the Beijing Municipal Education Commission.

References

1. B. Q. Shi and J. Liang, "Guide to quickly build high-quality three-dimensional models with a structured light range scanner," Appl. Opt. 55(36), 10158–10169 (2016).

2. Z. Cai, X. Liu, X. Peng, Y. Yin, A. Li, J. Wu, and B. Z. Gao, "Structured light field 3D imaging," Opt. Express 24(18), 20324–20334 (2016).

3. C. K. Sun, L. Tao, P. Wang, and H. Li, "A 3D acquisition system combination of structured-light scanning and shape from silhouette," Chin. Opt. Lett. 4(5), 282–284 (2006).

4. Y. An, T. Bell, B. Li, J. Xu, and S. Zhang, "Method for large-range structured light system calibration," Appl. Opt. 55(33), 9563–9572 (2016).

5. Y. Caulier, "Inspection of complex surfaces by means of structured light patterns," Opt. Express 18(7), 6642–6660 (2010).

6. B. Li and S. Zhang, "Flexible calibration method for microscopic structured light system using telecentric lens," Opt. Express 23(20), 25795–25803 (2015).

7. Y. Zhu, Y. Gu, Y. Jin, and C. Zhai, "Flexible calibration method for an inner surface detector based on circle structured light," Appl. Opt. 55(5), 1034–1039 (2016).

8. Y. Gao, S. Y. Shao, and Q. B. Feng, "A new method for dynamically measuring diameters of train wheels using line structured light visual sensor," in Symposium on Photonics and Optoelectronics (SOPO, 2012), pp. 1–4.

9. H. Meng, X. R. Yang, W. G. Lv, and S. Y. Cheng, "The design of an abrasion detection system for monorail train pantograph slider," Railway Standard Design 61(8), 151–154 (2017).

10. Z. Liu, J. H. Sun, H. Wang, and G. J. Zhang, "Simple and fast rail wear measurement method based on structured light," Opt. Lasers Eng. 49(11), 1343–1351 (2011).

11. G. J. Zhang, Z. Liu, and Z. Z. Wei, "Novel calibration method for a multi-sensor visual measurement system based on structured light," Opt. Eng. 49(4), 258 (2010).

12. R. Y. Tsai, "A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses," IEEE J. Robot. Autom. 3(4), 323–344 (1987).

13. Z. Y. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).

14. Z. Zhang, "Camera calibration with one-dimensional objects," IEEE Trans. Pattern Anal. Mach. Intell. 26(7), 892–899 (2004).

15. Z. Liu, Q. Wu, S. Wu, and X. Pan, "Flexible and accurate camera calibration using grid spherical images," Opt. Express 25(13), 15269–15285 (2017).

16. J. Y. Liu, Y. Li, and S. Chen, "Robust camera calibration by optimal localization of spatial control points," IEEE Trans. Instrum. Meas. 63(12), 3076–3087 (2014).

17. Z. Liu, Q. Wu, X. Chen, and Y. Yin, "High-accuracy calibration of low-cost camera using image disturbance factor," Opt. Express 24(21), 24321–24336 (2016).

18. R. Dewar, "Self-generated targets for spatial calibration of structured light optical sectioning sensors with respect to an external coordinate system," in Proceedings of Robots and Vision '88 Conference, pp. 5–13.

19. D. Q. Huynh, R. A. Owens, and P. E. Hartmann, "Calibrating a structured light stripe system: a novel approach," Int. J. Comput. Vis. 33(1), 73–86 (1999).

20. Z. Z. Wei, L. J. Cao, and G. J. Zhang, "A novel 1D target-based calibration method with unknown orientation for structured light vision sensor," Opt. Laser Technol. 42(4), 570–574 (2010).

21. Z. Liu, X. J. Li, F. J. Li, and G. J. Zhang, "Calibration method for line-structured light vision sensor based on a single ball target," Opt. Lasers Eng. 69(6), 20–28 (2015).

22. Z. Liu, X. Li, and Y. Yin, "On-site calibration of line-structured light vision sensor in complex light environments," Opt. Express 23(23), 29896–29911 (2015).

23. L. Qi, Y. Zhang, X. Zhang, S. Wang, and F. Xie, "Statistical behavior analysis and precision optimization for the laser stripe center detector based on Steger's algorithm," Opt. Express 21(11), 13442–13449 (2013).

24. C. Steger, "An unbiased detector of curvilinear structure," IEEE Trans. Pattern Anal. Mach. Intell. 20(2), 113–125 (1998).

25. O. Chum, J. Matas, and S. Obdrzalek, "Enhancing RANSAC by generalized model optimization," in Proceedings of Asian Conference on Computer Vision (ACCV, 2004), pp. 812–817.
