Optica Publishing Group

Correction of geometric and refractive image distortions in optical coherence tomography applying Fermat’s principle

Open Access

Abstract

We describe a methodology for quantitative image correction in OCT which includes procedures for correction of nonlinear axial scanning and non-telecentric scan patterns, as well as a novel approach for refraction correction in layered media based on Fermat’s principle. The residual spatial error obtained in layered media with a fan-beam hand-held probe was reduced from several hundred micrometers to near the diffraction and coherence-length limits.

©2002 Optical Society of America

Introduction

Optical coherence tomography (OCT) is a relatively new technology, which is capable of noninvasive micron-scale resolution imaging in living biological tissues. Recent OCT research has focused on developing instrumentation appropriate for imaging in clinical settings (e.g. in ophthalmology [1,2], dermatology [3] and gastroenterology [4]), on resolution improvements [1], real-time imaging [5], and on functional imaging such as in color Doppler OCT [6]. A major impediment to the use of OCT for quantitative morphological imaging is image distortions which may occur due to several mechanisms, including nonlinearities in the reference or sample scan mechanisms, non-telecentric (diverging or converging) scan geometries, and the refraction of probe light in the sample.

There are many applications of OCT for which accurate two-dimensional reconstruction of complex features in layered media is required. One application of imaging biometry is in the ophthalmic anterior segment [2], in which not only linear distances but also curvilinear surfaces, interfaces, and enclosed areas must be accurately obtained. In this application, ultrasound biomicroscopy (UBM) is a valuable tool for the diagnosis of appositional angle closure, which is a risk factor for progressive trabecular damage, elevated intraocular pressure and acute angle-closure glaucoma [7]. Since OCT is non-contact, imaging the anterior chamber (AC) angle with OCT [2] greatly improves patient comfort, and may significantly reduce examination time, thereby enabling screening of populations at risk. An additional advantage is the substantial resolution improvement from ∼50 to <15 μm, which is achieved with OCT.

Current-generation real-time OCT systems employ depth-priority scanning with the axial scan implemented using a rapid-scan optical delay (RSOD) in the reference arm [5,8]. The rapid axial scan is readily implemented using resonant scanners; however, the resulting sinusoidally varying delay axially distorts the resulting OCT images. By contrast, the lateral scan is typically slow (4-32 Hz) and may be implemented using closed-loop galvanometric optical scanners without causing image distortions. The use of non-telecentric scan patterns is often necessitated by non-planar sample configurations, e.g. imaging the convex surface of the cornea or the concave surface of a hollow organ or tract. Non-contact imaging, one of the primary advantages of OCT, also leads to significant image distortions due to refraction of the probe beam at the interface between air and smooth surfaces such as the cornea or liquid accumulations in internal organs. Image distortions due to refraction may also occur at internal smooth tissue index boundaries such as the cornea-aqueous interface in the eye. This paper describes algorithms for numerical correction of the three primary unavoidable image distortions in real-time OCT: nonlinear axial scanning, non-telecentric lateral scanning, and refraction at smooth boundaries. The first two corrections are demonstrated in real time prior to image display, while the third requires interactive user definition of smooth tissue boundaries and was thus implemented in post-processing.

Methods

Image transformations

In principle, image transformations can be implemented using either forward or backward mapping approaches. In the raw (uncorrected) OCT image, we define x’ and y’ to denote the coordinates across and along A-scans (single depth scans); x and y are the corresponding coordinates in the target (corrected) image (Fig. 1). Pixel positions are abbreviated as P’ = (x’,y’) and P = (x,y). A forward mapping approach (P = f(P’)) calculates the target position for a given raw image data point. Since the target position P will most likely fall between target pixels, algorithms must be applied to distribute the value of the raw image pixel onto several neighboring pixels. Since the density of this distribution is in general non-homogeneous, the brightness of the target image must be renormalized, a significant computational expense. A backward mapping approach (P’ = F(P)) avoids this disadvantage by mapping each target pixel to a location in the raw image, then using simple interpolation between the surrounding pixels in the raw image to obtain the target pixel value. Furthermore, the number of transformations is at the optimum of one per target pixel, another advantage compared with forward mapping. If the same backward transformation is applied to all images, it can be implemented with a lookup table to achieve real-time imaging [4,9]. Transformations can be cascaded, using the output of one transformation as the input for the next.
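The backward-mapping scheme described above can be sketched in a few lines of NumPy. This is an illustration only, not the authors' implementation; the function and argument names are ours. The precomputed map_x/map_y arrays play the role of the lookup table mentioned in the text: computed once, they can be reused for every frame.

```python
import numpy as np

def backward_map(raw, map_x, map_y):
    """Resample a raw image onto a target grid by backward mapping.

    map_x, map_y hold, for every target pixel, the fractional raw-image
    coordinates P' = F(P); the value is fetched by bilinear interpolation,
    so each target pixel costs exactly one transformation plus one lookup.
    """
    x0 = np.clip(np.floor(map_x).astype(int), 0, raw.shape[1] - 2)
    y0 = np.clip(np.floor(map_y).astype(int), 0, raw.shape[0] - 2)
    fx = np.clip(map_x - x0, 0.0, 1.0)
    fy = np.clip(map_y - y0, 0.0, 1.0)
    # Bilinear interpolation between the four surrounding raw pixels.
    top = raw[y0, x0] * (1 - fx) + raw[y0, x0 + 1] * fx
    bot = raw[y0 + 1, x0] * (1 - fx) + raw[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bot * fy
```

Because no brightness renormalization is needed, the per-frame cost is a fixed, small number of operations per target pixel.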

Fig. 1. A) Correction of nonlinear axial scanning B) Coordinate system of the data acquired (raw image). C) Target coordinate system in a homogeneous medium; D) Refractive interfaces in the sample alter ray paths. Abbreviations: A: full amplitude of depth scan, η: axial duty cycle of acquisition, D: distance on axis to image of scanning pivot (ISP), w: width, d: depth of acquired image, primed coordinates in raw data, unprimed in target image, P and P’: corresponding points in target and source image, φ: scan angle, φmax: extreme scan angle in the vertical center of the image, Lh: distance from P or P1 to the ISP, L1, L2: partial distances in inhomogeneous sample with the indices of refraction n1 and n2, L = Lh+L1+L2.


Nonlinear axial scanning

The motion of the resonant scanner can be well described as sinusoidal, simplifying the backward transformation. In a standard high-speed OCT system the carrier is generated by the scan mirror motion [5,8]. Therefore, the nonlinear motion also limits the usable duty cycle η if no tracking filters are employed. Within this duty cycle, the scan depth d is acquired. With this knowledge, the forward transformation for the axial scan coordinate y can be written as (Fig. 1A)

y = fres,y(x',y') = A·sin(πη·y'/d)  (1)

The total scan depth A can be calculated from

fres,y(x', d/2) = d/2  (2)

as

A = d / (2·sin(πη/2)).  (3)

Therefore, the backward transformation can be written as

x' = Fres,x(x,y) = x  (4)
y' = Fres,y(x,y) = (d/(πη))·arcsin(y/A),  (5)

since the horizontal scan is linear. The transformations to correct non-telecentric scanning and refraction can use the target image of this transformation as the raw image or the transformations can be analytically cascaded.
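As a concrete illustration, the backward transform of Eq. (5), with the amplitude A from Eq. (3), might be coded as follows. The function name and the Python/NumPy rendering are our assumptions; the original processing used MatLab.

```python
import numpy as np

def axial_backward(y, d, eta):
    """Backward transform for the sinusoidal axial scan.

    Maps a target depth y (centered coordinates, -d/2 <= y <= d/2)
    to the raw A-scan coordinate y', for scan depth d and axial
    duty cycle eta.
    """
    A = d / (2.0 * np.sin(np.pi * eta / 2.0))    # Eq. (3)
    return d / (np.pi * eta) * np.arcsin(y / A)  # Eq. (5)
```

A quick sanity check is the round trip through the forward transform, y = A·sin(πη·y'/d), which must reproduce the target depth, and the endpoint condition y'(d/2) = d/2.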

Non-telecentric scanning

To correct for geometric image distortions due to non-telecentric scanning, the coordinate systems for the raw and target images are defined in terms of the scan geometry (Fig. 1B, C). We assume the OCT image is formed using a sample arm probe that incorporates a rotating mirror scanner [2]. The origin for both systems is in the center of the image. x’ is linear with acquisition time and thereby with the lateral scanning mirror movement. The center of the field of view (FOV, having width w and depth d) is a distance D from the scanning pivot (SP) or an image of the scanning pivot (ISP), depending upon the internal details of the sample arm probe. For positive or negative D, scans diverge or converge, respectively. For telecentric scans D approaches infinity; for any real system, this limit can be approximated by a sufficiently large D.

The target pixel position P can also be defined in polar coordinates (φ, L), with the scanning angle φ in the FOV and the distance L from ISP to P (Fig. 1B, C). For a homogeneous sample without refraction (denoted by subscript h), φ and L are given by

φh(x,y) = arctan(x/(D−y)),  (6)

and

Lh(x,y) = √(x² + (D−y)²).  (7)

The scanning angle to reach the extreme of the FOV at the center plane is given by φmax = φh(w/2,0). In the rectangular array of acquired data, the scan angle φ’ in the FOV is linear with position x’: φ’(x’,y’) = 2x’φ’max/w = x’/D, while the distance between ISP and P’ is L’(x’,y’) = D−y’. Since φ = φ’, L = L’, and φmax = φ’max, the complete backward transformations are given by:

x' = Fxh(x,y) = D·arctan(x/(D−y))  (8)
y' = Fyh(x,y) = D − √(x² + (D−y)²)  (9)
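A short sketch of the backward transform of Eqs. (8) and (9), assuming the coordinate conventions of Fig. 1 (the image of the scanning pivot on axis, a distance D above the FOV center); the function name is ours:

```python
import numpy as np

def fan_backward(x, y, D):
    """Backward transform for a diverging (fan-beam) scan.

    Target coordinates (x, y) are centered in the FOV; returns the
    raw-image coordinates (x', y').
    """
    xp = D * np.arctan2(x, D - y)   # Eq. (8): x' = D * phi_h
    yp = D - np.hypot(x, D - y)     # Eq. (9): y' = D - L_h
    return xp, yp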

Refraction correction

In order to correct for refraction, the refractive indices of the sample layers must be known and the interfaces between them must be identified (Fig. 1D). Then, fundamental principles can be applied to correct for beam deflection and to transform optical into physical pathlengths. In the following, we assume k layers of different media in the image area, having indices of refraction n1 to nk, with the probe beam entering the sample from the top of the image in air (n0 = 1). The interfaces lk(x) between layers of differing index are assumed to be smooth functions.

A forward transformation for refraction correction could use Snell’s law to calculate the target pixel for each raw data pixel, by propagating the incident beam sequentially through the sample layers and keeping track of optical path length until the correct distance is reached. However, the raw image is distorted by the scan geometry and by refraction at previous interfaces. Therefore, defining the normal on the interface and the incident angle to apply Snell’s law becomes very complicated in raw image coordinates. For the backward transformation, the interfaces can be defined distortion-free in target coordinates, avoiding this difficulty. However, for the latter transformation, Snell’s law cannot be applied since the route of the beam through the sample is not known a priori. A solution can be obtained, however, by applying Fermat’s principle, which states that light rays will propagate such that the optical path between source and target locations is minimized. Assuming the sample beam passes through the points Pi at the interfaces of the layers to reach P (Fig. 1D), the total optical pathlength is the sum of the distance from ISP to P1 (Eq. (7)) and the optical paths between the subsequent Pi’s and P:

L(P1,…,Pk,P) = Lh(P1) + Σ_{i=1}^{k−1} ni·|PiPi+1| + nk·|PkP|.  (10)

By varying the location of Pi along the interfaces, a minimum of L can be found, satisfying Fermat’s principle. Assuming the paths of the different probe beams do not cross within the FOV, a unique solution exists for the Pi. After iteratively solving for the values of Pi, the complete back transformation can be written as

x' = Fx(P1,…,Pk,P) = Fxh(P1),  (11)
y' = Fy(P1,…,Pk,P) = D − L(P1,…,Pk,P).  (12)
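For a single interface (k = 1), the minimization demanded by Fermat's principle reduces to a one-dimensional search for P1 along the interface. The sketch below is an illustration only: the search bracket, the use of SciPy's bounded minimizer, and all names are our assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fermat_single_interface(P, pivot, interface, n1):
    """Minimize the optical path of Eq. (10) for one refractive interface.

    The ray runs pivot -> P1 on the interface -> P, in air (n = 1)
    above the interface and index n1 below. 'interface' is a smooth
    function y = l(x) in target coordinates. Returns the minimal
    optical path and the lateral position of P1.
    """
    P, pivot = np.asarray(P, float), np.asarray(pivot, float)

    def L(x1):
        P1 = np.array([x1, interface(x1)])
        return (np.hypot(*(P1 - pivot))      # L_h: pivot to P1 in air
                + n1 * np.hypot(*(P - P1)))  # n1 * |P1 P|

    res = minimize_scalar(L, bounds=(P[0] - 2.0, P[0] + 2.0),
                          method='bounded')
    return res.fun, res.x
```

At the minimum, the ray automatically satisfies Snell's law at P1, which provides a convenient sanity check on the optimizer; with several interfaces, the same idea extends to a joint minimization over all the Pi.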
Fig. 2. Versatile lateral scanning hand held probe with a divergent scan used in these experiments. Abbreviations: CL: collimation lens, SM: scanning mirror, RL: relay lens, OL: objective lens, CG: calibration grid with a line spacing of 317 μm, translated stepwise axially, fxx: focal length of lens XX.


Experimental Setup

The high-speed OCT system employed in this letter was based on a previously published design [5], employing a Fourier-domain rapid-scanning optical delay (RSOD) in the reference arm incorporating a resonant scanner oscillating at 2 kHz. In the sample arm, a versatile lateral scanning hand-held probe was used, having a divergent scan and a focal depth of 11 mm [2] (Fig. 2). Two relay lenses imaged the scanning mirror to the aperture of the final objective lens. Both relay lenses were achromats to minimize optical aberrations. This scanner was designed for small focal spot size, compact design, and sufficient working distance to allow for non-contact imaging of the skin or anterior segment of the eye. The image size of 3.77 mm width and 4 mm depth was determined by imaging a metal grid with 317 μm line spacing (LCI Supplies, West Chester, PA), translated axially in steps with a precision linear stage (Fig. 2 right). A broadband (λ=1.32 μm, Δλ=68 nm, FWHM), high power (22 mW) semiconductor optical amplifier light source illuminated the interferometer. For improved SNR, we used an efficient, optical circulator-based interferometer with balanced detection [10]. The axial duty cycle η of the acquisition was 66.7 %. We imaged a test object and the temporal anterior chamber angle in the eye of a healthy volunteer. All images were acquired at 8 frames/s to reduce motion artifacts.

Image processing

The images were preprocessed online in real-time to correct the nonlinear axial scanning [Eq. (5)] and non-telecentric scanning [Eqs. (8),(9)]. The Fermat’s principle algorithm derived above was implemented using MatLab 5.2 and applied offline to the acquired images in the following steps: Since the geometrical distortion due to scan divergence was already removed using the algorithm described above, the first interface was distortion free. Therefore, the first refraction interface could be semi-automatically defined by user input of 3 to 4 points on the interface and refined by active contours [11], a popular segmentation algorithm. Distortion due to refraction at the first interface was then corrected using the Fermat’s principle algorithm. After the correction of refraction at the first interface, the second interface was distortion-free and could be defined in the same manner as the first boundary. The Fermat’s principle algorithm was then applied again for refraction at the second interface.

The calculation of the Fermat’s principle algorithm in its basic form requires a numerical iteration for each target pixel. The computation time was greatly reduced by dividing the target image into tiles. In this case, the iteration was only calculated for the corner points of the tiles. For all the target points within the tiles, a bilinear interpolation was employed to obtain the intermediate transformations. Therefore, for sufficiently large tile sizes, the effective computational expense to calculate the transformation for a target pixel was reduced by a factor of ∼100. Since this is an approximation, small errors may occur, especially if a discontinuity (a strong refractive index change) crosses a tile. We chose a tile size of 10×10 pixels, limiting the error to less than 0.8 pixels.
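The tiling speed-up can be illustrated as follows. This is a sketch only: the helper name, the corner-grid construction, and the use of SciPy's grid interpolator (whose default "linear" method is exactly the bilinear interpolation described above) are our assumptions.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def tiled_transform(shape, transform, tile=10):
    """Evaluate an expensive backward transform on tile corners only.

    'transform' maps target pixel coordinates (x, y) to raw-image
    coordinates (x', y'); inside each tile the exact corner values
    are bilinearly interpolated. Returns full-resolution maps of
    x' and y' for an image of the given (height, width).
    """
    h, w = shape
    ys = np.arange(0.0, h - 1 + tile, tile)  # corner rows (cover 0..h-1)
    xs = np.arange(0.0, w - 1 + tile, tile)  # corner columns
    # Exact (expensive, possibly iterative) transform at corners only.
    corners = np.array([[transform(x, y) for x in xs] for y in ys])
    interp = RegularGridInterpolator((ys, xs), corners)  # bilinear
    yy, xx = np.mgrid[0:h, 0:w]
    pts = np.stack([yy.ravel(), xx.ravel()], -1).astype(float)
    mapped = interp(pts)  # shape (h*w, 2): interpolated (x', y')
    return mapped[:, 0].reshape(h, w), mapped[:, 1].reshape(h, w)
```

For a 10×10-pixel tile, the exact transform is evaluated roughly once per hundred pixels, matching the ∼100-fold reduction quoted above; the bilinear approximation is exact wherever the transform is locally linear.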

Results

Calibration

First, we verified precise correction of the distortion due to the resonant scanner movement. The calibration grid was translated axially in steps of 100 μm, and the position of the central grid wire was determined by its intensity centroid. The maximum position error decreased through the transformation from 176 μm to 4.6 μm (Fig. 3A).

Next, the calibration grid was used to validate the accuracy of the geometric image transformation for the hand-held probe. An overlay of images acquired at different axial positions (already corrected for nonlinear axial scanning) of the grid is shown in Fig. 3B. The steps between axial positions were 500 μm, except the top and bottom steps which were 400 and 300 μm, respectively. The region of best focus differed in the lateral direction, being closer to the top on the sides of the FOV. After calculating the centroids of the calibration points, we used the best fit to obtain the size of the FOV and the distance D between the image of the scanning pivot and the center of the FOV. The FOV had a width w of 3.77 mm and a depth d of 4.0 mm. D was estimated as 11.4 mm, slightly larger than the focal length of the objective lens. Fig. 3C displays the residual error for each grid point as a vector between the true and estimated location. The vectors for the corrected image are magnified ten times for visualization. The maximum error vector length in the FOV decreased from 304 μm in the raw data to 17 μm following geometric transformation.

Fig. 3. A) Distortion correction of nonlinear axial scanning. B) Overlay of calibration images acquired at different axial positions, each image averaged over 50 frames, corrected for nonlinear axial scanning. C) Residual error vectors in air (thin line), without correction (thick line), and with aberration correction (length x10).


Refraction phantom

As a sample test object for correction of multiple distortions, we imaged a drop of Intralipid© (n1=1.33) on a glass microscope cover slip (n2=1.51, measured to be 977(10) μm thick). Fig. 4A shows three obvious distortions: (1) The boundaries of the flat cover slip appear convex, due to the geometric distortion of the diverging scanner. (2) The upper and lower surfaces of the cover slip under the drop appear concave due to refraction by the drop. (3) The cover slip appears thicker than its physical thickness due to refraction by the glass. The image was acquired slightly displaced from the apex of the droplet to avoid saturation of the detection electronics by the large specular reflection which occurs there. Following image correction with the algorithms described above, both cover slip surfaces were rendered flat, with maximum deviations from a straight line of 10 and 13 μm, respectively, and the thickness of the cover slip was measured from the image as 978(4) μm (Fig. 4B). Due to the highly positive curvature at the edges of the drop, two wedge-shaped areas were visible below, where the data was spread out by the algorithm. This effect may be relevant in ophthalmic imaging of patients wearing contact lenses.

Anterior chamber angle of the eye

After validating the distortion correction techniques using the test object, we applied the techniques to correct images of the anterior chamber angle of the eye of a healthy volunteer. The width of the anterior chamber angle and the configuration of the peripheral iris, as represented by the area between the inner corneo-scleral wall and the iris, are important in the diagnosis of primary angle closure glaucoma [7]. Visible in Fig. 5, the appearance of this angle changes substantially with processing. The correction of the nonlinear axial scan (Fig. 5B) expanded the center and compressed top and bottom of the image, while the non-telecentric correction compressed the top and expanded the bottom laterally (Fig. 5C). Since the corneal surface was significantly tilted, refraction played an important role (Fig. 5D).

Discussion

The focus of development in OCT research is shifting from simply acquiring an image to producing images that can be used for diagnosis. As shown here, image processing can significantly reduce the geometric errors within OCT images, allowing accurate measurements of distances, angles, and areas. The main sources of distortion requiring correction include nonlinear axial scanning, non-telecentric scanning, and refraction.

Fig. 4. A) Uncorrected image of an Intralipid© drop on a coverslip, B) corrected image

Fig. 5. OCT images of the temporal anterior chamber angle of a human eye, imaged in vivo at 8 fps, A) raw image, B) corrected for nonlinear axial scanning (duty cycle 66.7 %) C) additionally corrected for divergent scan geometry and D) final image, corrected for refraction at the air-cornea interface (ncornea = 1.38) and at the endothelium-aqueous boundary (naqueous = 1.33)


In the general case, refraction is a three-dimensional problem. Since conventional OCT acquires images in two dimensions, accurate correction of refraction can only be achieved if no significant beam deflection occurs in the third dimension. In imaging the anterior segment, positioning of the B-scan normal to the corneal surface can be readily achieved by monitoring the large specular reflection, which occurs at the apex. In the future, two-dimensional lateral scanning could be used to allow three-dimensional refraction correction.

While we have already implemented correction of geometric distortion in real-time, refraction correction was calculated offline. This was due to both the need for interactive designation of refractive surface contours, as well as computation time in our preliminary implementation. Our MatLab implementation allowed the correction of the images within 7-8 sec on a 1 GHz Pentium III computer. Implementing the algorithm in C++ will reduce processing time by at least an order of magnitude. In combination with automatic interface detection, this should allow online correction during the acquisition.

Spherical aberration can also influence image distortions. We demonstrated with exact ray tracing simulations that such aberrations can substantially displace the ISP when higher scan angles or different probe optics are employed than those used for the data in Fig. 2. Furthermore, under this condition the scan angle in the image is no longer linear with the angle of the scanner. The aberrations in our system proved to be small enough to be neglected, but in cases where the aberrations are significant, these distortions can also be corrected using a backward transformation.

The refraction correction described here is highly sensitive to position noise in the detected interface, because noise perturbs the local slope of the interface much more strongly than its position. This complicates application of the technique to rough surfaces and may limit its use to smooth interfaces.

Conclusion

In conclusion, we have shown that major sources of image distortions in OCT, refraction, scanning geometries and nonlinearities, can be corrected from several hundred micrometers down to 10-20 μm, reaching the diffraction and coherence length limits. With this accuracy, OCT has great potential for micrometer-scale biometry as a fast, noninvasive and quantitative imaging device.

Acknowledgements

The authors would like to thank Jason Goldsmith, MD, for medical discussions, and the National Institutes of Health (grant R24EY13015NIH) for supporting this research.

References and links

1. W. Drexler, U. Morgner, R. K. Ghanta, F. X. Kartner, J. S. Schuman, and J. G. Fujimoto, “Ultrahigh-resolution ophthalmic optical coherence tomography,” Nat. Med. 7, 502–507 (2001). [CrossRef]   [PubMed]  

2. S. Radhakrishnan, A. M. Rollins, J. E. Roth, S. Yazdanfar, V. Westphal, D. S. Bardenstein, and J. A. Izatt, “Real-time optical coherence tomography of the anterior segment at 1310 nm,” Arch. Ophthalmol. 119, 1179–1185 (2001). [PubMed]  

3. J. Welzel, “Optical coherence tomography in dermatology: a review,” Skin Res. Technol. 7, 1–9 (2001). [CrossRef]   [PubMed]  

4. A. M. Rollins, R. Ung-arunyawee, A. Chak, R. C. K. Wong, K. Kobayashi, M. V. Sivak Jr., and J. A. Izatt, “Real-time in vivo imaging of human gastrointestinal ultrastructure by use of endoscopic optical coherence tomography with a novel efficient interferometer design,” Opt. Lett. 24, 1358–1360 (1999). [CrossRef]  

5. A. M. Rollins, M. D. Kulkarni, S. Yazdanfar, R. Ung-arunyawee, and J. A. Izatt, “In vivo video rate optical coherence tomography,” Opt. Express 3, 219–229 (1998). http://www.opticsexpress.org/abstract.cfm?URI=OPEX-3-6-219 [CrossRef]   [PubMed]  

6. S. Yazdanfar, A. M. Rollins, and J. A. Izatt, “Imaging and velocimetry of the human retinal circulation with color Doppler optical coherence tomography,” Opt. Lett. 25, 1448–1450 (2000). [CrossRef]  

7. H. Ishikawa, K. Esaki, J. M. Liebmann, Y. Uji, and R. Ritch, “Ultrasound biomicroscopy dark room provocative testing: A quantitative method for estimating anterior chamber angle width,” Jpn. J. Ophthalmol. 43, 526–534 (1999). [CrossRef]  

8. G. J. Tearney, B. E. Bouma, and J. G. Fujimoto, “High Speed Phase- and Group-Delay Scanning with a Grating-Based Phase Control Delay Line,” Opt. Lett. 22, 1811 (1997). [CrossRef]  

9. P. Mattson, D. Kim, and Y. Kim, “Generalized image warping using enhanced lookup tables,” Int. J. Imaging Syst. Technol. 9, 475–483 (1998). [CrossRef]  

10. A. M. Rollins and J. A. Izatt, “Optimal interferometer designs for optical coherence tomography,” Opt. Lett. 24, 1484–1486 (1999). [CrossRef]  

11. D. J. Williams and M. Shah, “A fast algorithm for active contours and curvature estimation,” Cvgip-Image Understanding 55, 14–26 (1992). [CrossRef]  
