## Abstract

Aberration-corrected imaging of human photoreceptor cells, whether hardware- or software-based, presently requires a complex and expensive setup. Here we use a simple and inexpensive off-axis full-field time-domain optical coherence tomography (OCT) approach to acquire volumetric data of an *in vivo* human retina. Full volumetric data are recorded in 1.3 s. After computational correction of aberrations, single photoreceptor cells were visualized. In addition, the numerical correction of ametropia is demonstrated. Our implementation of full-field optical coherence tomography combines low technical complexity with the possibility of computational image correction.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

The quality of high-resolution imaging of the human retina is strongly influenced by imperfections in the optics of the eye, which cause aberrations and degrade the lateral resolution. While low-order aberrations, e.g., defocus, are easy to correct, high-order aberrations require greater effort. Optical elements for reshaping wavefronts, such as spatial light modulators or deformable mirrors, have to be implemented for correction [1–6]. These components, however, strongly increase the system complexity and cost.

In retinal imaging with optical coherence tomography (OCT), aberrations can, in principle, be corrected in post-processing, since the phase of the signal is acquired in addition to the amplitude. Recently, multiple algorithms for computational aberration correction have been presented [7–11]. However, all of them require a fixed phase relation between the lateral points in a certain field of view, which current commercial flying-spot Fourier-domain OCT systems do not provide. Since the acquisition of the axial dimension is parallelized in these systems, while the lateral dimensions are acquired sequentially, the lateral axes are subject to motion artifacts. For retinal imaging, this results in the lateral phase being dominated by unavoidable eye motion, rather than by the microstructure of the sample and the path of the light through the imaging optics.

To overcome this problem and enable computational aberration correction in post-processing, multiple setups have been demonstrated to accelerate lateral detection. In Fourier-domain line-field OCT [11] and Fourier-domain full-field OCT [9], this was accomplished by parallel detection of either one or two lateral dimensions. With time-domain OCT, aberrations were successfully corrected in *in vivo* retinal images acquired with fast flying-spot en-face OCT [6] or line-field OCT systems [12]. However, all approaches presented thus far required considerable hardware effort.

Here we demonstrate a simple and inexpensive OCT system based on our recently introduced off-axis full-field time-domain OCT (OA-FF-TD-OCT) [13]. It acquires an entire en-face plane with a single exposure and, thus, is insensitive to lateral sample motion, allowing for computationally aberration-corrected *in vivo* imaging of the human retina.

The setup is based on an open Mach–Zehnder interferometer (Fig. 1). The free-space emitting superluminescent diode (SLD) (SLD-340-UHP-840, Superlum Diodes Ltd., Cork, Ireland) emits spatially coherent light with a spectral bandwidth of 25 nm around a central wavelength of 840 nm, corresponding to an axial resolution of 12 μm in air. Its light is collimated and divided into a sample and a reference arm by ${\mathrm{BS}}_{1}$. In the sample arm, light is focused by an achromatic lens into the focal plane in front of the eye. It is then collimated by the eye to illuminate a retinal area of $1.6\times 0.7\text{\hspace{0.17em}}{\mathrm{mm}}^{2}$ with a power of 4 mW. The illuminated field is then imaged onto a camera (acA2040-120 um, Basler, Ahrensburg, Germany) with a pixel spacing of 3.45 μm, a frame rate of 189 Hz, and an exposure time of 400 μs per frame. The numerical aperture (NA) is limited to 0.18 by an additional iris in the common Fourier plane of the imaging lens, ${\mathrm{L}}_{1}$, and the optics of the eye, ${\mathrm{L}}_{\mathrm{eye}}$.
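The quoted resolution figures follow from the stated source and aperture parameters. The short sketch below is a plausibility check only, assuming the standard Gaussian-spectrum coherence-length formula and the Rayleigh criterion; it is not part of the published processing.

```python
import math

def axial_resolution_um(center_nm: float, bandwidth_nm: float) -> float:
    """Axial resolution in air for a Gaussian spectrum: (2 ln 2 / pi) * lambda0^2 / dlambda."""
    return (2.0 * math.log(2.0) / math.pi) * center_nm**2 / bandwidth_nm / 1000.0

def lateral_resolution_um(center_nm: float, na: float) -> float:
    """Diffraction-limited lateral resolution (Rayleigh criterion): 0.61 * lambda0 / NA."""
    return 0.61 * center_nm / na / 1000.0

print(round(axial_resolution_um(840.0, 25.0), 1))    # ~12.5 um; quoted as 12 um in air
print(round(lateral_resolution_um(840.0, 0.18), 1))  # ~2.8 um; quoted as 2.9 um
```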

In the reference arm, light is reflected by a mirror on a translational stage (U-521, Physik Instrumente, Karlsruhe, Germany), which is moved in 7.5 μm steps between the acquisition of consecutive frames. For 250 positions of the motor, images with $2048\times 912$ pixels were acquired.
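These acquisition numbers are internally consistent with the 1.3 s volume time stated in the abstract, as a trivial check shows:

```python
# Stated acquisition parameters: frames per volume, camera frame rate, mirror step
n_steps, frame_rate_hz, step_um = 250, 189.0, 7.5

volume_time_s = n_steps / frame_rate_hz   # time to record one full volume
travel_mm = n_steps * step_um / 1000.0    # total reference-mirror travel

print(round(volume_time_s, 2), travel_mm)  # 1.32 s and 1.875 mm
```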

After being reflected in ${\mathrm{BS}}_{3}$, the light passes a transmission grating with a grating constant of $g=70\text{\hspace{0.17em}}{\mathrm{mm}}^{-1}$, which diffracts the light in the first order. On the camera, this light interferes with the object light, introducing a carrier frequency ${k}_{t}$. The resulting lateral modulation has the same frequency as the grating given by
$${k}_{t} = 2\pi g$$

and serves as a carrier frequency. In the lateral Fourier domain, the carrier frequency shifts the image information in both cross-correlation terms away from the zero frequency. As interference only occurs within the coherence length, these cross-correlation terms contain the desired amplitude and phase of light scattered in the en-face plane selected by the position of the reference mirror. The autocorrelation term, which contains the self-interference of all other signals (e.g., reflections from the setup and cornea, as well as light from other depths), and the DC term remain centered on the zero frequency. With a limited object-information bandwidth and a sufficiently high carrier frequency, the image information of the selected en-face plane and the self-interference can be separated completely.

The grating was required for an optimal computational aberration correction, as the broadband illumination used with the tilted reference beam in our previous setup [13] resulted in a wavelength-dependent frequency shift of the cross-correlation term in Fourier space. This does not degrade image quality, but tilts the en-face planes by the off-axis angle. When correcting aberrations computationally, however, corrections for one wavelength result in additional wavefront errors for other wavelengths, preventing an optimal correction.
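The demodulation of such an off-axis interferogram can be illustrated with a minimal numerical sketch (Python/NumPy; the field size, carrier frequency, and object are invented for illustration, and this is not the actual reconstruction code): the interferogram is Fourier transformed, one cross-correlation term is cropped and shifted to zero frequency, and an inverse transform recovers the complex field.

```python
import numpy as np

# Invented parameters for illustration: field size, carrier, and object
N = 256                       # image size in pixels
fc = 64                       # carrier frequency in FFT bins (set by the grating)
x = np.arange(N)
X, Y = np.meshgrid(x, x)

# Smooth complex field standing in for light from one en-face plane
obj = np.exp(-((X - N/2)**2 + (Y - N/2)**2) / (2 * 40**2)) * np.exp(1j * 0.01 * X)

# Off-axis interferogram: the carrier along x encodes amplitude and phase
raw = np.abs(1 + obj * np.exp(2j * np.pi * fc * X / N))**2

# Demodulation: Fourier transform, select one cross-correlation term,
# shift it to zero frequency, and transform back
spec = np.fft.fftshift(np.fft.fft2(raw))
ctr, win = N // 2, 32         # win must cover the object bandwidth
term = spec[ctr - win:ctr + win, ctr + fc - win:ctr + fc + win]
recon = np.fft.ifft2(np.fft.ifftshift(term))  # complex en-face image
```

The recovered field is a band-limited (here, 4x downsampled) copy of the object, with both amplitude and phase intact, which is exactly what the later aberration correction needs.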

OCT data were reconstructed using an algorithm similar to that of our previous work [13]. For each depth, the raw image was 2D-Fourier transformed and, in each Fourier-transformed image, one cross-correlation term was selected, shifted to the zero frequency, and inversely Fourier transformed to reconstruct the en-face image. This demodulates the interference signal encoded on the carrier frequency. In contrast to Ref. [13], the separation of the cross-correlation terms from the autocorrelation term was not fully adapted to the increased NA and, thus, these terms overlapped significantly, as shown in Fig. 2(a). To suppress the autocorrelation term in the reconstructed images, we exploited the different behaviors of the two terms in our series of en-face images. Due to the broadband illumination, the autocorrelation term does not change when lateral sample movements are smaller than the lateral resolution, as multiple speckle patterns originating from depths separated by more than the coherence length superimpose incoherently on the camera. It is, furthermore, independent of the reference arm position. In contrast, the cross-correlation term changes with every image, as the reference is moved and, thus, the phase of the interference changes. Furthermore, for axial movements larger than the coherence length, the speckle pattern of a different en-face plane is recorded, resulting in an almost random phase relation and speckle pattern between the cross-correlation terms of two different images. By high-pass filtering the series of raw images over the depth, we were able to suppress the autocorrelation term, while the cross-correlation terms remained unchanged, as shown in Fig. 2(b). This technique removes the need for massive lateral oversampling with at least 13 camera pixels for each pixel in the OCT image, as was necessary in previous configurations [13]. Here we used approximately seven pixels to sample one pixel of the reconstructed image. With a different aperture geometry, this can be reduced further.
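The suppression of the static terms can be sketched with a toy stack (Python/NumPy; the frame contents are synthetic, and the simplest possible high-pass over depth, subtraction of the per-pixel mean, stands in for the actual filter):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stack standing in for the raw frames (one per mirror position)
n_frames, ny, nx = 250, 32, 32
autocorr = rng.random((ny, nx))                   # static autocorrelation and DC terms
pattern = rng.random((ny, nx)) - 0.5              # speckle of the cross-correlation term
phase = rng.uniform(0.0, 2.0 * np.pi, n_frames)   # interference phase changes per frame
cross = 0.2 * np.cos(phase)[:, None, None] * pattern
stack = autocorr[None, :, :] + cross

# High-pass over depth: subtracting the per-pixel mean removes the static terms
filtered = stack - stack.mean(axis=0, keepdims=True)
```

Because the interference phase is essentially random from frame to frame, the cross-correlation term survives the filter almost unchanged, while the depth-independent terms are removed.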

In coherent imaging, the formation of an aberrated image ${U}_{\mathrm{img}}$ can be described as the convolution of the wavefield in the object plane ${U}_{\mathrm{obj}}$ with a complex point spread function $P$:
$${U}_{\mathrm{img}} = {U}_{\mathrm{obj}} \ast P.$$

In the frequency domain, this corresponds to the multiplication of the Fourier transform ${\tilde{u}}_{\mathrm{obj}}$ of the wavefield ${U}_{\mathrm{obj}}$ with a phase-only function $\tilde{p}$, which is given by the aberration-induced wavefront distortions and is the Fourier transform of the complex point spread function $P$:
$${\tilde{u}}_{\mathrm{img}} = \tilde{p}\,{\tilde{u}}_{\mathrm{obj}}.$$

To computationally correct the aberrations, the 2D Fourier transform of the complex image data ${\tilde{u}}_{\mathrm{img}}$ is multiplied by the complex-conjugated (thus, inverted) phase function ${\tilde{p}}^{*}$ [8]:
$${\tilde{u}}_{\mathrm{corr}} = {\tilde{p}}^{*}\,{\tilde{u}}_{\mathrm{img}} = {\tilde{u}}_{\mathrm{obj}}.$$

Determining this phase function is crucial for a successful aberration correction. Here we used the iterative optimization algorithm for aberration detection and correction that we previously introduced for full-field swept-source OCT [9]. The phase function was varied systematically, and a metric that reaches its minimum at the highest image sharpness was computed, until the best image quality was achieved. As the metric, we used $\sum _{i,j}{I}_{i,j}^{0.75}$, where ${I}_{i,j}$ are the squared magnitudes of the pixels at position $i$, $j$ [14]. To limit the degrees of freedom of the optimization, the argument of the phase function was parametrized by Zernike polynomials up to the fourth radial degree, excluding piston, tip, and tilt. This resulted in 12 degrees of freedom.
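The metric itself is a one-liner; the sketch below (Python/NumPy, with two invented toy fields of equal total energy) illustrates why it reaches its minimum for the sharpest image:

```python
import numpy as np

def sharpness_metric(field: np.ndarray, gamma: float = 0.75) -> float:
    """Sum of the squared pixel magnitudes raised to gamma (here 0.75)."""
    intensity = np.abs(field) ** 2
    return float(np.sum(intensity ** gamma))

# Equal total energy, different concentration: the sharper field scores lower
sharp = np.zeros((8, 8)); sharp[4, 4] = 8.0   # all energy in one pixel
blurred = np.ones((8, 8))                     # energy spread over all pixels
print(sharpness_metric(sharp) < sharpness_metric(blurred))  # True
```

With an exponent below one, spreading a fixed amount of energy over more pixels always increases the sum, so minimizing the metric favors the most concentrated, i.e., sharpest, image.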

Optimization was performed by first using a simplex downhill algorithm [15], which was insensitive to local minima in this scenario, followed by a conjugate-gradient algorithm [16] to find the precise minimum with increased performance. To further increase the robustness, the NA was artificially decreased to 40% of the maximum NA and then increased step-wise to the original NA.
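A minimal version of this two-stage search, assuming SciPy's implementations of the two optimizers and only a single defocus coefficient in place of the full 12-parameter Zernike set, might look like:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Toy sharp image: sparse bright points, so defocus visibly spreads the energy
N = 64
img = np.zeros((N, N))
img[rng.integers(0, N, 30), rng.integers(0, N, 30)] = 1.0

fy = np.fft.fftfreq(N)[:, None]
fx = np.fft.fftfreq(N)[None, :]
defocus = fx**2 + fy**2            # stand-in for the Zernike defocus shape
true_coeff = 20.0                  # invented aberration strength
aberrated = np.fft.ifft2(np.fft.fft2(img) * np.exp(1j * true_coeff * defocus))

def metric(c):
    """Image-sharpness metric sum(I^0.75) after correcting with coefficient c[0]."""
    corrected = np.fft.ifft2(np.fft.fft2(aberrated) * np.exp(-1j * c[0] * defocus))
    return float(np.sum(np.abs(corrected) ** 1.5))   # |u|^1.5 == (|u|^2)^0.75

# Stage 1: simplex downhill (robust); stage 2: gradient-based refinement
coarse = minimize(metric, x0=[15.0], method="Nelder-Mead")
fine = minimize(metric, x0=coarse.x, method="CG")
```

In the actual reconstruction, the same search runs over all 12 Zernike coefficients, with the NA additionally opened up step-wise for robustness.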

As aberrations were not constant over the field of view, each volume was divided into five sub-volumes, each consisting of $136\times 310$ voxels laterally and 250 voxels in depth. For each of these sub-volumes, we manually selected a region between $136\times 136\times 10$ and $196\times 196\times 8$ voxels, centered on the layer to be corrected. On these regions, we optimized the metric and determined aberrations individually.

The best phase function of each sub-volume was used to correct this sub-volume, and all sub-volumes were finally stitched together, resulting in a single aberration-corrected volume. On a dual Intel Xeon E5620 running at 2.4 GHz, determining the phase function took 38 s for each region, and the subsequent correction of the aberrations of the sub-volume took 8 s.

The resulting volumes still showed a slight curvature and tilt of the retina. For en-face visualization, the photoreceptor layer was segmented, and the segmentation was used to align the photoreceptors to a uniform depth.

*In vivo* investigations were performed using two healthy volunteers. Compliance with the maximum permissible exposure of the retina and all relevant safety rules was confirmed by the responsible safety officer.

To demonstrate the correction of strong defocus, we imaged the eye of a healthy subject with a myopia of $-5\text{\hspace{0.17em}}\mathrm{dpt}$ and an astigmatism of $-0.75\text{\hspace{0.17em}}\mathrm{dpt}$. As there was no focus adjustment in the setup, the acquired B-scans showed severe lateral blurring [Fig. 3(a)]. After optimizing the defocus parameter for a region of $196\times 196\times 8$ voxels and applying the correction to the whole volume, lateral structures such as vessels became visible in Fig. 3(b) (yellow arrow). As no correction for astigmatism was performed, small structures are still blurred. The slight curvature of the vessel shadows in the image is caused by lateral eye motion over the acquisition time of 1.3 s.

The eye of a second subject with a myopia of $-0.5\text{\hspace{0.17em}}\mathrm{dpt}$ and an astigmatism of $-0.75\text{\hspace{0.17em}}\mathrm{dpt}$ was imaged with a dilated pupil, which allowed imaging with an NA of 0.18, corresponding to a diffraction-limited lateral resolution of 2.9 μm.

Two fields, centered at 3.5° and 8° eccentric to the macula, were imaged, each extending 5°.

Figure 4(a) shows the aberration-corrected en-face image of the photoreceptor layer of the two stitched volumes. Prior to numerical aberration correction, speckle noise is visible, but hardly any structure [Figs. 4(c)–4(e)].

We then applied the computational aberration correction algorithm to compensate for the ocular aberrations. In the resulting en-face image of the photoreceptor layer, individual photoreceptors became visible [Figs. 4(a) and 4(g)–4(i)]. We also applied the aberration correction algorithm to a second layer, which revealed small retinal vessels [Fig. 4(j)]. As expected, the determined Zernike coefficients for the two layers [Figs. 4(n) and 4(o)] differ mainly in defocus.

The spacing of the cone mosaic was measured from the diameter of the Yellott's rings [17] in the Fourier transform, shown in Figs. 4(k)–4(m). For temporal eccentricities of 8°, 6°, and 4°, this resulted in spacings of 10.5 μm, 9 μm, and 8.2 μm, respectively, which correspond well with values expected from the literature [18]. At 3.0°, the Yellott's rings were hardly visible (image not shown), marking the limit at which the cone mosaic is resolvable in our measurements.
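The principle behind this measurement can be mimicked numerically (Python/NumPy; the jittered square mosaic, spacing, and field size are invented for illustration, and a real cone mosaic is closer to triangular): the radially averaged power spectrum of the mosaic peaks at the ring radius, whose inverse gives the spacing.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented mosaic: points on a jittered square grid with 8 px spacing
N, spacing = 256, 8
img = np.zeros((N, N))
for yc in range(0, N, spacing):
    for xc in range(0, N, spacing):
        img[int(yc + rng.normal(0, 0.5)) % N, int(xc + rng.normal(0, 0.5)) % N] = 1.0

# Power spectrum: the quasi-regular mosaic produces a ring (Yellott's ring)
power = np.abs(np.fft.fftshift(np.fft.fft2(img)))**2
c = N // 2
yy, xx = np.mgrid[0:N, 0:N]
r = np.hypot(yy - c, xx - c).astype(int)

# Radial average, ignoring the strong zero-frequency region
counts = np.bincount(r.ravel())
profile = np.bincount(r.ravel(), weights=power.ravel()) / np.maximum(counts, 1)
profile[:10] = 0.0
ring_radius = int(np.argmax(profile))
print(N / ring_radius)  # estimated spacing in pixels, close to the true 8
```

In the actual measurement, the ring diameter is read from the Fourier transform of the photoreceptor image and converted from frequency to spacing in the same way.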

Although photoreceptors have been imaged without adaptive optics in healthy eyes after correction of defocus [19–21], the photoreceptor mosaic usually cannot be resolved in ametropic eyes without hardware compensation of ametropia or numerical correction. Previous systems capable of imaging the photoreceptor mosaic were therefore technically demanding. OA-FF-TD-OCT combines a simple, cost-effective setup with the lateral phase stability needed for computational aberration correction. We demonstrated the correction of defocus and of ocular aberrations in healthy volunteers and successfully imaged the photoreceptor-cone pattern.

Despite some disadvantages, such as increased computational complexity, reduced sampling efficiency, and vulnerability to multiply scattered photons, our spatially coherent full-field technique has advantages over confocal imaging [19,22] or laterally incoherent full-field OCT [23]. The latter two techniques reject out-of-focus or aberrated photons and, thus, suffer from reduced SNR in the presence of aberrations. Due to the spatially coherent wide-field illumination and parallel detection in our setup, aberrated photons are also detected, as long as their point spread functions still reach the camera. Any distortions of the interference introduced by the aberrations are fully corrected by the presented procedure. Therefore, we maintain the aberration-free SNR. Thus, our simple approach allows for the computational correction of even strong aberrations in post-processing without loss of sensitivity.

## Funding

Bundesministerium für Bildung und Forschung (BMBF) (13N13763).

## REFERENCES

**1. **Y. Jian, S. Lee, M. J. Ju, M. Heisler, W. Ding, R. J. Zawadzki, S. Bonora, and M. V. Sarunic, Sci. Rep. **6**, 27620 (2016). [CrossRef]

**2. **M. Salas, M. Augustin, L. Ginner, A. Kumar, B. Baumann, R. Leitgeb, W. Drexler, S. Prager, J. Hafner, U. Schmidt-Erfurth, and M. Pircher, Biomed. Opt. Express **8**, 207 (2017). [CrossRef]

**3. **B. Hermann, E. J. Fernández, A. Unterhuber, H. Sattmann, A. F. Fercher, W. Drexler, P. M. Prieto, and P. Artal, Opt. Lett. **29**, 2142 (2004). [CrossRef]

**4. **J. Liang, D. R. Williams, and D. T. Miller, J. Opt. Soc. Am. A **14**, 2884 (1997). [CrossRef]

**5. **Z. Liu, K. Kurokawa, F. Zhang, J. J. Lee, and D. T. Miller, Proc. Natl. Acad. Sci. USA **114**, 12803 (2017). [CrossRef]

**6. **O. P. Kocaoglu, Z. Liu, F. Zhang, K. Kurokawa, R. S. Jonnal, and D. T. Miller, Biomed. Opt. Express **7**, 4554 (2016). [CrossRef]

**7. **N. D. Shemonski, F. A. South, Y.-Z. Liu, S. G. Adie, P. Scott Carney, and S. A. Boppart, Nat. Photonics **9**, 1 (2015). [CrossRef]

**8. **P. Pande, Y.-Z. Liu, F. A. South, and S. A. Boppart, Opt. Lett. **41**, 3324 (2016). [CrossRef]

**9. **D. Hillmann, H. Spahr, C. Hain, H. Sudkamp, G. Franke, C. Pfäffle, C. Winter, and G. Hüttmann, Sci. Rep. **6**, 35209 (2016). [CrossRef]

**10. **A. Kumar, W. Drexler, and R. A. Leitgeb, Opt. Express **21**, 10850 (2013). [CrossRef]

**11. **L. Ginner, A. Kumar, D. Fechtig, L. M. Wurster, M. Salas, M. Pircher, and R. A. Leitgeb, Optica **4**, 924 (2017). [CrossRef]

**12. **L. Ginner, T. Schmoll, A. Kumar, M. Salas, N. Pricoupenko, L. M. Wurster, and R. A. Leitgeb, Biomed. Opt. Express **9**, 472 (2018). [CrossRef]

**13. **H. Sudkamp, P. Koch, H. Spahr, D. Hillmann, G. Franke, M. Münst, F. Reinholz, R. Birngruber, and G. Hüttmann, Opt. Lett. **41**, 4987 (2016). [CrossRef]

**14. **J. R. Fienup and J. J. Miller, J. Opt. Soc. Am. A **20**, 609 (2003). [CrossRef]

**15. **J. A. Nelder and R. Mead, Comput. J. **7**, 308 (1965). [CrossRef]

**16. **M. R. Hestenes and E. Stiefel, *Methods of Conjugate Gradients for Solving Linear Systems* (NBS, 1952), Vol. 49.

**17. **J. I. Yellott, Vis. Res. **22**, 1205 (1982). [CrossRef]

**18. **C. A. Curcio, K. R. Sloan, R. E. Kalina, and A. E. Hendrickson, J. Comp. Neurol. **292**, 497 (1990). [CrossRef]

**19. **M. Pircher, E. Gotzinger, H. Sattmann, R. A. Leitgeb, and C. K. Hitzenberger, Opt. Express **18**, 13935 (2010). [CrossRef]

**20. **D. T. Miller, D. R. Williams, G. M. Morris, and J. Liang, Vis. Res. **36**, 1067 (1996). [CrossRef]

**21. **A. R. Wade and F. W. Fitzke, Lasers Light Ophthalmol. **8**, 129 (1998).

**22. **M. Pircher, B. Baumann, E. Götzinger, and C. K. Hitzenberger, Opt. Lett. **31**, 1821 (2006). [CrossRef]

**23. **P. Xiao, M. Fink, and A. C. Boccara, Opt. Lett. **41**, 3920 (2016). [CrossRef]