Elsevier

Medical Image Analysis

Volume 19, Issue 1, January 2015, Pages 30-45

Persistent and automatic intraoperative 3D digitization of surfaces under dynamic magnifications of an operating microscope

https://doi.org/10.1016/j.media.2014.07.004

Highlights

  • 3D stereovision digitization delivers persistent input for brain shift compensation.

  • Feature-based algorithm recovers the microscope zoom factors used by the neurosurgeon.

  • Zoom factors estimated within 0.06 on 4 full-length clinical brain tumor surgery videos.

  • 3D cortical surfaces digitized in near real-time with accuracy in 0.54–1.35 mm range.

Abstract

One of the major challenges impeding advancement in image-guided surgical (IGS) systems is soft-tissue deformation during surgical procedures. These deformations reduce the utility of the patient’s preoperative images and may produce inaccuracies in the application of preoperative surgical plans. Solutions to compensate for tissue deformations include the acquisition of intraoperative tomographic images of the whole organ for direct displacement measurement and techniques that combine intraoperative organ surface measurements with computational biomechanical models to predict subsurface displacements. The latter solution has the advantage of being less expensive and amenable to surgical workflow. Several modalities such as textured laser scanners, conoscopic holography, and stereo-pair cameras have been proposed for the intraoperative 3D estimation of organ surfaces to drive patient-specific biomechanical models for the intraoperative update of preoperative images. Though each modality has its respective advantages and disadvantages, stereo-pair camera approaches used within a standard operating microscope are the focus of this article. A new method that permits the automatic and near real-time estimation of 3D surfaces (at 1 Hz) under varying magnifications of the operating microscope is proposed. This method has been evaluated on a CAD phantom object and on full-length neurosurgery video sequences (∼1 h) acquired intraoperatively by the proposed stereovision system. To the best of our knowledge, this type of validation study on full-length brain tumor surgery videos has not been done before. The method for estimating the unknown magnification factor of the operating microscope achieves accuracy within 0.02 of the theoretical value on a CAD phantom and within 0.06 on 4 full-length clinical brain tumor surgery videos.
When compared to a laser range scanner, the proposed method for reconstructing 3D surfaces intraoperatively achieves root mean square errors (surface-to-surface distance) in the 0.28–0.81 mm range on the phantom object and in the 0.54–1.35 mm range on 4 clinical cases. The digitization accuracy of the presented stereovision methods indicates that the operating microscope can be used to deliver the persistent intraoperative input required by computational biomechanical models to update the patient’s preoperative images and facilitate active surgical guidance.

Introduction

Intraoperative soft tissue deformations, or shift, can produce inaccuracies in the preoperative plan within image-guided surgical (IGS) systems. For instance, in brain tumor surgery, brain shift can produce inaccuracies of 1–2.5 cm in the preoperative plan (Roberts et al., 1998a, Nimsky et al., 2000, Hartkens et al., 2003). Furthermore, such inaccuracies are compounded by surgical manipulation of the soft tissue. These real-time intraoperative issues make realizing accurate correspondence between the physical state of the patient and their preoperative images challenging in IGS systems. To address them, several intraoperative imaging modalities have been used to characterize soft tissue deformation in IGS systems. Based on the modality used, intraoperative tissue deformation compensation methods can be categorized as: (1) partial or complete volume tomographic intraoperative imaging of the organ undergoing deformation and (2) intraoperative 3D digitization of points on the organ surface, the primary focus of this article. Tomographic imaging modalities such as intraoperative computed tomography (iCT) (King et al., 2013), intraoperative MR (iMR), and intraoperative ultrasound (iUS) have been used to compensate for tissue deformation and shift in hepatectomies (Lange et al., 2004, Bathe et al., 2006, Nakamoto et al., 2007) and neurosurgeries (Butler et al., 1998, Nabavi et al., 2001, Comeau et al., 2000, Letteboer et al., 2005). These volumetric imaging modalities provide direct access to the deformed 3D anatomy. However, they are hampered by surgical workflow disruption, cost, or poor image contrast.

Employing 3D organ surface data to drive biomechanical models that compute 3D anatomical deformation is an alternative to compensating for anatomical deformation with the volumetric imaging methods mentioned above. Recent research has demonstrated that volumetric tissue deformation can be characterized and predicted with reasonable accuracy using organ surface data only (Dumpuri et al., 2010a, Chen et al., 2011, Delorenzo et al., 2012, Rucker et al., 2013). These computational models rely on accurate correspondences between digitized 3D surfaces of the soft-tissue organ taken at various time points in the surgery. Certainly, persistent delivery of 3D organ surface measurements to this type of model-update framework can realize an active IGS system capable of delivering guidance in close to real time. Organ surface data to drive these computational models can be obtained using textured laser range scanners (tLRS), conoscopic holography (Simpson et al., 2012), and stereovision systems. All of these modalities deliver geometric measurements of the organ surfaces in the field of view (FOV) as 3D points, or a point cloud. In the case of the tLRS and stereovision, the point clouds carry color information, making them textured. These modalities are an inexpensive alternative to 3D tomographic imaging modalities and provide an immediate, non-contact method of digitizing 3D points in a FOV. With these 3D organ surface digitization techniques, the required input can be supplied to the patient-specific biomechanical computational framework to compensate for soft tissue deformations in IGS systems with minimal surgical workflow disruption. In this paper, we compare the point clouds obtained by the tLRS and the developed stereovision system capable of digitizing points under varying magnifications and movements of the operating microscope.

Optically tracked tLRS have been used to reliably digitize surfaces, or point clouds, to drive biomechanical models for compensation of intraoperative brain shift and intraoperative liver tissue deformation (Cash et al., 2007, Dumpuri et al., 2010a, Dumpuri et al., 2010b, Chen et al., 2011, Rucker et al., 2013). The tLRS can digitize points with sub-millimetric accuracy, within a root mean square (RMS) error of 0.47 mm (Pheiffer et al., 2012). While the tLRS provides valuable intraoperative information for brain tumor surgery, establishing correspondences between temporally sparse digitized organ surfaces is difficult, which makes computing intermediate updates for brain tumor surgery even more challenging (Ding et al., 2011).
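The surface-to-surface RMS error used throughout the paper to compare point clouds can be sketched as a closest-point distance computation. The following is a minimal illustration (not the authors' exact evaluation code) using a brute-force nearest-neighbour search; for full intraoperative scans a KD-tree (e.g. scipy.spatial.cKDTree) would be used instead:

```python
import numpy as np

def rms_surface_distance(cloud_a, cloud_b):
    """RMS of closest-point distances from each point of cloud_a to cloud_b.

    cloud_a: (N, 3) array of 3D points; cloud_b: (M, 3) array.
    Brute-force nearest neighbour, adequate for small clouds.
    """
    # Pairwise squared distances, shape (N, M)
    d2 = ((cloud_a[:, None, :] - cloud_b[None, :, :]) ** 2).sum(axis=2)
    closest = np.sqrt(d2.min(axis=1))            # distance to nearest point in B
    return float(np.sqrt((closest ** 2).mean())) # root mean square

# Toy check: cloud B is cloud A (in the z = 0 plane) shifted 1 mm along z,
# so every closest-point distance is 1 mm and the RMS error is 1 mm.
a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
b = a + np.array([0.0, 0.0, 1.0])
print(rms_surface_distance(a, b))  # → 1.0
```

Note that closest-point distances are asymmetric (A-to-B generally differs from B-to-A); a symmetric variant averages both directions.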

Stereovision systems of operating microscopes can remedy the deficiencies of the tLRS by providing temporally dense 3D digitization of organ surfaces to drive the patient-specific biomechanical soft-tissue compensation models. Initial work in a similar vein used an operating microscope to virtually visualize critical anatomy in the surgical FOV for neurosurgery and otolaryngology surgery (King et al., 1999, Edwards et al., 2000). In this augmented reality microscope-assisted guided intervention platform, bivariate polynomials for camera calibration (Willson, 1994) are used with a given zoom and focus input setting to establish the correct 3D position of critical-anatomy overlays. Figl et al. (2005) developed a fully automatic calibration method for an optical see-through head-mounted operating microscope over the full range of zoom and focal length settings, using a special calibration pattern. In the present work, we use standard camera calibration techniques (Zhang, 2000) with a content-based approach and do not separate the zoom and focal length settings of the microscope’s optics as done in Willson (1994) and Figl et al. (2005). Our method instead estimates the magnification being used by the neurosurgeon, which results from a combination of the zoom and/or focal length adjustment functions on the operating microscope.
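Under a pinhole-camera approximation, a magnification change by a factor α acts primarily on the effective focal lengths in the intrinsic matrix while the principal point stays roughly fixed. This is a simplified illustration of why a single estimated magnification factor can update a one-time calibration, not the paper's exact optical model:

```python
import numpy as np

def scale_intrinsics(K, alpha):
    """Update a pinhole intrinsic matrix for a magnification change.

    K: 3x3 intrinsic matrix [[fx, s, cx], [0, fy, cy], [0, 0, 1]].
    alpha: magnification factor relative to the calibrated setting.
    Simplifying assumption: zooming by alpha scales the effective focal
    lengths (and skew) while the principal point is unchanged.
    """
    K2 = K.astype(float).copy()
    K2[0, 0] *= alpha  # fx
    K2[1, 1] *= alpha  # fy
    K2[0, 1] *= alpha  # skew term scales with focal length
    return K2

# Example: calibrated at fx = fy = 1000 px, then zoomed in by 1.5x.
K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0,    0.0,   1.0]])
print(scale_intrinsics(K, 1.5)[0, 0])  # → 1500.0
```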

Although stereovision techniques are often used for surface reconstruction in computer-assisted laparoscopic surgeries (Maier-Hein et al., 2013, Maier-Hein et al., in press), in this paper we focus on three stereovision systems that have been used for brain shift correction with biomechanical models. These stereovision systems are housed externally or internally within the operating microscope, which is used routinely in neurosurgeries. The 3D digitization of the organ surface in the operating microscope’s FOV can be accomplished using stereovision theory. The first system uses stereo-pair cameras attached externally to the operating microscope optics (Sun et al., 2005a, Sun et al., 2005b, Ji et al., 2010). This setup renders the assistant ocular arm, often used as a teaching tool, unusable while the cameras are powered on, which limits the acquisition of temporally dense cortical surface measurements. The second system also uses an external stereo-pair camera system attached to the operating microscope and relies on a game-theoretic approach for combining intensity information in the operating microscope’s FOV to digitize 3D points (DeLorenzo et al., 2007, DeLorenzo et al., 2010). It requires manually delineated sulcal features on the cortical surface to compute 3D surfaces or point clouds within the game-theoretic framework. As with the tLRS, the temporally sparse data from these two stereovision systems make establishing correspondence for driving the model-update framework challenging. Paul et al. (2005) developed the third stereovision system, which uses external cameras and can display 3D reconstructed cortical surfaces registered to the patient’s preoperative images for surgical visualization. In Paul et al. (2009), the stereovision aspect of this system was extended to register 3D cortical surfaces acquired by the stereo-pair cameras and compute cortical deformations. One major unaddressed issue in these three stereovision systems is the acquisition of reliable and accurate point clouds from the microscope under varying magnifications and microscope movements for the duration of a typical brain tumor surgery, approximately 1 h.

During neurosurgery, the surgeon frequently moves the head of the operating microscope and zooms in and out of the surgical site to effectively manipulate the organ surface and perform the surgery. The magnification function of the operating microscope is a combination of changes in zoom and focal length within the complex optical system housed inside the head of the operating microscope. The unknown head movements and magnification changes alter the determined camera calibration parameters at the pixel level, cause calibration drift, and, consequently, result in inaccurate point clouds. Several popular methods for self-calibration of cameras, in which an initial camera calibration is not performed, have been developed (Hemayed, 2003).

In published methods, the stereo-pair cameras are either recalibrated or the operating microscope’s optics are readjusted to the initial calibration state whenever a point cloud needs to be obtained during surgery. Overall, the inability to persistently, robustly, and accurately digitize points on the organ surface for the duration of the neurosurgery has been a considerable barrier to widespread adoption of the operating microscope as a temporally dense intraoperative digitization platform. As a result, the addition of an active IGS system capable of soft tissue surgical guidance to the clinical armamentarium has been slow.

In this article, we develop a practical microscope-based digitization platform capable of near real-time intraoperative digitization of 3D points in the FOV under varying magnification settings and physical movements of the microscope. Our stereovision camera system is internal to the operating microscope, which keeps modifications and disruptions to the surgical workflow at a minimum. With this intraoperative microscope-based stereovision system, the surgeon can operate uninterrupted while the video streams from the left and right cameras are acquired, and the assistant ocular arm of the operating microscope remains usable. Preliminary work comparing the accuracy of point clouds obtained from such a microscope-based stereovision system against point clouds obtained from the tLRS on CAD phantom objects was presented in our previous work (Kumar et al., 2013).
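Once a stereo pair is calibrated and rectified, digitizing a 3D point reduces to standard rectified-stereo triangulation: depth is Z = f·B/d for focal length f (pixels), baseline B, and disparity d. A minimal back-projection sketch under that textbook geometry (illustrative only; the platform described here additionally handles magnification changes on top of this):

```python
import numpy as np

def disparity_to_points(disparity, f, baseline, cx, cy):
    """Back-project a rectified-stereo disparity map to 3D points.

    disparity: (H, W) array of disparities in pixels (> 0 where valid).
    f: focal length in pixels; baseline: camera separation (e.g. in mm);
    cx, cy: principal point in pixels.
    Returns an (H, W, 3) array of [X, Y, Z] points and a validity mask.
    """
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    valid = disparity > 0
    # Z = f * B / d on valid pixels; guard the division elsewhere.
    z = np.where(valid, f * baseline / np.where(valid, disparity, 1.0), 0.0)
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    return np.stack([x, y, z], axis=-1), valid

# Example: f = 1000 px, baseline = 5 mm, disparity = 50 px -> Z = 100 mm.
d = np.full((1, 1), 50.0)
pts, ok = disparity_to_points(d, f=1000.0, baseline=5.0, cx=0.0, cy=0.0)
print(pts[0, 0, 2])  # → 100.0
```

In practice the disparity map itself would come from a dense matcher such as semiglobal matching (Hirschmuller, 2008).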

In this paper, we (1) build on the real-time stereovision system of Kumar et al. (2013) to robustly handle varying magnifications and physical movements of the microscope’s head using a content-based approach; (2) compare the theoretical magnification of the microscope’s optical system to the magnification computed by our near real-time algorithm; and (3) evaluate the accuracy of 3D point digitization using this intraoperative microscope-based platform against the gold-standard tLRS on a CAD-designed cortical surface phantom object and on cortical surfaces from 4 full-length clinical brain tumor surgery cases conducted at Vanderbilt University Medical Center (VUMC). To the best of our knowledge, fully automatic near real-time 3D digitization of points in the FOV using an operating microscope subject to unknown magnification settings has not been previously reported. Our fully automatic intraoperative microscope-based digitization platform requires no manual intervention after a one-time initial stereo-pair calibration stage and performs robustly under realistic neurosurgical conditions of large magnification changes and microscope head movements during surgery. Additionally, we validate our methods on full-length, highly dynamic neurosurgical videos lasting over an hour, the typical duration of a brain tumor surgery; to the best of our knowledge, this type of extensive validation of an automatic digitization method has not been done before. Validations of earlier digitization methods (Sun et al., 2005a, DeLorenzo et al., 2007, Paul et al., 2009, Ding et al., 2011) dealt with short video sequences (∼3 to 5 min) acquired at sparse time points and relied on manual initializations. Furthermore, we perform a study comparing the surface digitization accuracy of the stereo-pair in the operating microscope against the gold-standard tLRS on 4 clinical cases.
Overall, we demonstrate a clinical microscope-based digitization platform capable of reliably providing temporally dense 3D textured point clouds of the FOV in near real-time for the entire duration of a neurosurgery and under realistic surgical conditions.

Section snippets

Materials and methods

Section 2.1 describes the equipment and CAD models used for acquiring and evaluating stereovision data. Sections 2.2 and 2.3 explain the digitization of 3D points using the operating microscope under fixed and varying magnification settings, respectively.

Magnification factor evaluation

In this section, we present the estimation of the magnification factors using the presented algorithm and compare it to the theoretical magnification factor. The magnification factor used on the phantom object and the VUMC clinical cases is computed using the Pentero microscope’s left camera video stream. The Pentero microscope displays the primary objective’s focal length, fO, and the Galilean magnification, γ, and the theoretical values of αij and α0k can be computed from Eqs. (4), (5). Table

Discussion

The presented algorithm for automatically estimating the magnification factor of the operating microscope is built on a content-based approach. This approach relies on the temporal persistence of features in the FOV of the microscope, which is a reasonable assumption in neurosurgery. The organ surface is very rich in features for determining SURF keypoints and stereo video acquisition can seamlessly provide the temporal persistence needed to estimate the unknown magnification settings and
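One simple way to exploit such temporally persistent keypoints is to estimate the relative magnification between two frames from matched feature locations: under a pure zoom, all inter-point distances scale by the same factor, and a median over pairwise ratios tolerates some bad matches. This is an illustrative sketch of the idea, not the paper's exact estimator:

```python
import numpy as np

def estimate_scale(pts_prev, pts_curr):
    """Estimate the relative magnification between two frames from matched
    2D keypoint locations (e.g. SURF matches, same order in both arrays).

    Uses the median ratio of pairwise inter-point distances, which is
    invariant to in-plane translation and robust to some outlier matches.
    """
    pts_prev = np.asarray(pts_prev, dtype=float)
    pts_curr = np.asarray(pts_curr, dtype=float)
    i, j = np.triu_indices(len(pts_prev), k=1)   # all unordered point pairs
    d_prev = np.linalg.norm(pts_prev[i] - pts_prev[j], axis=1)
    d_curr = np.linalg.norm(pts_curr[i] - pts_curr[j], axis=1)
    ok = d_prev > 1e-9                            # guard degenerate pairs
    return float(np.median(d_curr[ok] / d_prev[ok]))

# Example: the same four keypoints after a 1.5x zoom about the image origin.
p = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [3.0, 3.0]])
print(estimate_scale(p, 1.5 * p))  # → 1.5
```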

Conclusion

The proposed non-contact intraoperative microscope-based 3D digitization system has an error in the range of 0.28–0.81 mm on the cortical surface phantom object and 0.54–1.35 mm on clinical brain tumor surgery cases. These errors were computed based on surface-to-surface distance measures between point clouds obtained from the tLRS and the operating microscope’s stereovision system. These ranges of accuracy are acceptable for neurosurgical guidance applications. Our system is able to

Acknowledgements

National Institute of Neurological Disorders and Stroke Grant R01-NS049251 funded this research. We acknowledge Vanderbilt University Medical Center, Department of Neurosurgery, EMS Prototyping, Stefan Saur of Carl Zeiss Meditec Inc., and The Armamentarium, Inc. (Houston TX, USA) for providing the needed equipment and resources.

References (63)

  • W.E. Butler et al. A mobile computed tomographic scanner with intraoperative and intensive care unit applications. Neurosurgery (1998)
  • Chang, P.-L., Stoyanov, D., Davison, A.J., Edwards, P.E., 2013. Real-time dense stereo reconstruction using convex...
  • I. Chen et al. Intraoperative brain shift compensation: accounting for dural septa. IEEE Trans. Biomed. Eng. (2011)
  • R.M. Comeau et al. Intraoperative ultrasound for guidance and tissue shift correction in image-guided neurosurgery. Med. Phys. (2000)
  • DeLorenzo, C., Papademetris, X., Staib, L.H., Vives, K.P., Spencer, D.D., Duncan, J.S., 2007. Nonrigid intraoperative...
  • C. DeLorenzo et al. Image-guided intraoperative cortical deformation recovery using game theory: application to neocortical epilepsy surgery. IEEE Trans. Med. Imaging (2010)
  • C. Delorenzo et al. Volumetric intraoperative brain deformation compensation: model development and phantom validation. IEEE Trans. Med. Imaging (2012)
  • S. Ding et al. Semiautomatic registration of pre- and postbrain tumor resection laser range data: method and validation. IEEE Trans. Biomed. Eng. (2009)
  • S. Ding et al. Tracking of vessels in intra-operative microscope video sequences for cortical displacement estimation. IEEE Trans. Biomed. Eng. (2011)
  • P. Dumpuri et al. A fast and efficient method to compensate for brain shift for tumor resection therapies measured between preoperative and postoperative tomograms. IEEE Trans. Biomed. Eng. (2010)
  • P. Edwards et al. Design and evaluation of a system for microscope-assisted guided interventions (MAGI). IEEE Trans. Med. Imaging (2000)
  • M. Figl et al. A fully automated calibration method for an optical see-through head-mounted operating microscope with variable zoom and focus. IEEE Trans. Med. Imaging (2005)
  • S. Giannarou et al. Probabilistic tracking of affine-invariant anisotropic regions. IEEE Trans. Pattern Anal. Mach. Intell. (2012)
  • A. Goshtasby. Registration of images with geometric distortions. IEEE Trans. Geosci. Remote Sens. (1988)
  • T. Hartkens et al. Measurement and analysis of brain deformation during neurosurgery. IEEE Trans. Med. Imaging (2003)
  • R.I. Hartley. Theory and practice of projective rectification. Int. J. Comput. Vision (1999)
  • R. Hartley et al. Multiple View Geometry in Computer Vision (2004)
  • Hemayed, E.E., 2003. A survey of camera self-calibration. In: Proceedings of the IEEE Conference on Advanced Video and...
  • H. Hirschmuller. Stereo processing by semiglobal matching and mutual information. IEEE Trans. Pattern Anal. Mach. Intell. (2008)
  • M. Hu et al. Reconstruction of a 3D surface from video that is robust to missing data and outliers: application to minimally invasive surgery using stereo and mono endoscopes. Med. Image Anal. (2010)
  • Ji, S., Fan, X., Roberts, D.W., Hartov, A., Paulsen, K.D., 2010. An integrated model-based neurosurgical guidance...