
About this Book

In recent years, the workshop "Bildverarbeitung für die Medizin" has established itself through a series of successful events. The goal for 2019 is once again to present current research results and to deepen the dialogue between scientists, industry and users. The contributions in this volume - some of them in English - cover all areas of medical image processing, in particular imaging and image acquisition, machine learning, image segmentation and image analysis, visualization and animation, time series analysis, computer-aided diagnosis, biomechanical modeling, validation and quality assurance, image processing in telemedicine, and much more.



Abstract: Anchor-Constrained Plausibility

A Novel Concept for Assessing Tractography and Reducing False-Positives

The problem of false positives in fiber tractography is one of the grand challenges in the research area of diffusion-weighted magnetic resonance imaging (dMRI). Facing fundamental ambiguities especially in bottleneck situations, tractography generates huge numbers of theoretically possible candidate tracts. Only a fraction of these candidates is likely to correspond to the true fiber configuration, posing a difficult sensitivity-specificity trade-off.

Peter F. Neher, Bram Stieltjes, Klaus H. Maier-Hein

Automatic Detection of Blood Vessels in Optical Coherence Tomography Scans

The aim of this research is to develop a new automated blood vessel (BV) detection algorithm for optical coherence tomography (OCT) scans and corresponding fundus images. The algorithm provides a robust method to detect BV shadows (BVSs) using Radon transformation and other supporting image processing methods. The position of the BVSs is determined in OCT scans and the BV thickness is measured in the fundus images. Additionally, the correlation between BVS thickness and retinal nerve fiber layer (RNFL) thickness is determined. This correlation is of great interest since glaucoma, for example, can be identified by a loss of RNFL thickness.

Julia Hofmann, Melanie Böge, Szymon Gladysz, Boris Jutzi
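
The shadow-detection idea above can be illustrated with a minimal sketch: a blood vessel darkens entire columns of an OCT B-scan, so projecting intensities along the depth axis (the 0° Radon projection) exposes shadows as local minima of a 1-D profile. All names and the threshold below are illustrative, not taken from the paper.

```python
import numpy as np

def detect_shadow_columns(bscan, threshold=0.8):
    """Flag columns whose mean intensity drops well below the image
    average -- a crude stand-in for Radon-based BVS detection."""
    profile = bscan.mean(axis=0)                  # project along depth
    return np.where(profile < threshold * profile.mean())[0]

# toy B-scan: bright tissue with a dark vessel shadow at columns 10-12
bscan = np.ones((64, 32))
bscan[:, 10:13] = 0.1
shadow_cols = detect_shadow_columns(bscan)
```

In the real method, the Radon transform additionally handles tilted shadows by searching over projection angles; the 0° case shown here is the simplest instance.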

Prediction of Liver Function Based on DCE-CT

Liver function analysis is crucial for staging and treating chronic liver diseases (CLD). Despite CLD being one of the most prevalent diseases of our time, research regarding the liver in the Medical Image Computing community often focuses on diagnosing and treating CLD’s long-term effects, such as the occurrence of malignancies, e.g. hepatocellular carcinoma. The Child-Pugh (CP) score is a surrogate for liver function used to quantify liver cirrhosis, a common CLD, and consists of 3 disease progression stages A, B and C. While a correlation between CP and liver-specific contrast agent uptake has been found for dynamic contrast enhanced (DCE)-MRI, no such correlation has been shown for DCE-CT scans, which are more commonly used in clinical practice. Using a transfer learning approach, we train a CNN for prediction of CP based on DCE-CT images of the liver alone. Agreement between the achieved CNN-based scoring and ground truth CP scores is statistically significant, and a rank correlation of 0.43, similar to what is reported for DCE-MRI, was found. Subsequently, a statistically significant CP classifier with an overall accuracy of 0.57 was formed by employing clinically used cutoff values.

Oliver Rippel, Daniel Truhn, Johannes Thüring, Christoph Haarburger, Christiane K. Kuhl, Dorit Merhof
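
The rank correlation reported above is a Spearman-style statistic over ordinal scores. A minimal tie-free implementation (illustrative, not the authors' evaluation code) looks like this:

```python
def spearman_rho(x, y):
    """Spearman rank correlation for tie-free score lists, e.g.
    predicted vs. ground-truth ordinal CP scores."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))
```

With ties (several patients sharing a CP stage), average ranks would be needed instead of this simplified ordering.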

Abstract: Adversarial Examples as Benchmark for Medical Imaging Neural Networks

Deep learning has been widely adopted as the solution of choice for a plethora of medical imaging applications, due to its state-of-the-art performance and fast deployment. Traditionally, the performance of a deep learning model is evaluated on a test dataset, originating from the same distribution as the training set. This evaluation method provides insight regarding the generalization ability of a model.

Magdalini Paschali, Sailesh Conjeti, Fernando Navarro, Nassir Navab

Evaluation of Image Processing Methods for Clinical Applications

Mimicking Clinical Data Using Conditional GANs

While developing medical image applications, their accuracy is usually evaluated on a validation dataset that generally differs from the real clinical data. Since clinical data does not contain ground truth annotations, it is impossible to approximate the real accuracy of the method. In this work, a cGAN-based method is presented that generates realistic-looking clinical data while preserving the topology, and thus the ground truth, of the validation set. Using the example of image registration of brain MRIs, we emphasize the necessity for the method and show that it enables evaluation of the accuracy on a clinical dataset. Furthermore, the topology preservation and realistic appearance of the generated images are evaluated and considered to be sufficient.

Hristina Uzunova, Sandra Schultz, Heinz Handels, Jan Ehrhardt

Abstract: Some Investigations on Robustness of Deep Learning in Limited Angle Tomography

In computed tomography, image reconstruction from an insufficient angular range of projection data is called limited angle tomography. Due to missing data, reconstructed images suffer from artifacts, which cause boundary distortion, edge blurring, and intensity biases. Recently, deep learning methods have been applied very successfully to this problem in simulation studies.

Yixing Huang, Tobias Würfl, Katharina Breininger, Ling Liu, Günter Lauritsch, Andreas Maier

Abstract: nnU-Net: Self-adapting Framework for U-Net-Based Medical Image Segmentation

The U-Net was presented in 2015. With its straightforward and successful architecture it quickly evolved into a commonly used benchmark in medical image segmentation. The adaptation of the U-Net to novel problems, however, comprises several degrees of freedom regarding the exact architecture, preprocessing, training and inference.

Fabian Isensee, Jens Petersen, Andre Klein, David Zimmerer, Paul F. Jaeger, Simon Kohl, Jakob Wasserthal, Gregor Koehler, Tobias Norajitra, Sebastian Wirkert, Klaus H. Maier-Hein

Deep Multi-Modal Encoder-Decoder Networks for Shape Constrained Segmentation and Joint Representation Learning

Deep learning approaches have been very successful in segmenting cardiac structures from CT and MR volumes. Despite continuous progress, automated segmentation of these structures remains challenging due to highly complex regional characteristics (e.g. homogeneous gray-level transitions) and large anatomical shape variability. To cope with these challenges, the incorporation of shape priors into neural networks for robust segmentation is an important area of current research. We propose a novel approach that leverages shared information across imaging modalities and shape segmentations within a unified multi-modal encoder-decoder network. This jointly end-to-end trainable architecture is advantageous in improving robustness due to strong shape constraints and enables further applications due to smooth transitions in the learned shape space. Although no skip connections are used and all shape information is encoded in a low-dimensional representation, our approach achieves high-accuracy segmentation and consistent shape interpolation results on the multi-modal whole heart segmentation dataset.

Nassim Bouteldja, Dorit Merhof, Jan Ehrhardt, Mattias P. Heinrich
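
The smooth transitions in the learned shape space mentioned above amount to interpolating between low-dimensional encodings and decoding each step. A minimal sketch of the interpolation itself (the encoder/decoder networks are omitted, and all names are illustrative):

```python
import numpy as np

def interpolate_shapes(z_a, z_b, steps=5):
    """Linear interpolation between two low-dimensional shape codes.
    Decoding each intermediate code would yield a smooth morph
    between the two input shapes."""
    alphas = np.linspace(0.0, 1.0, steps)
    return np.stack([(1 - a) * z_a + a * z_b for a in alphas])

path = interpolate_shapes(np.zeros(3), 2.0 * np.ones(3), steps=5)
```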

Abstract: Fan-to-Parallel Beam Conversion

Deriving Neural Network Architectures Using Precision Learning

In this paper, we derive a neural network architecture based on an analytical formulation of the parallel-to-fan beam conversion problem following the concept of precision learning [1]. Up to now, this precision learning approach was only used to augment networks with prior knowledge and/or to add more flexibility to existing algorithms. We want to extend this approach: we demonstrate that we can derive a mathematical model to tackle a problem under consideration and use deep learning to formulate different hypotheses on efficient solution schemes, which are then found as the point of optimality of a deep learning training process.

Christopher Syben, Bernhard Stimpel, Jonathan Lommen, Tobias Würfl, Arnd Dörfler, Andreas Maier

Abstract: Tract Orientation Mapping for Bundle-Specific Tractography

While the major white matter tracts are of great interest to numerous studies in neuroscience and medicine, their manual dissection in larger cohorts from diffusion MRI tractograms is time-consuming, requires expert knowledge and is hard to reproduce. Tract orientation mapping (TOM) is a novel concept that facilitates bundle-specific tractography based on a learned mapping from the original fiber orientation distribution function (fODF) peaks to a list of tract orientation maps (also abbr. TOM). Each TOM represents one of the known tracts with each voxel containing no more than one orientation vector.

Jakob Wasserthal, Peter F. Neher, Klaus H. Maier-Hein

Segmentation of Vertebral Metastases in MRI Using an U-Net like Convolutional Neural Network

This study’s objective was to segment vertebral metastases in diagnostic MR images using a deep learning-based approach. Segmentation of such lesions can present a pivotal step towards enhanced therapy planning and implementation of minimally-invasive interventions like radiofrequency ablations. For this purpose, we used a U-Net-like architecture trained with 38 patient cases. Our proposed method has been evaluated by comparison to expertly annotated lesion segmentations via Dice coefficients, sensitivity and specificity rates. While the experiments with T1-weighted MRI images yielded promising results (average Dice score of 73.84%), T2-weighted images were on average rather insufficient (53.02%). To the best of our knowledge, our proposed study is the first to tackle this particular issue, which limits direct comparability with related works. With respect to similar deep learning-based lesion segmentations, e.g. in liver MR images or spinal CT images, our experiments with T1-weighted MR images show similar or in some respects superior segmentation quality.

Georg Hille, Max Dünnwald, Mathias Becker, Johannes Steffen, Sylvia Saalfeld, Klaus Tönnies

Synthetic Training with Generative Adversarial Networks for Segmentation of Microscopies

Medical imaging is often burdened with small amounts of available annotated data. In the case of supervised deep learning algorithms, a large amount of data is needed. One common strategy is to augment the given dataset to increase the amount of training data. Recent research shows that the generation of synthetic images is a possible strategy to expand datasets. In particular, generative adversarial networks (GANs) are promising candidates for generating new annotated training images. This work combines recent GAN architectures in one pipeline to generate pairs of original and segmented medical images for semantic segmentation. Training a U-Net with synthetic images incorporated in addition to common data augmentation shows a performance boost compared to training without synthetic images, from 77.99% to 80.23% average Jaccard index.

Jens Krauth, Stefan Gerlach, Christian Marzahl, Jörn Voigt, Heinz Handels
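
The Jaccard index used to report the performance boost above is the intersection over union of two binary masks. A minimal sketch:

```python
import numpy as np

def jaccard_index(pred, target):
    """Jaccard index (intersection over union) between two binary
    segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0          # both masks empty: perfect agreement
    return np.logical_and(pred, target).sum() / union
```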

Gradient-Based Expanding Spherical Appearance Models for Femoral Model Initialization in MRI

While deep learning strategies for semantic segmentation increasingly take center stage, traditional approaches seem to take a backseat. However, in the domain of medical image processing, labeled training data is rare and expensive to acquire. Thus, traditional methods may still be preferable to deep learning approaches. Many of these conventional approaches require initial localization of the structure of interest (SOI) to provide satisfactory results. In this work we present a fully automatic model initialization approach in MRI that is applicable to anatomical structures containing a near-spherical component. We propose a model that encapsulates the difference between the intensity distribution within the SOI’s spherical component and its proximity. We present our approach using the example of femoral model initialization and compare our initialization results to a diffeomorphic demons registration approach.

Duc Duy Pham, Gurbandurdy Dovletov, Sebastian Warwas, Stefan Landgraeber, Marcus Jäger, Josef Pauli

Deep Segmentation Refinement with Result-Dependent Learning

A Double U-Net for Hip Joint Segmentation in MRI

In this contribution, we propose a 2D deep segmentation refinement approach that is inspired by the U-Net architecture and incorporates result-dependent loss adaptation. The performance of our method regarding segmentation quality is evaluated using the example of hip joint segmentation in T1-weighted MRI data sets. The results are compared to an ordinary U-Net implementation. While the segmentation quality of the proximal femur does not significantly change, our proposed method shows promising improvements for the segmentation of the pelvic bone complex, which shows more shape variability in the 2D image slices along the longitudinal axis.

Duc Duy Pham, Gurbandurdy Dovletov, Sebastian Warwas, Stefan Landgraeber, Marcus Jäger, Josef Pauli

Abstract: Automatic Estimation of Cochlear Duct Length and Volume Size

The exact cochlear length and size are important factors in selecting a suitable cochlear implant. We present a fast cochlear length and volume size estimation method for clinical multi-modal medical images. The method utilizes atlas-model-based segmentation to estimate a transformation from a model to an input volume. The result is used to transform a well-defined segmentation and a point set of the scala tympani to the input image, which segments and estimates the scala tympani length in a few seconds using standard hardware, e.g. a laptop.

Ibraheem Al-Dhamari, Sabine Bauer, Dietrich Paulus, Rania Hilal, Friedrich Lissek, Roland Jacob
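
The length estimation above reduces to transforming an atlas point set into the patient image and measuring the polyline it traces. A minimal sketch, with an affine transform standing in for the atlas-to-image registration (names and the rigid/affine split are illustrative):

```python
import numpy as np

def transform_points(points, A, t):
    """Apply an affine atlas-to-image transform to an (n, 3) point set."""
    return points @ A.T + t

def polyline_length(points):
    """Length of the polyline through consecutive points, e.g. a
    point set placed along the scala tympani."""
    diffs = np.diff(points, axis=0)
    return float(np.sqrt((diffs ** 2).sum(axis=1)).sum())

atlas_pts = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]])
scaled = transform_points(atlas_pts, 2.0 * np.eye(3), np.array([5.0, 0, 0]))
```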

Interactive Neural Network Robot User Investigation for Medical Image Segmentation

Interactive image segmentation bears the advantage of correctional updates to the current segmentation mask when compared to fully automated systems. Especially in the field of intra-operative medical image processing of a single patient, where high accuracy is an uncompromisable necessity, a human operator guiding a system towards an optimal segmentation result is a time-efficient constellation benefiting the patient. Recent categories of neural networks can incorporate human-computer interaction (HCI) data as additional input for segmentation. In this work, we simulate this HCI data during training with state-of-the-art user models, also called robot users, which aim to act similarly to real users given interactive image segmentation tasks. We analyze the influence of the chosen robot users, which mimic different types of users and scribble patterns, on the segmentation quality. We conclude that networks trained with robot users with the most spread-out seeding patterns generalize well during inference with other robot users.

Mario Amrehn, Maddalena Strumia, Markus Kowarschik, Andreas Maier

Tracing of Nerve Fibers Through Brain Regions of Fiber Crossings in Reconstructed 3D-PLI Volumes

Three-dimensional (3D) polarized light imaging (PLI) is able to reveal nerve fibers in the human brain at microscopic resolution. While most nerve fiber structures can be accurately visualized with 3D-PLI, the currently used physical model (based on Jones calculus) is not well suited to distinguish steep fibers from specific fiber crossings. Hence, streamline tractography algorithms tracing fiber pathways easily get misdirected in such brain regions. For the presented study, we implemented and applied two methods to bridge areas of fiber crossings: (i) extrapolation of fiber points with cubic splines and (ii) following the most frequently occurring orientations in a defined neighborhood based on orientation distribution functions gained from 3D-PLI measurements (pliODFs). Applied to fiber crossings within a human hemisphere, reconstructed from 3D-PLI measurements at 64 microns in-plane resolution, both methods were demonstrated to sustain their initial tract direction throughout the crossing region. In comparison, the ODF method offered a more reliable bridging of the crossings with fewer gaps.

Marius Nolden, Nicole Schubert, Daniel Schmitz, Andreas Müller, Markus Axer
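
Method (i) above extrapolates the last streamline points through the crossing region. A simplified sketch using a cubic polynomial fit per coordinate instead of a full spline (an assumption for brevity, not the authors' exact implementation):

```python
import numpy as np

def extrapolate_streamline(points, n_new=1, step=1.0):
    """Bridge a fiber crossing by extrapolating the last (n, d)
    streamline points with a cubic polynomial per coordinate."""
    t = np.arange(len(points), dtype=float)
    t_new = t[-1] + step * np.arange(1, n_new + 1)
    coeffs = [np.polyfit(t, points[:, d], 3) for d in range(points.shape[1])]
    return np.stack([np.polyval(c, t_new) for c in coeffs], axis=1)

# five points on a straight line; the extrapolation continues it
line = np.stack([np.arange(5.0), 2.0 * np.arange(5.0)], axis=1)
nxt = extrapolate_streamline(line, n_new=2)
```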

Dilated Deeply Supervised Networks for Hippocampus Segmentation in MRI

Tissue loss in the hippocampi has been heavily correlated with the progression of Alzheimer’s Disease (AD). The shape and structure of the hippocampus are important factors in terms of early AD diagnosis and prognosis by clinicians. However, manual segmentation of such subcortical structures in MR studies is a challenging and subjective task. In this paper, we investigate variants of the well-known 3D U-Net, a type of convolutional neural network (CNN) for semantic segmentation tasks. We propose an alternative form of the 3D U-Net, which uses dilated convolutions and deep supervision to incorporate multi-scale information into the model. The proposed method is evaluated on the task of hippocampus head and body segmentation in an MRI dataset, provided as part of the MICCAI 2018 segmentation decathlon challenge. The experimental results show that our approach outperforms other conventional methods in terms of different segmentation accuracy metrics.

Lukas Folle, Sulaiman Vesal, Nishant Ravikumar, Andreas Maier
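
The dilated convolutions mentioned above space kernel taps apart to enlarge the receptive field without adding parameters. A 1-D sketch of the operation (written as cross-correlation, as in deep learning frameworks):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """'Valid' 1-D convolution whose kernel taps are spaced
    `dilation` samples apart: the receptive field grows while the
    number of weights stays the same."""
    k = len(kernel)
    span = (k - 1) * dilation + 1      # effective receptive field
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out
```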

Automatic Detection and Segmentation of the Acute Vessel Thrombus in Cerebral CT

Intervention time plays a very important role for stroke outcome and affects different therapy paths. Automatic detection of an ischemic condition during emergency imaging could draw the attention of a radiologist directly to the thrombotic clot. Considering an appropriate early treatment, the immediate automatic detection of a clot could lead to a better patient outcome by reducing time-to-treatment. We present a two-stage neural network to automatically segment and classify clots in the MCA+ICA region for a fast pre-selection of positive cases to support patient triage and treatment planning. Our automatic method achieves an area under the receiver operating curve (AUROC) of 0.99 for the correct positive/negative classification on unseen test data.

Christian Lucas, Jonas J. Schöttler, André Kemmling, Linda F. Aulmann, Mattias P. Heinrich
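
The AUROC reported above equals the probability that a random positive case is scored higher than a random negative one, which gives a compact rank-based sketch:

```python
def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank statistic:
    the fraction of positive/negative pairs ranked correctly
    (ties count half)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```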

Sparsely Connected Convolutional Layers in CNNs for Liver Segmentation in CT

Convolutional neural networks are currently the best working solution for automatic liver segmentation. Generally, each convolutional layer processes all feature maps from the previous layer. We show that the introduction of sparsely connected convolutional layers into the U-Net architecture can benefit the quality of liver segmentation, resulting in an increase of the Dice coefficient by 0.32% and a reduction of the mean surface distance by 3.84 mm on the LiTS data. Evaluation on the IRCAD data set with the application of post-processing showed a 0.70% higher Dice coefficient and a 0.26 mm lower mean surface distance.

Alena-Kathrin Schnurr, Lothar R. Schad, Frank G. Zöllner

Smooth Ride: Low-Pass Filtering of Manual Segmentations Improves Consensus

In this paper, we investigate slice-wise manual segmentation of knee anatomy. Due to high inter-rater variability between annotators, often a high number of raters is required to obtain a reliable ground truth consensus. We conducted an extensive study in which cartilage surface was segmented manually by six annotators on three scans of the knee. The slice-wise annotation results in high-frequency artifacts that can be reduced by averaging over the segmentations of the annotators. A similar effect can also be obtained by smoothing the surface using low-pass filtering. In our results, we demonstrate that such filtering increases the consistency of the annotation of all raters. Furthermore, due to the smoothness of the cartilage surface, strong filtering produces surfaces whose differences from the ground truth are of the same order of magnitude as the inter-rater variation. The remaining root mean squared error lies in the range of 0.11 to 0.14 mm. These findings show that appropriate pre-processing techniques result in segmentations close to the consensus of multiple raters, suggesting that in the future fewer annotators are required to achieve a reliable segmentation.

Jennifer Maier, Marianne Black, Mary Hall, Jang-Hwan Choi, Marc Levenston, Garry Gold, Rebecca Fahrig, Bjoern Eskofier, Andreas Maier
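
The low-pass filtering above can be as simple as a moving average over slice-wise surface positions; a minimal sketch (filter width is an illustrative choice):

```python
import numpy as np

def smooth_surface(values, width=5):
    """Moving-average low-pass filter over slice-wise surface
    positions, suppressing high-frequency annotation jitter."""
    kernel = np.ones(width) / width
    return np.convolve(values, kernel, mode="same")

# alternating 0/1 jitter is pulled toward the 0.5 consensus
zigzag = np.array([0.0, 1.0] * 10)
smoothed = smooth_surface(zigzag)
```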

User Loss: A Forced-Choice-Inspired Approach to Train Neural Networks Directly by User Interaction

In this paper, we investigate whether it is possible to train a neural network directly from user inputs. We consider this approach to be highly relevant for applications in which the point of optimality is not well-defined and user-dependent. Our application is medical image denoising, which is essential in fluoroscopy imaging. In this field every user, i.e. physician, has different preferences, and image quality needs to be tailored towards each individual. To address this important problem, we propose to construct a loss function derived from a forced-choice experiment. In order to make the learning problem feasible, we operate in the domain of precision learning, i.e., we inspire the network architecture by traditional signal processing methods in order to reduce the number of trainable parameters. The algorithm used for this is a Laplacian pyramid with only six trainable parameters. In the experimental results, we demonstrate that two image experts who prefer different filter characteristics between sharpness and de-noising can be created using our approach. Moreover, models trained for a specific user perform best on this user's test data. This approach opens the way towards the implementation of direct user feedback in deep learning and is applicable to a wide range of applications.

Shahab Zarei, Bernhard Stimpel, Christopher Syben, Andreas Maier
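
A forced-choice experiment yields pairwise preferences, which are commonly turned into a logistic (Bradley-Terry style) loss. The sketch below is an illustrative stand-in for such a loss, not the exact formulation of the paper:

```python
import math

def forced_choice_loss(score_preferred, score_other):
    """Logistic loss for one forced-choice answer: small when the
    network scores the user-preferred image higher than the
    alternative, large otherwise."""
    return math.log1p(math.exp(-(score_preferred - score_other)))
```

Summing this over all recorded choices gives a differentiable training objective driven purely by user decisions.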

Sodium Image Denoising Based on a Convolutional Denoising Autoencoder

Sodium Magnetic Resonance Imaging (sodium MRI) is an imaging modality that has gained momentum over the past decade because of its potential to become a biomarker for several diseases, ranging from cancer to neurodegenerative pathologies, along with monitoring of tissue metabolism. One of the most important limitations to the exploitation of this imaging modality is its characteristically low resolution and signal-to-noise ratio compared to classical proton MRI, which is due to the notably lower concentration of sodium than water in the human body. Therefore, denoising is a central aspect with respect to the clinical use of sodium MRI. In this work, we introduce a Convolutional Denoising Autoencoder that is trained on a database of thirteen training subjects with three sodium MRI images each. The results illustrate that the denoised images show a strong improvement in comparison to the state-of-the-art Non-Local Means denoising algorithm. This effect is demonstrated based on different noise metrics and a qualitative evaluation.

Simon Koppers, Edouard Coussoux, Sandro Romanzetti, Kathrin Reetz, Dorit Merhof

Improved X-Ray Bone Segmentation by Normalization and Augmentation Strategies

X-ray images can show great variation in contrast and noise levels. In addition, important subject structures might be superimposed with surgical tools and implants. As medical image datasets tend to be of small size, these image characteristics are often under-represented. For the task of automated, learning-based segmentation of bone structures, this may lead to poor generalization towards unseen images and consequently limits practical application. In this work, we employ various data augmentation techniques that address X-ray-specific image characteristics and evaluate them on lateral projections of the femur bone. We combine those with data and feature normalization strategies that could prove beneficial to this domain. We show that instance normalization is a viable alternative to batch normalization and demonstrate that contrast scaling and the overlay of surgical tools and implants in the image domain can boost the representational capacity of available image data. By employing our best strategy, we can improve the average symmetric surface distance measure by 36.22%.

Florian Kordon, Ruxandra Lasowski, Benedict Swartman, Jochen Franke, Peter Fischer, Holger Kunze
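
Instance normalization, highlighted above as an alternative to batch normalization, computes statistics per sample and channel over the spatial axes only, so the result does not depend on the other images in the batch. A numpy sketch for an (N, C, H, W) tensor:

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Instance normalization: per-sample, per-channel statistics
    over the spatial axes (H, W) only."""
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)
```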

Multi-Modal Super-Resolution with Deep Guided Filtering

Despite the visually appealing results, most Deep Learning-based super-resolution approaches lack the comprehensibility that is required for medical applications. We propose a modified version of the locally linear guided filter for the application of super-resolution in medical imaging. The guidance map itself is learned end-to-end from multimodal inputs, while the actual data is only processed with known operators. This ensures comprehensibility of the results and simplifies the implementation of guarantees. We demonstrate the possibilities of our approach based on multi-modal MR and cross-modal CT and MR data. For both datasets, our approach performs clearly better than bicubic upsampling. For projection images, we achieve SSIMs of up to 0.99, while slice image data results in SSIMs of up to 0.98 for four-fold upsampling given an image of the respective other modality at full resolution. In addition, end-to-end learning of the guidance map considerably improves the quality of the results.

Bernhard Stimpel, Christopher Syben, Franziska Schirrmacher, Philip Hoelter, Arnd Dörfler, Andreas Maier
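
The locally linear guided filter above models the output within each window as a*guide + b, so edges of the guidance image transfer to the filtered source. A 1-D sketch under simplifying assumptions (box filtering with truncated borders; the learned guidance map of the paper is replaced by a given signal):

```python
import numpy as np

def _box(x, r):
    """Moving-average box filter of radius r ('same' borders)."""
    k = np.ones(2 * r + 1) / (2 * r + 1)
    return np.convolve(x, k, mode="same")

def guided_filter_1d(guide, src, r=2, eps=1e-4):
    """Locally linear guided filter: per window, fit src ~ a*guide + b
    and average the coefficients before applying them."""
    m_i, m_p = _box(guide, r), _box(src, r)
    var = _box(guide * guide, r) - m_i * m_i
    cov = _box(guide * src, r) - m_i * m_p
    a = cov / (var + eps)
    b = m_p - a * m_i
    return _box(a, r) * guide + _box(b, r)
```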

Semi-Automatic Cell Correspondence Analysis Using Iterative Point Cloud Registration

In the field of biophysics, deformation of in-vitro model tissues is an experimental technique to explore the response of tissue to a mechanical stimulus. However, automated registration before and after deformation is an ongoing obstacle for measuring the tissue response on the cellular level. Here, we propose to use an iterative point cloud registration (IPCR) method for this problem. We apply the registration method to point clouds representing the cellular centers of mass, which are obtained with a Watershed-based segmentation of phase-contrast images of living tissue acquired before and after deformation. Preliminary evaluation of this method on three data sets shows high accuracy, with 82%-92% correctly registered cells, which outperforms coherent point drift (CPD). Hence, we propose the application of the IPCR method to the problem of cell correspondence analysis.

Shuqing Chen, Simone Gehrer, Sara Kaliman, Nishant Ravikumar, Abdurrahman Becit, Maryam Aliee, Diana Dudziak, Rudolf Merkel, Ana-Sunćana Smith, Andreas Maier
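
Iterative point cloud registration alternates between finding nearest-neighbor correspondences and updating the transform. A deliberately reduced sketch with a translation-only model (the full method also estimates rotation/deformation; names are illustrative):

```python
import numpy as np

def icp_translation(src, dst, iters=10):
    """Translation-only ICP: alternate nearest-neighbour matching of
    cell centers with a closed-form translation update."""
    t = np.zeros(src.shape[1])
    for _ in range(iters):
        moved = src + t
        d = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[d.argmin(axis=1)]         # current correspondences
        t += (nn - moved).mean(axis=0)     # least-squares translation
    return t
```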

Pediatric Patient Surface Model Atlas Generation and X-Ray Skin Dose Estimation

Fluoroscopy is used in a wide variety of examinations and procedures to diagnose or treat patients in modern pediatric medicine. Although these image guided interventions have many advantages in treating pediatric patients, understanding the deterministic and long-term stochastic effects of ionizing radiation is of particular importance for this patient demographic. Therefore, quantitative estimation and visualization of radiation exposure distribution, and dose accumulation over the course of a procedure, is crucial for intra-procedure dose tracking and long-term monitoring for risk assessment. Personalized pediatric models are necessary for precise determination of patient-X-ray interactions. One way to obtain such a model is to collect data from a population of pediatric patients, establish a population based generative pediatric model and use the latter for skin dose estimation. In this paper, we generate a population model for pediatric patients using data acquired by two RGB-D cameras from different views. A generative atlas was established using template registration. We evaluated the registered templates and the generative atlas by computing the mean vertex error to the associated point cloud. The evaluation results show that the mean vertex error was reduced from 25.2 ± 12.9 mm using an average surface model to 18.5 ± 9.4 mm using a pediatric surface model specifically estimated with the generated atlas. Similarly, the dose estimation error was halved from 10.6 ± 8.5% using the average surface model to 5.9 ± 9.3% using the personalized surface estimates.

Xia Zhong, Philipp Roser, Siming Bayer, Nishant Ravikumar, Norbert Strobel, Annette Birkhold, Tim Horz, Markus Kowarschik, Rebecca Fahrig, Andreas Maier

Blind Rigid Motion Estimation for Arbitrary MRI Sampling Trajectories

In this publication, a new blind motion correction algorithm for magnetic resonance imaging with arbitrary sampling trajectories is presented. Patient motion during partial measurements is estimated. Exploiting the image design, a sparse approximation of the reconstructed image is calculated with the alternating direction method of multipliers. The approximation is used with gradient descent methods and derivatives of a rigid motion model to estimate the motion and extract it from the measured data. Finally, adapted gridding is performed to obtain reconstructed images without motion artifacts.

Anita Möller, Marco Maass, Tim J. Parbs, Alfred Mertins
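
The sparsity-promoting step inside ADMM-style image approximation, as used above, is the proximal operator of the l1 norm (soft thresholding). A minimal sketch:

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * ||x||_1: shrink each coefficient
    toward zero by lam, zeroing the small ones -- the sparsity-
    promoting update in ADMM/ISTA iterations."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```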

Maximum Likelihood Estimation of Head Motion Using Epipolar Consistency

Open gantry C-arm systems that are placed within the interventional room enable 3-D imaging and guidance for stroke therapy without patient transfer. This can result in drastically reduced time-to-therapy; however, due to the interventional setting, the data acquisition is comparatively slow. Thus, involuntary patient motion needs to be estimated and compensated to achieve high image quality. Patient motion results in a misalignment of the geometry and the acquired image data. Consistency measures can be used to restore the correct mapping and compensate the motion. They describe constraints on an idealized imaging process, which makes them also sensitive to beam hardening, scatter, truncation or overexposure. We propose a probabilistic approach based on the Student's t-distribution to model image artifacts that affect the consistency measure without originating from motion.

Alexander Preuhs, Nishant Ravikumar, Michael Manhart, Bernhard Stimpel, Elisabeth Hoppe, Christopher Syben, Markus Kowarschik, Andreas Maier

Retrospective Blind MR Image Recovery with Parametrized Motion Models

In this paper, we present an alternating retrospective MRI reconstruction framework based on a parametrized motion model. An image recovery algorithm promoting sparsity is used in tandem with a numeric parameter search to iteratively reconstruct a sharp image. Additionally, we introduce a multiresolution strategy to restrict the numeric complexity. This algorithm is then tested in conjunction with a simple motion model on simulated data and provides robust and fast reconstruction of sharp images from severely corrupted k-spaces.

Tim J. Parbs, Anita Möller, Alfred Mertins

Model-Based Motion Artifact Correction in Digital Subtraction Angiography Using Optical-Flow

Digital subtraction angiography is an important method for obtaining an accurate visualization of contrast-enhanced blood vessels. The technique involves the digital subtraction of two X-ray images, one with contrast filled vessels (fill image) and one without (mask image). Unfortunately, artifacts that are introduced due to the subtraction of misaligned mask and fill images may potentially degrade the diagnostic value of an image. The techniques used for correcting such artifacts involve the use of affine image registration techniques for aligning the mask and fill images and image processing techniques for suppressing the artifacts. Although affine registration techniques often yield acceptable results, they may fail when the imaged object undergoes 3D transformations. The techniques used for suppressing artifacts may cause blurring when a projection image can no longer be corrected using a globally uniform motion model. In this paper, we have introduced an optical-flow-based local motion compensation approach, where pixel-wise deformation fields are computed based on an X-ray imaging model. A visual inspection of the results shows a significant improvement in the image quality due to a reduction in the artifacts caused by misregistrations.

Sai Gokul Hariharan, Christian Kaethner, Norbert Strobel, Markus Kowarschik, Julie DiNitto, Rebecca Fahrig, Nassir Navab
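The subtraction step underlying DSA can be sketched in a few lines. This shows only the classic log-domain subtraction, not the paper's optical-flow motion compensation; the toy image values are illustrative:

```python
import numpy as np

def dsa_subtract(mask, fill, eps=1e-6):
    """Digital subtraction angiography: subtract log-intensities of the
    contrast-free mask image and the contrast-filled fill image.

    Working in the log domain turns the multiplicative Beer-Lambert
    attenuation into an additive term, so static anatomy cancels and
    only the contrast-filled vessels remain.
    """
    mask = np.asarray(mask, dtype=float)
    fill = np.asarray(fill, dtype=float)
    return np.log(mask + eps) - np.log(fill + eps)

# Static background with one dark "vessel" pixel only in the fill image.
mask = np.full((3, 3), 100.0)
fill = mask.copy()
fill[1, 1] = 50.0          # contrast agent attenuates the beam here
diff = dsa_subtract(mask, fill)
```

The static background cancels to zero while the vessel pixel stands out with a positive value.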


Acceleration and Enhancement Techniques for Direct Volume Rendering in Virtual Reality

Further developments of the medical virtual reality application MedicVR were achieved by new approaches to direct volume rendering with the HTC Vive head mounted display. Even though the necessary real-time performance for a smooth interactive experience is accomplished by the shader technologies, the rendered image quality and performance is influenced by several parameters. We propose in this paper multiple technological upgrades to our application including: Lens Matched Shading, interactive volume clipping, semi-adaptive sampling, global illumination in direct volume rendering with shadow rays as well as an optimisation method for shadow rays and multiple light source integration. The quality of the rendered images is increased while keeping impact on performance at minimal levels. The application is currently used in study and planning in the field of dentofacial surgery.

Ingrid Scholl, Alex Bartella, Cem Moluluo, Berat Ertural, Frederic Laing, Sebastian Suder

Efficient Web-Based Review for Automatic Segmentation of Volumetric DICOM Images

Within a clinical image analysis workflow with large data sets of patient images, the assessment and review of automatically generated segmentation results by medical experts are time-constrained. We present a software system able to inspect such quantitative results in a fast and intuitive way, potentially improving the daily repetitive review work of a research radiologist. Combining established standards with modern technologies creates a flexible environment to efficiently evaluate multiple segmentation algorithm outputs based on different metrics and visualizations, and to report these analysis results back to a clinical system environment. First experiments show that the time to review automatic segmentation results can be decreased by roughly 50% while the radiologist's certainty in the assessment is enhanced.

Tobias Stein, Jasmin Metzger, Jonas Scherer, Fabian Isensee, Tobias Norajitra, Jens Kleesiek, Klaus Maier-Hein, Marco Nolden

Abstract: Phase-Sensitive Region-of-Interest Computed Tomography

X-ray Phase-Contrast Imaging (PCI) is a novel imaging technique that can be implemented with a grating interferometer. PCI is compatible with clinical X-ray equipment and yields, in addition to an absorption image, a differential phase image and a dark-field image. Computed Tomography (CT) of the differential phase can in principle provide high-resolution soft-tissue contrast.

Lina Felsner, Martin Berger, Sebastian Kaeppler, Johannes Bopp, Veronika Ludwig, Thomas Weber, Georg Pelzer, Thilo Michel, Andreas Maier, Gisela Anton, Christian Riess

Joint Multiresolution and Background Detection Reconstruction for Magnetic Particle Imaging

Magnetic particle imaging is a tracer-based medical imaging technology that is quite promising for the task of imaging vessel structures or blood flows. For this application, significant areas of the image domain belong to the background, because the tracer material is present only inside the vessels and not in the surrounding tissue. It therefore seems promising to detect the background of the image in early stages of the reconstruction process. This paper proposes a multiresolution and segmentation-based reconstruction, where the background is detected on a coarse level of the reconstruction with only a few degrees of freedom by a Gaussian mixture model and transferred to finer reconstruction levels.

Christine Droigk, Marco Maass, Corbinian Englisch, Alfred Mertins
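The coarse-level background detection can be illustrated with a small two-component Gaussian mixture fitted by expectation-maximization. This numpy-only sketch stands in for the paper's model and omits the multiresolution transfer to finer levels; the synthetic tracer data are made up:

```python
import numpy as np

def gmm2_background_mask(values, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture with EM and return a
    boolean mask marking the low-mean (background) component."""
    v = np.asarray(values, dtype=float).ravel()
    # init means from low/high percentiles, shared initial spread
    mu = np.array([np.percentile(v, 10), np.percentile(v, 90)])
    sigma = np.array([v.std() + 1e-6] * 2)
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each value
        d = (v[:, None] - mu) / sigma
        logp = -0.5 * d**2 - np.log(sigma) + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means and standard deviations
        nk = r.sum(axis=0) + 1e-12
        pi = nk / len(v)
        mu = (r * v[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (v[:, None] - mu)**2).sum(axis=0) / nk) + 1e-9
    bg = int(np.argmin(mu))          # background = low-intensity component
    return r[:, bg] > 0.5

rng = np.random.default_rng(0)
coarse = np.concatenate([rng.normal(0.0, 0.1, 800),   # background voxels
                         rng.normal(1.0, 0.1, 200)])  # tracer signal
mask = gmm2_background_mask(coarse)
```

On a real coarse reconstruction the resulting mask would be upsampled and used to constrain the finer reconstruction levels.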

Abstract: Double Your Views: Exploiting Symmetry in Transmission Imaging

For a plane symmetric object we can find two views - mirrored at the plane of symmetry - that will yield the exact same image of that object. In consequence, having one image of a plane symmetric object and a calibrated camera, we can automatically have a second, virtual image of that object if the 3D location of the symmetry plane is known. In this work, we show for the first time that the above concept naturally extends to transmission imaging and present an algorithm to estimate the 3D symmetry plane from a set of projection domain images based on Grangeat’s theorem.

Alexander Preuhs, Andreas Maier, Michael Manhart, Javad Fotouhi, Nassir Navab, Mathias Unberath
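The core geometric idea above, obtaining a second, virtual view by mirroring at the symmetry plane, reduces to a Householder reflection. A minimal sketch; the plane parameters and example point are illustrative, and the Grangeat-based plane estimation is not shown:

```python
import numpy as np

def reflect_across_plane(points, n, d):
    """Reflect 3-D points across the plane {x : n.x = d} (n a normal).

    Householder reflection: x' = x - 2 (n.x - d) n with unit n.  For a
    plane-symmetric object, reflecting a camera/source position this way
    yields the second, virtual view described in the abstract.
    """
    n = np.asarray(n, float)
    n = n / np.linalg.norm(n)
    points = np.atleast_2d(np.asarray(points, float))
    return points - 2.0 * ((points @ n) - d)[:, None] * n

# Example: mirror a source position across the plane x = 1.
virtual = reflect_across_plane([3.0, 2.0, 5.0], n=[1, 0, 0], d=1.0)
```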

3D-Reconstruction of Stiff Wires from a Single Monoplane X-Ray Image

The fusion of preoperative data with intraoperative fluoroscopic images has been shown to reduce contrast agent, radiation dose and procedure time during endovascular repair of aortic aneurysms. However, the quality of the fusion may deteriorate due to often severe deformations of the vasculature caused by instruments such as stiff wires. To adapt the preoperative information intraoperatively to these deformations, the 3D positions of the inserted instruments are required. In this work, we propose a reconstruction method for stiff wires that requires only a single monoplane acquisition, keeping the impact on the clinical workflow to a minimum. To this end, the wire is segmented in the available X-ray image. To allow for a reconstruction in 3D, we then estimate a virtual second view of the wire orthogonal to the real projection based on vessel centerlines from a preoperative computed tomography. Using the real and estimated wire positions, we reconstruct the catheter using epipolar geometry. We achieve a mean modified Hausdorff distance of 4.1 mm between the 3D reconstruction and the true wire course.

Katharina Breininger, Moritz Hanika, Mareike Weule, Markus Kowarschik, Marcus Pfister, Andreas Maier
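The reported evaluation metric, the modified Hausdorff distance between two point sets, can be computed as follows. The toy wire course is illustrative; the reconstruction itself is not reproduced:

```python
import numpy as np

def modified_hausdorff(a, b):
    """Modified Hausdorff distance between point sets a (N,3) and b (M,3):
    the larger of the two directed mean closest-point distances."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())

# Toy check: a reconstructed wire shifted by 2 mm from the true course.
true_wire = np.stack([np.linspace(0, 10, 11),
                      np.zeros(11), np.zeros(11)], axis=1)
recon = true_wire + np.array([0.0, 2.0, 0.0])
mhd = modified_hausdorff(true_wire, recon)
```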

Regularized Landmark Detection with CAEs for Human Pose Estimation in the Operating Room

Robust estimation of the human pose is a critical requirement for the development of context-aware assistance and monitoring systems in clinical settings. Environments like operating rooms or intensive care units pose distinct visual challenges for human pose estimation, such as frequent occlusions, clutter and difficult lighting conditions. Moreover, privacy concerns play a major role in health care applications and make it necessary to use unidentifiable data, e.g. blurred RGB images or depth frames. Since the data basis is therefore much smaller than for human pose estimation in common scenarios, pose priors can be beneficial as regularization when training robust estimation models. In this work, we investigate to what extent existing pose estimation methods are suitable for the challenges of clinical environments and propose a CAE-based regularization method to correct estimated poses that are anatomically implausible. We show that our models, trained solely on depth images, reach similar results on the MVOR dataset [1] as RGB-based pose estimators while intrinsically being non-identifiable. In further experiments we show that our CAE regularization can cope with several pose perturbations, e.g. missing parts or left-right flips of joints.

Lasse Hansen, Jasper Diesel, Mattias P. Heinrich

Abstract: Does Bone Suppression and Lung Detection Improve Chest Disease Classification?

Chest radiography is the most common clinical examination type. To improve the quality of patient care and to reduce workload, researchers started developing methods for automatic pathology classification. In our paper [1], we investigate the effect of advanced image processing techniques – initially developed to support radiologists – on the performance of deep learning techniques.

Ivo M. Baltruschat, Leonhard A. Steinmeister, Harald Ittrich, Gerhard Adam, Hannes Nickisch, Axel Saalbach, Jens von Berg, Michael Grass, Tobias Knopp

Towards Automated Reporting and Visualization of Lymph Node Metastases of Lung Cancer

For lung cancer staging, the involvement of lymph nodes in the mediastinum, meaning along the trachea and bronchi, has to be assessed. Depending on the staging results, treatment options include radiation therapy, chemotherapy, or lymph node resection. We present a processing pipeline to automatically generate visualization-supported case reports to simplify reporting and to improve interdisciplinary communication, e. g. between nuclear medicine physicians, radiologists, radiation oncologists, and thoracic surgeons. To evaluate our method, we obtained detailed feedback from the local division of nuclear medicine: Although patient-specific anatomy was not yet considered, the presented approach was deemed to be highly useful from a clinical perspective.

Nico Merten, Philipp Genseke, Bernhard Preim, Michael C. Kreissl, Sylvia Saalfeld

Workflow Phase Detection in Fluoroscopic Images Using Convolutional Neural Networks

In image-guided interventions, the radiation dose to the patient and personnel can be reduced by positioning the blades of a collimator to block off unnecessary X-rays and restrict the irradiated area to a region of interest. In certain stages of the operation, workflow phase detection can define objects of interest and thereby enable automatic collimation. Workflow phase detection can also be beneficial for clinical time management or the operating rooms of the future. In this work, we propose a learning-based approach for the automatic classification of three surgical workflow phases. Our data consist of 24 congenital cardiac interventions with a total of 2985 fluoroscopic 2D X-ray images. We compare two different convolutional neural network architectures and investigate their performance regarding each phase. Using a residual network, a class-wise averaged accuracy of 86.14% was achieved. The predictions of the trained models can then be used for context-specific collimation.

Nikolaus Arbogast, Tanja Kurzendorfer, Katharina Breininger, Peter Mountney, Daniel Toth, Srinivas A. Narayan, Andreas Maier
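The class-wise averaged accuracy reported above is the mean of the per-class recalls (balanced accuracy), which avoids bias toward frequent phases. A small sketch with made-up labels:

```python
import numpy as np

def classwise_averaged_accuracy(y_true, y_pred, n_classes):
    """Mean of per-class recalls: for each class, the fraction of its
    samples predicted correctly, averaged over classes."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    accs = [(y_pred[y_true == c] == c).mean() for c in range(n_classes)]
    return float(np.mean(accs))

# Three workflow phases with imbalanced class sizes.
y_true = [0, 0, 0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 0, 1, 1, 1, 2, 0]
acc = classwise_averaged_accuracy(y_true, y_pred, n_classes=3)
```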

Abstract: Interpretable Explanations of Black Box Classifiers Applied on Medical Images by Meaningful Perturbations Using Variational Autoencoders

The growing popularity of black box machine learning methods for medical image analysis makes their interpretability a crucial task. To make a system, e.g. a trained neural network, trustworthy for a clinician, it needs to be able to explain its decisions and predictions. In our work we tackle the problem of explaining the predictions of medical image classifiers trained to differentiate between different types of pathologies and healthy tissue [1].

Hristina Uzunova, Jan Ehrhardt, Timo Kepp, Heinz Handels

Abstract: Deep Transfer Learning for Aortic Root Dilation Identification in 3D Ultrasound Images

Valve-sparing aortic root reconstruction presents an alternative to valve replacement. However, choosing the optimal prosthesis size for the individual patient is a critical task during surgery. To assist the surgeons in their decision making, a pre-operative surgery planning tool based on 3D ultrasound data has been proposed.

Jannis Hagenah, Mattias Heinrich, Floris Ernst

Abstract: Leveraging Web Data for Skin Lesion Classification

The success of deep learning is mainly based on the assumption that, for the given application, a large amount of annotated data is available. In medical imaging applications, access to a large, well-annotated data set is restricted, and such data are time-consuming and costly to obtain. Although techniques such as data augmentation can be leveraged to increase the size and variability within the data set, the representativeness of the training set is still limited by the number of available samples.

Fernando Navarro, Sailesh Conjeti, Federico Tombari, Nassir Navab

Feasibility Study on CNN-Based Identification and TICI Classification of Cerebral Ischemic Infarctions in DSA Data

The aim of this feasibility study is to examine whether an image-based TICI classification of ischemic infarctions can be automated using current machine learning methods. The TICI score (Thrombolysis in Cerebral Infarction) describes the local findings at the infarction site and the downstream cerebral perfusion after endovascular treatment. The underlying image data are (2D+t) image series from two orthogonal views (lateral and anterior-posterior) acquired by digital subtraction angiography (DSA). Based on 698 image sequences, we investigated to what extent a convolutional neural network (CNN) classifies correctly, using either minimum intensity projection data derived from the time series or the time series information explicitly. In the process, various configurations/combinations of occlusion site and TICI score were defined and analyzed with regard to the expected complexity. The results show that the TICI score and occlusion site of ischemic infarctions can be determined automatically and reliably, at least for clearly differing TICI scores; feasibility is thus demonstrated.

Maximilian Nielsen, Moritz Waldmann, Andreas Frölich, Jens Fiehler, René Werner

Image-Based Detection of MRI Hardware Failures

Currently in Magnetic Resonance Imaging (MRI) systems, most hardware failures are only detected after a component has stopped functioning properly. In many cases, this results in a downtime of the system. Moreover, sometimes defective parts are not identified correctly, which may result in more parts than necessary being replaced, causing extra costs. Hardware-related problems in MRI systems often have an impact on image quality. Given an imaging protocol and a well-functioning MRI system, certain image quality metrics have a normal range in a given patient population; such metrics will therefore show a measurable change in behavior in case of a hardware problem. In this work, we identified such simple and powerful metrics, based on signal-to-noise ratio, noise variance and image symmetry, for hardware failures related to shimming and local RF coils. Since these metrics must be calculated for every MRI image during the clinical workflow, computation time is a further constraint. Using these quality metrics as features for machine learning algorithms, at acceptable computation time, we are able to identify the failing MRI components with an accuracy of up to 0.96 AUROC.

Bhavya Jain, Nadine Kuhnert, André deOliveira, Andreas Maier
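The exact metric definitions are not given in the abstract, so the following is an illustrative guess at simple SNR, noise-variance and left-right symmetry measures of the kind described. The choice of background region and the toy image are assumptions:

```python
import numpy as np

def image_quality_metrics(img):
    """Three simple image-based quality metrics: SNR, background noise
    variance, and left-right symmetry (1.0 = perfectly symmetric)."""
    img = np.asarray(img, float)
    background = img[:2, :]                 # assume top rows contain air/noise
    noise_var = background.var()
    snr = img.mean() / (np.sqrt(noise_var) + 1e-12)
    mirrored = img[:, ::-1]
    symmetry = 1.0 - np.abs(img - mirrored).mean() / (np.abs(img).mean() + 1e-12)
    return snr, noise_var, symmetry

# A left-right symmetric toy image.
perfect = np.tile(np.array([1.0, 2.0, 2.0, 1.0]), (4, 1))
snr, var, sym = image_quality_metrics(perfect)
```

A hardware fault that breaks coil symmetry or raises the noise floor would shift these values out of their normal range.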

Detection of Unseen Low-Contrast Signals Using Classic and Novel Model Observers

Automatic task-based image quality assessment is important in various clinical and research applications. In this paper, we propose a neural network model observer, a novel concept which has recently been investigated. It is trained and tested on simulated images with different contrast levels, with the aim of distinguishing images based on their quality/contrast. Our model shows promising properties: its output is sensitive to image contrast and generalizes well to unseen low-contrast signals. We also compare the results of the proposed approach with those of a channelized Hotelling observer (CHO) on the same simulated dataset.

Yiling Xu, Frank Schebesch, Nishant Ravikumar, Andreas Maier

Abstract: Imitating Human Soft Tissue with Dual-Material 3D Printing

Currently, it is common practice to use three-dimensional (3D) printers not only for rapid prototyping in industry, but also in the medical area to create applications for training inexperienced surgeons. In a clinical training simulator for minimally invasive bone drilling to fix hand fractures with Kirschner wires (K-wires), a 3D-printed hand phantom must be not only geometrically but also haptically correct. Due to the limited view during an operation, surgeons need to localize underlying risk structures precisely, purely by feeling specific bony protrusions of the human hand.

Johannes Maier, Maximilian Weiherer, Michaela Huber, Christoph Palm

A Mixed Reality Simulation for Robotic Systems

In interventional angiography, kinematic simulation of robotic system prototypes in early development phases facilitates the detection of design errors. In this work, a game engine visualization with smartglasses output is developed for such a robotic simulation. The goal is a better perception of the prototype through more realistic visualization. The achieved realism is evaluated in a user study. Additionally, the inclusion of real rooms' walls into the simulation's collision model is tested and evaluated, to verify smartglasses as a tool for interactive room planning. The walls are reconstructed from point clouds using mean shift segmentation and RANSAC. Afterwards, the obtained wall estimates are ordered using a simple neighborhood graph.

Martin Leipert, Jenny Sadowski, Michèle Kießling, Emeric Kwemou Ngandeu, Andreas Maier

Image Quality Assessments

Deep learning with Convolutional Neural Networks (CNNs) requires large numbers of training and test data sets, which usually involves time-consuming visual inspection of medical image data. Recently, crowdsourcing methods have been proposed to obtain such large training sets from untrained observers. In this paper, we propose to establish a lightweight method within the daily routine of radiologists in order to collect simple image quality annotations on a large scale. In multiple diagnostic centres, we analyse the acceptance rate of the radiologists and whether a substantial total number of professional annotations can be acquired for later use in deep learning. Using a simple control panel with three buttons, 6 radiologists in 5 imaging centres assessed image quality within their daily routine. Altogether, 1527 DICOM image studies (MR, CT, and X-ray) were subjectively assessed in the first 70 days, which demonstrates that a considerable number of training data sets can be collected with such a method in a short time. The acceptance rate of the radiologists indicates that even more data sets could be acquired if corresponding incentives are introduced, as discussed in the paper. Since the proposed method is incorporated in the daily routine of radiologists, it can easily be scaled to an even larger number of professional observers.

Medha Juneja, Mechthild Bode-Hofmann, Khay Sun Haong, Steffen Meißner, Viola Merkel, Johannes Vogt, Nobert Wilke, Anja Wolff, Thomas Hartkens

Abstract: HoloLens Streaming of 3D Data from Ultrasound Systems to Augmented Reality Glasses

Two-dimensional ultrasound (US) imaging is one of the most common tools for diagnostic procedures. However, this imaging modality requires highly experienced and skilled operators to mentally reconstruct three-dimensional (3D) anatomy from these images. Additionally, the physician’s gaze is focused on the screen of the US system instead of the probe and patient. In order to overcome these problems, we propose real-time 3D US in combination with augmented reality (AR) glasses (specifically Microsoft HoloLens) to render the volume relative to the US probe [1].

Felix von Haxthausen, Floris Ernst, Ralf Bruder, Mark Kaschwich, Verónica García-Vázquez

Open-Source Tracked Ultrasound with Anser Electromagnetic Tracking

Image-guided interventions (IGT) have shown huge potential to improve medical procedures or even allow for new treatment options. Most ultrasound (US)-based IGT systems use electromagnetic (EM) tracking for localizing US probes and instruments. However, EM tracking is not always reliable in clinical settings because the EM field can be disturbed by medical equipment. So far, most researchers have used and studied commercial EM trackers with their IGT systems, which in turn limited the possibilities to customize the trackers in order to minimize distortions and make the systems robust for clinical use. In light of current good scientific practice initiatives that increasingly request researchers to publish the source code corresponding to a paper, the aim of this work was to test the feasibility of using the open-source EM tracker Anser EMT for localizing US probes in a clinical US suite for the first time. The standardized protocol of Hummel et al. yielded a jitter of 0.1 ± 0.1 mm and a position error of 1.1 ± 0.7 mm, which is comparable to the 0.1 mm and 1.0 mm of a commercial NDI Aurora system. The rotation error of Anser EMT was 0.15 ± 0.16°, which is lower than the at least 0.4° of the commercial tracker. We consider tracked US feasible with Anser EMT if an accuracy of 1–2 mm is sufficient for a specific application.

Alfred Michael Franz, Herman Alexander Jaeger, Alexander Seitel, Pádraig Cantillon-Murphy, Lena Maier-Hein
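Jitter and position error as reported for the Hummel et al. protocol can be computed from repeated static measurements roughly as follows. This is a simplified reading of the protocol; the grid of poses and the toy numbers are illustrative:

```python
import numpy as np

def jitter_and_position_error(measured, reference):
    """Jitter: mean per-pose spread of repeated static measurements.
    Position error: mean distance between the averaged measurement and
    the known reference position of each pose."""
    measured = np.asarray(measured, float)    # (n_poses, n_repeats, 3)
    reference = np.asarray(reference, float)  # (n_poses, 3)
    centers = measured.mean(axis=1)
    jitter = np.linalg.norm(measured - centers[:, None, :], axis=-1).mean()
    pos_err = np.linalg.norm(centers - reference, axis=-1).mean()
    return jitter, pos_err

# One reference pose, two noisy repeated measurements (units: mm).
ref = np.array([[0.0, 0.0, 0.0]])
meas = np.array([[[1.1, 0.0, 0.0], [0.9, 0.0, 0.0]]])
jit, err = jitter_and_position_error(meas, ref)
```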

Navigated Interventions in the Head and Neck Region

Standardized Assessment of a New, Compact Field Generator

Electromagnetic (EM) tracking systems use an EM field generated by a field generator (FG) to localize surgical instruments at the intervention site. Typically, the larger the range of the tracking volume, the larger the FG. The Planar 10-11 FG recently introduced by NDI (Northern Digital Inc., Waterloo, ON, Canada) is the first to combine a compact design with a comparatively large tracking volume. Using a standardized measurement protocol, the FG was tested for its robustness against external sources of interference and for its accuracy. The mean positional accuracy is 0.59 mm (standard setup) with a mean jitter of 0.26 mm. The mean orientation error of 0.10° is very low. The largest metal-induced position error (4.82 mm) is caused by steel SST 303; for steel SST 416 the position error (0.10 mm) is smallest. Compared to two other FGs from NDI, the Planar 10-11 FG tends to achieve better accuracy results. Owing to its compactness and the resulting mobility, the FG could therefore help increase the use of EM tracking systems in the clinic.

Benjamin J. Mittmann, Alexander Seitel, Lena Maier-Hein, Alfred M. Franz

Abstract: Multispectral Imaging Enables Visualization of Spreading Depolarizations in Gyrencephalic Brain

Spreading Depolarization (SD) is a phenomenon in the brain related to the abrupt depolarization of neurons in gray matter, which results from a breakdown of ion gradients across the neuron membrane and propagates like a wave of ischemia. While modulating the hemodynamic response of SDs is a therapeutic target, the lack of imaging methods that allow for monitoring SDs with high spatiotemporal resolution hinders progress in the field. In this work, we address this bottleneck with a new method for brain imaging based on multispectral imaging (MSI).

Leonardo Ayala, SJ Wirkert, MA Herrera, Adrián Hernández-Aguilera, AS Vermuri, E Santos, L Maier-Hein

Combining Ultrasound and X-Ray Imaging for Mammography

A Prototype Design

This study aims at the combination of 3D breast ultrasound and 2D mammography images to improve the accuracy of breast cancer diagnosis. Ultrasound breast imaging has been shown to have advantages for differentiating cysts and solid masses that are not visible in an X-ray image. Moreover, the specificity of X-ray imaging decreases with increasing breast thickness, so that ultrasound is usually used as an adjunct to X-ray breast imaging. A fully automatic system to obtain both 2D mammography and 3D ultrasound images is used. The alignment of a 2D mammography image in the cone-beam coordinate system and a 3D ultrasound image in a Cartesian coordinate system is the essential task in this study. We have shown that deviations of up to 23 mm caused by the cone-beam system can be calculated and corrected utilizing the geometry information of the hardware. The multimodal image reading tool is presented in a GUI for clinical diagnosis. The presented setup might lead to a distinct improvement in efficiency and add diagnostic value to the acquisition.

Qiuting Li, Christoph Luckner, Madeleine Hertel, Marcus Radicke, Andreas Maier

Towards In-Vivo X-Ray Nanoscopy

Acquisition Parameters vs. Image Quality

X-ray microscopy is a powerful imaging technique that permits the investigation of specimens on the nanoscale, with resolutions of up to 700 nm in 3D. In the context of biomedical research, this is a promising technology for studying the microstructure of biological tissues. However, X-ray microscopy (XRM) systems are not designed for in-vivo applications and are mainly used in the field of material sciences, where dose is irrelevant; high-resolution scans may take up to 10 hours. Our long-term goal is to utilize this modality to study the effects of disease dynamics and treatment in-vivo on mouse bones. A first step towards this ambitious goal is to evaluate the current state of the art to determine the required system parameters. In this work, we investigate the impact of different XRM settings on image quality. By changing various acquisition parameters such as exposure time, voltage, current and number of projections, we simulate the outcome of XRM scans while reducing the X-ray energy. We base our simulations on a high-resolution ex-vivo scan of a mouse tibia. The resulting reconstructions are evaluated qualitatively as well as quantitatively by calculating the contrast-to-noise ratio (CNR). We demonstrate that we can reach comparable image quality while reducing the total X-ray energy, which forms a foundation for the upcoming experiments.

Leonid Mill, Lasse Kling, Anika Grüneboom, Georg Schett, Silke Christiansen, Andreas Maier
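The contrast-to-noise ratio used for the quantitative evaluation can be computed, in one common definition (the paper's exact variant may differ), from a structure region and a background region:

```python
import numpy as np

def cnr(img, roi_mask, bg_mask):
    """Contrast-to-noise ratio: |mean(ROI) - mean(background)| divided
    by the background standard deviation."""
    img = np.asarray(img, float)
    contrast = abs(img[roi_mask].mean() - img[bg_mask].mean())
    return contrast / (img[bg_mask].std() + 1e-12)

# Toy image: a bright structure next to a noisy background.
img = np.array([[10.0, 10.0, 4.0],
                [10.0, 10.0, 6.0]])
roi = np.zeros_like(img, bool); roi[:, :2] = True   # bright structure
bg = ~roi                                            # noisy background
value = cnr(img, roi, bg)
```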

Abstract: Beamforming Sub-Sampled Raw Ultrasound Data with DeepFormer

Converting reflected sonic signals to an ultrasound image, known as beamforming, has traditionally been formulated mathematically via the simple process of delay and sum (DAS). Recent research has aimed to improve ultrasound beamforming via advanced mathematical models for increased contrast, resolution and speckle filtering. These formulations, such as minimum variance, add minor improvements over the current real-time, state-of-the-art DAS while requiring drastically increased computational time, which excludes them from widespread adoption.

Walter Simson, Magdalini Paschali, Guillaume Zahnd, Nassir Navab
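The delay-and-sum baseline mentioned above can be sketched with precomputed integer sample delays. A real beamformer derives fractional delays from array geometry and speed of sound, and typically applies apodization weights; both are omitted here:

```python
import numpy as np

def delay_and_sum(channel_data, delays_samples):
    """Classic delay-and-sum beamforming for one scan line: advance each
    element's RF signal by its computed delay, then sum across elements
    so echoes from the focal point add coherently."""
    channel_data = np.asarray(channel_data, float)   # (n_elements, n_samples)
    out = np.zeros(channel_data.shape[1])
    for ch, delay in zip(channel_data, delays_samples):
        out += np.roll(ch, -delay)   # advance the channel by its delay
    return out

# Two channels carrying the same echo, one arriving one sample later.
ch0 = np.array([0.0, 1.0, 0.0, 0.0])
ch1 = np.array([0.0, 0.0, 1.0, 0.0])
line = delay_and_sum(np.stack([ch0, ch1]), delays_samples=[0, 1])
```

After alignment the echo sums coherently to a single strong sample, which is the behavior the learned DeepFormer approach sets out to improve upon.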

Shape Sensing with Fiber Bragg Grating Sensors

A Realistic Model of Curvature Interpolation for Shape Reconstruction

Fiber optical sensors such as Fiber Bragg Gratings (FBGs) are increasingly used for shape sensing of medical instruments. Estimating the shape from measured wavelengths is difficult and relies on a long pipeline of calculations with many different sources of error. In this work we introduce a novel approach for more realistic interpolation of the curvature used in subsequently applied reconstruction algorithms. We demonstrate and compare our method with others based on simulations of different types of shapes. Furthermore, we evaluated our approach in a real-world experiment with measured FBG data.

Sonja Jäckle, Jan Strehlow, Stefan Heldmann

On the Characteristics of Helical 3D X-Ray Dark-Field Imaging

The X-ray dark-field signal can be measured with a grating interferometer. For oriented structures like fibers, the signal magnitude depends on the relative orientation between fiber and gratings. This allows the fiber orientations to be reconstructed analytically at a micrometer scale. However, there currently exists no implementation of a clinically feasible trajectory for recovering the full 3D orientation of a fiber. In principle, a helical trajectory can be suitable for this task. However, as a first step towards dark-field imaging in a helix, a careful analysis of the signal formation is required. Towards this goal, we study in this paper the impact of the grating orientation. We use a recently proposed 3D projection model and show that the projected dark-field scattering at a single volume point depends on the grating sensitivity direction and the helix geometry. More specifically, the dark-field signal on a 3D trajectory always consists of a linear combination of a constant and an angular-dependent component.

Lina Felsner, Shiyang Hu, Veronika Ludwig, Gisela Anton, Andreas Maier, Christian Riess

Effects of Tissue Material Properties on X-Ray Image, Scatter and Patient Dose

A Monte Carlo Simulation

With increasing patient and staff X-ray radiation awareness, many efforts have been made to develop accurate patient dose estimation methods. To date, Monte Carlo (MC) simulations are considered the gold standard for simulating the interaction of X-ray radiation with matter. However, the sensitivity of MC simulation results to variations in the experimental or clinical setup of image-guided interventional procedures has only been studied to a limited extent. In particular, the impact of patient material compositions is poorly investigated, mainly because these methods are commonly validated in phantom studies utilizing a single anthropomorphic phantom. In this study, we therefore investigate the impact of patient material parameter mappings on the outcome of MC X-ray dose simulations. A computational phantom geometry is constructed and three different commonly used material composition mappings are applied. We used the MC toolkit Geant4 to simulate X-ray radiation in an interventional setup and compared the differences in dose deposition, scatter distributions and resulting X-ray images. The evaluation shows discrepancies between different material composition mappings of up to 20% for directly irradiated organs. These results highlight the need for a standardization of material composition mapping for MC simulations in a clinical setup.

Philipp Roser, Annette Birkhold, Xia Zhong, Elizaveta Stepina, Markus Kowarschik, Rebecca Fahrig, Andreas Maier

Isocenter Determination from Projection Matrices of a C-Arm CBCT

An accurate position of the isocenter of a cone-beam CT trajectory is mandatory for accurate image reconstruction. Analytical backprojection algorithms assume that the X-ray source moves on a perfectly circular trajectory, which is not true for most practical clinical trajectories due to mechanical instabilities. Besides, the flexibility of novel robotic C-arm systems enables new trajectories for which the computation of the isocenter might not be straightforward. An inaccurate isocenter position directly affects the computation of the redundancy weights and consequently the reconstruction. In this work, we compare different methods for computing the isocenter of a non-ideal circular scan trajectory and evaluate their robustness in the presence of noise. The best results were achieved using a method based on a least-squares fit. Furthermore, we show that an inaccurate isocenter computation can lead to artifacts in the reconstruction result. This work therefore highlights the importance of an accurate isocenter computation against the background of novel upcoming clinical trajectories.

Ahmed Amri, Bastian Bier, Jennifer Maier, Andreas Maier
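One simple least-squares approach of the kind compared in the paper is the algebraic (Kåsa) circle fit to the in-plane source positions; whether this matches the paper's best-performing variant is an assumption, and the noise-free example positions are made up:

```python
import numpy as np

def fit_circle_center(points):
    """Algebraic least-squares circle fit: from (x-a)^2 + (y-b)^2 = r^2
    follows the linear model x^2 + y^2 = 2ax + 2by + c, which is solved
    for the center (a, b) via ordinary least squares."""
    p = np.asarray(points, float)
    A = np.column_stack([2 * p[:, 0], 2 * p[:, 1], np.ones(len(p))])
    b = (p**2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:2]   # (a, b) = estimated isocenter in the trajectory plane

# Source positions on a circle of radius 100 centered at (5, -3).
t = np.linspace(0, 2 * np.pi, 20, endpoint=False)
pts = np.stack([5 + 100 * np.cos(t), -3 + 100 * np.sin(t)], axis=1)
center = fit_circle_center(pts)
```

With noisy projection matrices, the same linear system is simply solved over all extracted source positions.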

Improving Surgical Training Phantoms by Hyperrealism: Deep Unpaired Image-to-Image Translation from Real Surgeries

Current ‘dry lab’ surgical phantom simulators are a valuable tool for surgeons which allows them to improve their dexterity and skill with surgical instruments. These phantoms mimic the haptic and shape of organs of interest, but lack a realistic visual appearance. In this work, we present an innovative application in which representations learned from real intraoperative endoscopic sequences are transferred to a surgical phantom scenario.

Sandy Engelhardt, Raffaele De Simone, Peter M. Full, Matthias Karck, Ivo Wolf

Evaluation of Spatial Perception in Virtual Reality within a Medical Context

This paper compares three different visualization techniques to improve spatial perception in virtual reality applications. In most virtual reality applications, spatial relations cannot be sufficiently estimated to make precise statements about the locations and positions of objects. Especially in the field of medical applications, it is crucial to correctly perceive the depth and structure of a given object. Thus, visualization techniques need to be developed to support the spatial perception. To address this, we carried out a user study to evaluate different visualization techniques and deal with the question of how glyphs influence spatial perception in a virtual reality application. Therefore, our evaluation compares arrow glyphs, heatmaps with isolines and pseudo-chromadepth in terms of improving the spatial perception within virtual reality. Based on the study results it can be concluded that spatial perception can be improved with the help of glyphs, which should motivate further research in this area.

Jan N. Hombeck, Nils Lichtenberg, Kai Lawonn

Simulation of Radiofrequency Ablations for Liver Puncture in 4D VR Simulations

Radiofrequency ablations play an important role in the therapy of malignant liver lesions. Navigating a needle to the lesion is a challenge both for the physician in training and for the interventionalist. It is therefore desirable to offer training and planning systems based on medical image data and on methods of visuo-haptic virtual reality simulation. This paper presents a method for simulating ablations at the needle tip within an existing VR simulator, applied after successful needle navigation to the lesion. An improved model was implemented in real time (CUDA) and evaluated, achieving highly performant, more robust and safer planning results than reported in the literature.

Niclas Kath, Heinz Handels, Andre Mastmeyer

Abstract: An SVR-Based Data-Driven Leaflet Modeling Approach for Personalized Aortic Valve Prosthesis Development

While the aortic valve geometry is highly patient-specific and studies indicate its strong influence on the circulation, state-of-the-art valve prostheses do not aim at reproducing this individual geometry. One challenge in manufacturing personalized prostheses is imaging the thin leaflets in their curved 3D shape, as well as mapping this shape to the planar 2D leaflet shape that is cut out of the fabrication material. Even in the gold-standard imaging modality (transesophageal ultrasound), the leaflets are barely visible.

Jannis Hagenah, Tizian Evers, Michael Scharfschwerdt, Achim Schweikard, Floris Ernst

Mitral Valve Quantification at a Glance

Flattening Patient-Specific Valve Geometry

Malfunctioning mitral valves can be restored through complex surgical interventions, which greatly benefit from intensive planning and pre-operative analysis based on echocardiography. Visualization techniques offer a way to enhance such preparation processes and can also facilitate post-operative evaluation. In this work we extend current research in this field, building upon patient-specific mitral valve segmentations represented as triangulated 3D surface models. We propose a 2D-map construction of these models, which provides physicians with a view of the whole surface at once. This allows assessment of the valve’s area and shape without the need for different viewing angles and scene interaction. Clinically highly relevant pathology indicators, such as coaptation zone areas or prolapsed regions, are color-coded on these maps, making it easier to fully comprehend the underlying pathology. Quality and effectiveness of the proposed methods were evaluated through a user survey conducted with domain experts. We assessed pathology detection accuracy using 3D valve models in comparison to the developed method. Classification accuracy increased by 2.8% across all tested valves and by 10.4% for prolapsed valves.

Pepe Eulzer, Nils Lichtenberg, Rawa Arif, Andreas Brcic, Matthias Karck, Kai Lawonn, Raffaele De Simone, Sandy Engelhardt

Fully-Deformable 3D Image Registration in Two Seconds

We present a highly parallel method for accurate and efficient variational deformable 3D image registration on a consumer-grade graphics processing unit (GPU). We build on recent matrix-free variational approaches and specialize the concepts to the massively-parallel manycore architecture provided by the GPU. Compared to a parallel and optimized CPU implementation, this allows us to achieve an average speedup of 32.53 on 986 real-world CT thorax-abdomen follow-up scans. At a resolution of approximately 256³ voxels, the average runtime is 1.99 seconds for the full registration. On the publicly available DIR-lab benchmark, our method ranks third with respect to average landmark error at an average runtime of 0.32 seconds.

Daniel Budelmann, Lars König, Nils Papenberg, Jan Lellmann

Abstract: Landmark-Free Initialization of Multi-Modal Image Registration

To achieve convergence, nonlinear deformable registration of partial-view 3D ultrasound and MRI, as often encountered in US-guided interventions or retrospective studies thereof, needs to be initialized. In clinical practice, corresponding 3D landmarks are selected in both images. Performing this depends on a geometrical understanding of the targeted anatomy and the modality-specific appearance, and is thus prone to error.

Julia Rackerseder, Maximilian Baust, Rüdiger Göbl, Nassir Navab, Christoph Hennersperger

Enhancing Label-Driven Deep Deformable Image Registration with Local Distance Metrics for State-of-the-Art Cardiac Motion Tracking

While deep learning has achieved significant advances in accuracy for medical image segmentation, its benefits for deformable image registration have so far remained limited to reduced computation times. Previous work has either focused on replacing the iterative optimization of distance and smoothness terms with CNN layers or on supervised approaches driven by labels. Our method is the first to combine the complementary strengths of global semantic information (represented by segmentation labels) and local distance metrics that help align surrounding structures. We demonstrate significantly higher Dice scores (86.5%) for deformable cardiac image registration compared to classic registration (79.0%) as well as label-driven deep learning frameworks (83.4%).

Alessa Hering, Sven Kuckertz, Stefan Heldmann, Mattias P. Heinrich

Respiratory Deformation Estimation in X-Ray-Guided IMRT Using a Bilinear Model

Driving a respiratory motion model in X-ray-guided radiotherapy can be challenging in treatments with continuous rotation such as VMAT, as data-driven respiratory signal extraction suffers from angular effects overlapping with respiratory changes in the projection images. Compared to a linear model trained on static acquisition angles, the bilinear model gains flexibility in handling multiple viewpoints at the cost of accuracy. In this paper, we evaluate both models in the context of serving as the surrogate input to a motion model. Evaluation is performed on 4D CTs of 20 patients in a leave-one-phase-out approach, yielding a median accuracy drop of only 0.14 mm in the 3D error of the estimated vector fields for the bilinear model compared to the linear one.

Tobias Geimer, Stefan B. Ploner, Paul Keall, Christoph Bert, Andreas Maier

Augmented Mitotic Cell Count Using Field of Interest Proposal

Histopathological prognostication of neoplasia, including most tumor grading systems, is based upon a number of criteria. Probably the most important is the number of mitotic figures, most commonly determined as the mitotic count (MC), i.e. the number of mitotic figures within 10 consecutive high-power fields. Often the area with the highest mitotic activity is to be selected for the MC. However, since mitotic activity is not known in advance, an arbitrary choice of this region is considered one important cause of high variability in prognostication and grading. In this work, we present an algorithmic approach that first calculates a mitotic cell map using a deep convolutional network. In a second step, this map is used to construct a mitotic activity estimate. Lastly, we select the image segment covering the size of ten high-power fields with the overall highest mitotic activity as a region proposal for expert MC determination. We evaluate the approach using a dataset of 32 completely annotated whole-slide images, of which 22 were used for training the network and 10 for testing. We find a correlation of r = 0.936 for the mitotic count estimate.
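The final region-proposal step can be sketched as a maximum-sum window search over the activity map (a minimal illustration assuming the map is a 2D array; the integral-image implementation and function name are choices of this sketch, not necessarily the authors' method):

```python
import numpy as np

def propose_mc_region(activity_map, win_h, win_w):
    """Propose the window (e.g. sized to ten high-power fields) with
    the highest summed mitotic activity, using an integral image.

    activity_map: 2D array of per-pixel mitotic activity estimates.
    Returns ((row, col) of the window's top-left corner, window sum).
    """
    # Integral image, zero-padded on top/left for easy window sums.
    ii = np.zeros((activity_map.shape[0] + 1, activity_map.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(activity_map, axis=0), axis=1)
    # Sum of every win_h x win_w window in one vectorized expression.
    sums = (ii[win_h:, win_w:] - ii[:-win_h, win_w:]
            - ii[win_h:, :-win_w] + ii[:-win_h, :-win_w])
    r, c = np.unravel_index(np.argmax(sums), sums.shape)
    return (int(r), int(c)), sums[r, c]
```

The integral image makes the search linear in the number of pixels, so the proposal scales to whole-slide images downsampled to an activity map.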

Marc Aubreville, Christof A. Bertram, Robert Klopfleisch, Andreas Maier

Feasibility of Colon Cancer Detection in Confocal Laser Microscopy Images Using Convolutional Neural Networks

Histological evaluation of tissue samples is a typical approach to identify colorectal cancer metastases in the peritoneum. For immediate assessment, reliable, real-time in-vivo imaging would be required. For example, intraoperative confocal laser microscopy has been shown to be suitable for distinguishing organs as well as malignant and benign tissue. So far, the analysis is done by human experts. We investigate the feasibility of automatic colon cancer classification from confocal laser microscopy images using deep learning models. We overcome very small dataset sizes through transfer learning with state-of-the-art architectures. We achieve an accuracy of 89.1% for cancer detection in the peritoneum, which indicates viability as an intraoperative decision support system.

Nils Gessert, Lukas Wittig, Daniel Drömann, Tobias Keck, Alexander Schlaefer, David B. Ellebrecht

Efficient Construction of Geometric Nerve Fiber Models for Simulation with 3D-PLI

Three-dimensional (3D) polarized light imaging (PLI) is a unique technique used to reconstruct nerve fiber orientations of postmortem brains at ultra-high resolution. Simulations are a powerful means of continuously improving the current physical model of 3D-PLI. Since creating simulated data can be time-consuming, we developed a tool that enables fast and efficient creation of synthetic fiber data using parametric functions and interpolation methods. Performance tests showed that every component of the program scales linearly with the number of fiber points, while the reconstructed fiber cup phantom and optic chiasm-like crossing fiber models reproduce effects known from 3D-PLI measurements.
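A minimal sketch of generating synthetic fiber data from a parametric function and densifying sparse control points by interpolation (illustrative only; the tool's actual API and interpolation scheme are not specified in the abstract, and the helix is an assumed example curve):

```python
import numpy as np

def densify_fiber(control_points, points_per_segment=10):
    """Turn a sparse set of 3D control points into a dense fiber by
    piecewise-linear interpolation along the arc parameter.

    control_points: (M, 3) array.
    Returns a ((M - 1) * points_per_segment + 1, 3) array.
    """
    cp = np.asarray(control_points, dtype=float)
    t = np.arange(len(cp))  # one parameter value per control point
    t_dense = np.linspace(0, len(cp) - 1,
                          (len(cp) - 1) * points_per_segment + 1)
    return np.column_stack(
        [np.interp(t_dense, t, cp[:, d]) for d in range(3)])

def helix_fiber(n_points=50, radius=1.0, pitch=0.2):
    """Example parametric fiber: a helix sampled from a closed-form curve."""
    s = np.linspace(0, 4 * np.pi, n_points)
    return np.column_stack(
        [radius * np.cos(s), radius * np.sin(s), pitch * s])
```

Both steps are linear in the number of fiber points, consistent with the scaling behavior the abstract reports.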

Jan A. Reuter, Felix Matuschke, Nicole Schubert, Markus Axer

Resource-Efficient Nanoparticle Classification Using Frequency Domain Analysis

We present a method for resource-efficient classification of nanoparticles, such as viruses, in liquid or gas samples by analyzing Surface Plasmon Resonance (SPR) images using frequency-domain features. The SPR images are obtained with the Plasmon Assisted Microscopy Of Nano-sized Objects (PAMONO) biosensor, which was developed as a mobile virus and particle detector. Convolutional neural network (CNN) solutions are available for the given task, but since the mobility of the sensor is an important factor, we provide a faster and less resource-demanding alternative for use in a small virus detection device. The execution time of our approach, which can be optimized further using low-power hardware such as a digital signal processor (DSP), is at least 2.6 times faster than the current CNN solution while sacrificing only 1 to 2.5 percentage points in accuracy.
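As an illustration of the kind of frequency-domain features such a lightweight classifier could use (a sketch assuming radially binned FFT magnitudes; the approach's actual feature set is not specified in the abstract):

```python
import numpy as np

def frequency_features(patch, n_bins=8):
    """Radially binned magnitude spectrum of an image patch as a
    compact feature vector for a lightweight classifier.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
    h, w = patch.shape
    yy, xx = np.mgrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    # Assign each pixel to one of n_bins radial frequency bands.
    bins = np.minimum((radius / radius.max() * n_bins).astype(int),
                      n_bins - 1)
    # Mean spectral energy per band, normalized for intensity invariance.
    feats = np.array([spectrum[bins == b].mean() for b in range(n_bins)])
    return feats / (feats.sum() + 1e-12)
```

An FFT plus a handful of bin averages runs comfortably on embedded hardware, which is the kind of trade-off the abstract argues for against a full CNN.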

Mikail Yayla, Anas Toma, Jan Eric Lenssen, Victoria Shpacovitch, Kuan-Hsun Chen, Frank Weichert, Jian-Jia Chen

Black-Box Hyperparameter Optimization for Nuclei Segmentation in Prostate Tissue Images

Segmentation of cell nuclei is essential for analyzing high-content histological screens. Often, parameters of automatic approaches need to be optimized, which is tedious and difficult to perform manually. We propose a novel hyperparameter optimization framework which formulates optimization as a combination of candidate sampling and an optimization strategy. We present a clustering-based and a deep-neural-network-based pipeline for nuclei segmentation, whose parameters are optimized using state-of-the-art optimizers as well as a novel optimizer. The pipelines were applied to challenging prostate cancer tissue images. We performed a quantitative evaluation using 28,388 parameter settings. It turned out that the deep neural network outperforms the clustering-based pipeline, while the results for different optimizers vary only slightly.
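The formulation of optimization as candidate sampling plus an optimization strategy can be sketched with the simplest such strategy, uniform random search (illustrative only; the optimizers evaluated in the paper are more sophisticated, and all names here are assumptions of this sketch):

```python
import random

def random_search(objective, param_space, n_trials=50, seed=0):
    """Minimal black-box hyperparameter optimizer: sample candidate
    settings uniformly from the space and keep the best score.

    param_space: dict mapping parameter name -> list of candidates.
    objective:   callable(params) -> score, higher is better
                 (e.g. an overlap measure of the resulting segmentation).
    """
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        # Candidate sampling: draw one value per parameter.
        params = {k: rng.choice(v) for k, v in param_space.items()}
        # Optimization strategy: greedy keep-the-best.
        score = objective(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

Swapping the sampling rule or the keep-the-best strategy for a model-based one changes the optimizer without touching the segmentation pipeline, which is the point of treating it as a black box.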

Thomas Wollmann, Patrick Bernhard, Manuel Gunkel, Delia M. Braun, Jan Meiners, Ronald Simon, Guido Sauter, Holger Erfle, Karsten Rippe, Karl Rohr

