
2016 | Book

Bildverarbeitung für die Medizin 2016

Algorithmen – Systeme – Anwendungen

Edited by: Thomas Tolxdorff, Thomas M. Deserno, Heinz Handels, Hans-Peter Meinzer

Publisher: Springer Berlin Heidelberg

Book series: Informatik aktuell


About this book

In recent years the workshop "Bildverarbeitung für die Medizin" has established itself through a series of successful events. The goal in 2016 is once again to present current research results and to deepen the dialogue between scientists, industry, and users. The contributions in this volume (some of them in English) cover all areas of medical image processing, in particular image formation and acquisition, molecular imaging, visualization and animation, anatomical atlases, patient-specific simulation and planning, biomechanical modeling, image processing in telemedicine, image-guided robots, surgical simulators, and much more.

Table of contents

Frontmatter
Significant Advances in Medical Image Analysis

In the past 4 years Deep Learning (DL) has re-entered the computer vision scene dramatically, completely shifting the design paradigm compared to the preceding 20 years. Whereas error rates in image analysis had previously been more or less stagnant, since 2012 DL has kept halving them each year, in some recent cases even achieving super-human performance! All typical tasks such as classification, detection, and segmentation benefited across all related applications such as traffic sign recognition, natural image analysis, and automatic captioning. These developments move computer vision from a scientific playground to a productizable technology.

Stefan Bordag
Wie verändert unsere Community die technische Ausstattung im OP

At the Charité – Universitätsmedizin Berlin, a total of 26 new operating rooms have been designed, built, and equipped since 2014. Under the entry "Operationssaal", Wikipedia shows one of them, the "Robotik-OP".

Erwin Keeve
Tomoelastography by Multifrequency Wave Number Recovery

In elastography, mechanically excited shear waves are captured by medical ultrasound or MRI to reconstruct the elastic parameters of the underlying tissue. Current inversion algorithms use second-order derivatives for elasticity reconstruction, which limits the spatial resolution of the elastic parameter maps. Here we propose a noise-stable inversion method that relies on reconstructing the wave number k at different harmonic frequencies, followed by amplitude-weighted averaging prior to inversion. The algorithm is tested on abdominal and pelvic data. Thanks to its robustness to noise at pixel-wise resolution, the resulting shear wave speed maps reveal anatomical details in the elastic parameter maps and provide superior detail compared to current MRE inversion methods.

Heiko Tzschätzsch, Jing Guo, Florian Dittmann, Jürgen Braun, Ingolf Sack
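To make the averaging step concrete, the following minimal numpy sketch computes an amplitude-weighted, pixel-wise average of per-frequency wave-number maps. Array names and shapes are assumptions for illustration, not the authors' implementation.

    import numpy as np

    def weighted_wavenumber(k_maps, amp_maps):
        """Amplitude-weighted, pixel-wise average of wave-number maps.

        k_maps, amp_maps: arrays of shape (n_frequencies, ny, nx) holding the wave
        numbers recovered at each harmonic frequency and the corresponding wave
        amplitudes used as weights (illustrative names, not taken from the paper).
        """
        eps = 1e-12  # avoid division by zero where all amplitudes vanish
        return (amp_maps * k_maps).sum(axis=0) / (amp_maps.sum(axis=0) + eps)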
Combined Background Field Removal and Reconstruction for Quantitative Susceptibility Mapping

Quantitative Susceptibility Mapping (QSM) is an emerging Magnetic Resonance Imaging (MRI) technique that provides in-vivo measurements of the magnetic susceptibility of, e.g., brain tissue. In practice, QSM requires solving a series of challenging inverse problems. Here we address two important steps: first, the removal of the background field, which is caused by sources outside the Region of Interest (ROI), and second, the reconstruction of dipole sources. Both problems have received attention in the recent past; however, despite their strong interdependence, each of them has been treated separately. We propose a new method that exploits synergy effects by combining both steps. We demonstrate with numerical experiments that a combined treatment provides a better reconstruction of dipole sources close to the boundary of the ROI.

Maximilian März, Lars Ruthotto
Identifying Intracortical Partial Voluming Effects Using Cortical Surface Normals in Quantitative MRI T1 Maps Sensitive to Microstructure

Partial voluming is a known problem in medical imaging. It occurs at high-contrast edges between different structures. Image segmentation methods can account for this type of effect to a certain degree, but residual intensity effects remain in structures located further away from the affected edges. This paper focuses on partial volume (PV) effects at the interface between gray matter and cerebrospinal fluid in the brain when imaged using high-field magnetic resonance imaging (MRI). Here, we use cortical surface normals to spatially locate PV-induced effects on MRI measures of cortical microstructure. We demonstrate that, especially in very narrow sulcal banks of the cortex, PV effects significantly influence cortical MRI intensities, but only up to a certain cortical depth. The method makes it possible to investigate and better understand PV effects in the cortex and to identify locations in which the MRI intensities can be fully trusted. The results may prove useful for future PV correction methods in brain segmentation. Cortical area studies may also benefit from better PV estimation in order to yield more precise and reliable results on microstructure and areal extent.

Juliane Dinse, Andreas Schäfer, Pierre-Louis Bazin, Nikolaus Weiskopf
Subpixelgenaue Positionsbestimmung in Magnetic-Particle-Imaging

The tomographic imaging technique Magnetic Particle Imaging (MPI) offers a high temporal resolution in the low millisecond range. For the navigation of labeled catheters, however, its spatial resolution is too low. In this work we show that sub-millimeter position determination is possible even though the acquired data have a lower resolution. To this end, the low-resolution MPI data are pre-processed and the position of a sample is determined via the center of mass of the gray values. Based on measured data, statistical errors and systematic deviations of the method are estimated.

Martin Hofmann, Kevin Bizon, Alexander Schlaefer, Tobias Knopp
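The gray-value centroid mentioned in the abstract can be computed in a few lines; this sketch assumes a reconstructed MPI volume with given voxel spacing and is only meant to illustrate the idea, not to reproduce the paper's pre-processing.

    import numpy as np

    def center_of_mass(volume, spacing=(1.0, 1.0, 1.0)):
        """Sub-voxel position estimate as the gray-value centroid of an MPI volume."""
        weights = np.clip(volume, 0, None).ravel()       # suppress negative values
        grid = np.indices(volume.shape).reshape(volume.ndim, -1)
        centroid_voxels = (grid * weights).sum(axis=1) / weights.sum()
        return centroid_voxels * np.asarray(spacing)     # convert to physical units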
Improved Semi-Automatic Basket Catheter Reconstruction from two X-Ray Views

Ablation guided by focal impulse and rotor mapping (FIRM) is an alternative treatment for atrial fibrillation, in particular for persistent atrial fibrillation. To this end, a basket catheter comprising 64 electrodes is inserted into both the right and the left atrium under X-ray guidance to locate electric anomalies. The 3D positions of the electrodes are needed to determine the ablation region. We propose an improved model-based method for 3D reconstruction of this basket catheter based on two X-ray views. In our experiments, we found that the proposed approach outperformed our previous approach. The median error of the proposed method is 1.5 mm for phantom data and 2.6 mm for clinical data. We also introduce a novel error metric, the overlap rate between the ground truth ablation region and the reconstructed ablation region. The overall mean overlap rate under optimized viewing angle conditions is 84 ± 14% for phantom data and 72 ± 16% for clinical data.

Xia Zhong, Matthias Hoffmann, Norbert Strobel, Andreas Maier
A Greedy Completion Algorithm for Retrieving Fuzzy Fine Structures
Application to Aortic Lumina Separation in CTA Data

In this contribution we tackle the challenge of extracting fuzzy, curvilinear fine structures from medical image data. The dissection membrane represents a highly tortuous fine structure within the aortas of dissection patients. Due to its variability in topology and morphology, extraction based on assumed shape priors is bound to fail. Based on the response of a 3D/2D phase congruency filter, we select a segment of high significance in order to remove false positives within each CTA slice. Multi-criterial, greedy tracking serves for membrane completion, while an inter-slice grouping algorithm detects global outliers. Erroneous slice results are replaced using sampled membrane segments from adjacent slices. Our proposed algorithm not only improves the membrane segmentation by up to 32% compared to stand-alone usage of local phase features, but also enables separation of the true and false lumen.

Cosmin Adrian Morariu, Stephan Benjamin Huckfeldt, Daniel Sebastian Dohle, Konstantinos Tsagakis, Josef Pauli
Breast Density Assessment Using Wavelet Features on Mammograms

Breast density ranges from almost entirely fatty to extremely dense tissue composition. In mammography screening, physicians are often supported by computer-aided detection and diagnosis (CAD) systems whose detection rate is affected by the density of the breast. An automatic pre-assessment of breast density would enable a specific analysis adapted to each density class. Digital mammograms from the INbreast database [1] are decomposed into Haar wavelet components and several levels are used for classification. A random forest classifier applied to the averaged wavelet components for four density classes yields an accuracy of 64.53% in CC view and 51.22% in MLO view. The 3-class problem with a combined class of medium densities yields an accuracy of 73.89% in CC view and 67.80% in MLO view.

Frank Schebesch, Mathias Unberath, Ingwer Andersen, Andreas Maier
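A hedged sketch of the pipeline described above, using pywt for the Haar decomposition and scikit-learn for the random forest. The concrete statistic averaged per sub-band (mean absolute coefficient) is an assumption made here for illustration, not the feature definition from the paper.

    import numpy as np
    import pywt
    from sklearn.ensemble import RandomForestClassifier

    def haar_features(mammogram, levels=3):
        """One averaged value per Haar detail sub-band and decomposition level."""
        coeffs = pywt.wavedec2(mammogram.astype(float), 'haar', level=levels)
        feats = []
        for details in coeffs[1:]:                  # skip the approximation band
            feats.extend(np.mean(np.abs(band)) for band in details)
        return np.asarray(feats)

    # X: one feature vector per mammogram, y: density class label (e.g. 1-4)
    # clf = RandomForestClassifier(n_estimators=100).fit(X, y)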
Comparison of Post-Hoc Normalization Approaches for CT-based Lung Emphysema Index Quantification

CT-based lung emphysema quantification is sensitive to the kernel used for image reconstruction. In this paper, we present and evaluate three methods for normalizing CT images reconstructed with different kernels in order to homogenize image sharpness and pixel noise. For 56 subjects, chest CT images reconstructed with a soft kernel (B20f) and a medium-sharp kernel (B50f) were acquired. Normalization of the B50f images was performed using a Laplacian frequency decomposition, an edge-preserving frequency decomposition, and a heuristic filter-based method. To compare the normalization methods, emphysema indices (EIs) were computed from the normalized images and compared to the baseline EIs computed from the B20f images. Further, volume overlaps of the detected emphysema regions were computed. Average differences in EI between kernels decreased for all normalization methods. Laplacian and edge-preserving frequency normalization show a similar agreement with the baseline EI; however, emphysema regions detected after Laplacian frequency normalization show degraded volume overlaps, likely caused by haloing artifacts in the normalized images. Overall, the edge-preserving frequency decomposition shows the best normalization performance, albeit at high computational cost.

Jan Ehrhardt, Fabian Jacob, Heinz Handels, Alex Frydrychowicz
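For readers unfamiliar with the emphysema index: it is commonly defined as the percentage of lung voxels below a fixed attenuation threshold. The abstract does not state which cut-off was used, so the value in this minimal sketch is only a typical choice, not the paper's setting.

    import numpy as np

    def emphysema_index(hu_volume, lung_mask, threshold_hu=-950):
        """Percentage of lung voxels below the attenuation threshold (in HU)."""
        lung_voxels = hu_volume[lung_mask]
        return 100.0 * np.mean(lung_voxels < threshold_hu)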
Nicht-modellbasierte Kalibrierung von Kameras mit Monitoren

Using cameras as measuring devices for medical applications requires their precise calibration. Common methods model the imaging properties of a camera by means of a perspective projection and parameterized functions describing lens distortion. In the peripheral regions of the camera image these models are often insufficient. In addition, the use of rigid calibration patterns typically yields only a small number of non-uniformly distributed point correspondences for determining the model parameters. In this work, a completely new, model-free calibration method is presented in which every camera pixel is calibrated independently of every other.

Harald Hoppe, Fabian Seebacher, Martin Klemm
Statistische 3D-Formmodelle mit verteilter Erscheinungsmodellierung
Segmentierung per Abstimmung

The segmentation capability of 3D statistical shape models is considerably limited by a restricted, typically unidirectional landmark search and by weak learning on locally confined image information. We present an extension of 3D statistical shape models by a distributed appearance modeling based on randomized regression forests. Starting from different image positions, these forests estimate the location of the sought surface landmarks and thereby make more holistic use of distributed image information. Finally, all estimates are combined in a voting procedure into a robust position estimate. Following successful applications to multi-organ segmentation in previous work, we demonstrate the segmentation capability of the method on particularly challenging data. For this purpose, we segment the left ventricle of the heart in 32 transesophageal 3D ultrasound datasets. Results are evaluated quantitatively in 4-fold cross-validation and compared with published results of highly specialized methods. Without additional parameter tuning, image pre-processing, or model initialization, the results achieved are in the range of those specialized methods.

Tobias Norajitra, Sandy Engelhardt, Thomas Held, Sameer Al-Maisary, Raffaele de Simone, Hans-Peter Meinzer, Klaus Maier-Hein
Fallspezifisches Lernen zur automatischen Läsionssegmentierung in multimodalen MR-Bildern

Medical image data exhibit a high diversity due to inter- and intra-scanner variability, protocol parameters, and patient-specific appearance of physiology and pathology. If a single classifier has to account for all of these variations, its accuracy will suffer considerably. For the segmentation of lesions in multimodal MR images we therefore propose to train a dedicated classifier for each new image. To this end, we estimate which of the training data are best suited to segment the given case, and then train the case-specific classifier ad hoc on these data. We evaluate our method on the data of the international ISLES challenge and show that a clear improvement in segmentation quality is achieved.

Michael Götz, Christoph Kolb, Christian Weber, Sebastian Regnery, Klaus H. Maier-Hein
Comparative Evaluation of Interactive Segmentation Approaches

Image segmentation is a key technique in image processing with the goal of extracting important objects from an image. This evaluation study focuses on the segmentation quality of three different interactive segmentation techniques, namely Region Growing (RG), Watershed (WS), and the cellular automaton based GrowCut (GC) algorithm. Three evaluation measures are computed to compare the segmentation quality of each algorithm: Rand Index (RI), Mutual Information (MI), and the Dice coefficient (D). For the images in the publicly available ground truth database used for the evaluation, the GrowCut method has a slight advantage over the other two. The presented results provide insight into the performance and the characteristics of each tested algorithm with respect to image quality.

Mario Amrehn, Jens Glasbrenner, Stefan Steidl, Andreas Maier
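Of the three evaluation measures, the Dice coefficient is the most common; a minimal reference implementation is sketched below (the Rand index and mutual information can be computed, e.g., with scikit-learn's rand_score and mutual_info_score on the flattened label arrays).

    import numpy as np

    def dice_coefficient(segmentation, reference):
        """Dice overlap D between a binary segmentation and a reference mask."""
        seg, ref = segmentation.astype(bool), reference.astype(bool)
        intersection = np.logical_and(seg, ref).sum()
        denom = seg.sum() + ref.sum()
        return 2.0 * intersection / denom if denom > 0 else 1.0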
Robuste intraoperative Registrierung mit fluoreszierenden Markern für die computergestützte Laparoskopie

Laparoscopic interventions require precise navigation of surgical instruments while taking risk structures into account. Although numerous concepts exist for overlaying anatomical details based on intraoperative registration methods, clinical translation has so far failed due to a lack of robustness and the effort required to integrate these methods into the clinical workflow. In this contribution we present a novel approach to robust intraoperative data fusion based on fluorescent markers. In an in vitro pilot study we show that, in contrast to conventional needle markers, the new markers can be localized and tracked even in the presence of smoke, blood, or tissue fragments in the field of view of the laparoscopic camera. This enables robust registration of 3D image data with the current patient anatomy. Owing to its easy integration into the medical workflow, the potential of the new approach is high.

Esther Wild, Dogu Teber, Daniel Schmid, Tobias Simpfendörfer, Michael Müller, Hannes Kenngott, Lena Maier-Hein
Curve-to-Image Based Non-Rigid Registration of Digital Photos and Quantitative Light-Induced Fluorescence Images in Dentistry

Decalcification is an undesirable effect that can arise during orthodontic treatment. In digital photographs, it appears as white spot lesions, i.e. white spots on the tooth surface. To assess the extent of demineralization in a tooth, quantitative light-induced fluorescence (QLF) is used. We propose a method to match digital photographs and QLF images of decalcified teeth based on the idea of curve-to-image matching. It extracts a curve representing the shape of the tooth from the QLF image and aligns it to the photo. The registration problem is formulated as a minimization problem whose objective functional consists of a data term and a higher-order, linear elastic prior for the deformation. The data term is constructed using the signed distance function of the tooth region shown in the photo, which is determined in a pre-processing step by classifying the photo into tooth and non-tooth regions. The resulting minimization problem is reformulated as a nonlinear least-squares problem and solved numerically using the Gauss-Newton method. The evaluation is based on 150 image pairs captured from 32 patients. The correctness of the matching is confirmed by visual inspection by dental experts, and the alignment improvement is quantified using mutual information. The curve-to-image matching idea can be extended to surface-to-voxel tasks.

Benjamin Berkels, Thomas M. Deserno, Eva E. Ehrlich, Ulrike B. Fritz, Ekaterina Sirazitdinova, Rosalia Tatano
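As a rough illustration of the data term described above, the following sketch builds a signed distance function from the classified tooth region and evaluates it at the (deformed) curve points. It is a simplified stand-in with assumed names, not the paper's exact functional.

    import numpy as np
    from scipy.ndimage import distance_transform_edt, map_coordinates

    def signed_distance(tooth_mask):
        """Signed distance to the tooth boundary: negative inside, positive outside."""
        return distance_transform_edt(~tooth_mask) - distance_transform_edt(tooth_mask)

    def data_term(curve_points, sdf):
        """Sum of squared signed distances at the curve points (N x 2, row/col order)."""
        values = map_coordinates(sdf, curve_points.T, order=1)
        return float(np.sum(values ** 2))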
Registration of Atrium Models to C-arm X-ray Images Based on Devices Inside the Coronary Sinus and the Esophagus

For augmented fluoroscopy in the context of minimally invasive EP procedures, a patient-specific model of the atrium segmented from a 3D volume can be overlaid on fluoroscopic images. This requires a registration between the 3D model coordinate system and the coordinate system of the C-arm X-ray device. We propose an indirect registration that makes use of surrounding anatomical structures that can be segmented both in the 3D volume obtained by CT or MRI and in fluoroscopic images. More precisely, the coronary sinus and the esophagus segmented from the 3D volume are registered to reconstructed 3D devices located in the respective structures during the intervention. An evaluation on 6 images of 6 different patients yielded a mean registration error of 3.2 mm. Results became significantly worse if only one of the anatomical structures was used.

Matthias Hoffmann, Norbert Strobel, Andreas Maier
Reduction of Metal Artifacts Using a New Segmentation Approach
Extension of Graph Cuts for a More Precise Segmentation Used in Metal Artifact Reduction

Metal artifact reduction (MAR) is crucial for diagnostic value, as metal artifacts tremendously impair the image quality of a CT scan. Existing techniques are time-consuming. Most MAR methods contain a metal object segmentation step, and the resulting image quality highly depends on the validity of that segmentation. However, segmenting the metal parts correctly still poses a non-trivial problem. We present a novel automatic, object-independent segmentation approach that starts from a state-of-the-art segmentation. This is improved by applying a graph cut to every projection. We extend the graph cut idea with additional information, applying knowledge about the distance, a classification probability, and a bias to the edges, as well as a similarity measure of pixels to their direct neighbors. By additionally considering global consistency, we obtain a more precise segmentation result. For the evaluation, our new segmentation approach was combined with the frequency split MAR (FSMAR). The resulting CT images yielded higher image quality compared with the standard threshold-based FSMAR.

Nadine Kuhnert, Nicole Maass, Karl Barth, Andreas Maier
Basic Statistics of SIFT Features for Texture Analysis

This paper presents an evaluation of two methods using the Scale Invariant Feature Transform (SIFT) for texture analysis, in order to classify malaria parasites versus leukocytes in thick blood smears. In the first approach, the texture is represented by basic statistics of the SIFT features, such as the mean value and standard deviation. The second technique is a vector-quantization approach called bag of visual words, which uses k-means clustering to create one feature vector per image. Here, the number of clusters is the crucial variable, which is analyzed in this contribution. Furthermore, we compare two different ways of choosing the SIFT keypoints: either the default SIFT keypoint detector is used, or a dense grid of equally distributed points in the image determines the keypoint locations. The results show that grid keypoints yield the best performance, and that classification is generally possible with both SIFT-based methods.

Daniel Erpenbeck, Tobias Bergen, Thomas Wittenberg, Egbert Tannich, Christine Wegner, Christian Münzenmayer, Michaela Benz
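Both texture representations can be sketched with OpenCV and scikit-learn. The code below assumes detector-based keypoints (the grid variant would instead pass a list of cv2.KeyPoint objects to sift.compute) and is illustrative rather than the authors' implementation; it also assumes the k-means model was fit beforehand on descriptors from the training images.

    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    sift = cv2.SIFT_create()

    def sift_basic_stats(gray):
        """Variant 1: texture as mean and standard deviation of all SIFT descriptors."""
        _, desc = sift.detectAndCompute(gray, None)   # assumes at least one keypoint
        return np.concatenate([desc.mean(axis=0), desc.std(axis=0)])

    def bovw_histogram(gray, kmeans):
        """Variant 2: bag of visual words; one k-bin histogram per image."""
        _, desc = sift.detectAndCompute(gray, None)
        words = kmeans.predict(desc.astype(np.float64))
        return np.bincount(words, minlength=kmeans.n_clusters).astype(float)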
Automatic Finger Joint Detection for Volumetric Hand Imaging

We propose a fully automatic method for robust finger joint detection in T1-weighted magnetic resonance imaging (MRI) sequences for the initialization of statistical shape model (SSM) based segmentation. The method is robust and relies on only a few training samples. To this end, a parallel-beam forward projection is calculated on the MRI volume. A trained bagging classifier detects the joints in 2D, which are then splatted into the 3D volume. For evaluation, leave-one-out cross-validation was performed. The detection of the joints in 2D yielded a Dice score of 0.67 ± 0.056 with respect to a manually obtained ground truth. For the initialization of SSM-based segmentation algorithms, these results are very promising.

Johannes Bopp, Mathias Unberath, Stefan Steidl, Rebecca Fahrig, Isabelle Oliveira, Arnd Kleyer, Andreas Maier
Feature Selection Framework for White Matter Fiber Clustering Based on Normalized Cuts

Due to its ability to automatically identify spatially and functionally related white matter fiber bundles, fiber clustering has the potential to improve our understanding of white matter anatomy. The normalized cuts (NCut) criterion has proven to be a suitable method for clustering fiber tracts. In this work, we show that the NCut value can be used for unsupervised feature selection as a measure of clustering quality. We further present a method for improving feature selection by penalizing spatially implausible clustering results, which is achieved by employing the Silhouette index on a fixed set of geometric features.

Simon Koppers, Christoph Hebisch, Dorit Merhof
Automated Heart Localization in Cardiac Cine MR Data

Cardiac MRI is the modality of choice in cardiology for the assessment of ventricular function, since the heart's anatomy is visualized with high resolution. This functional assessment is a time-consuming task for the cardiac radiologist when performed manually. Therefore, computer-driven diagnostic solutions are of particular importance for clinical applications. In order to ensure the success of such computer-aided diagnosis algorithms, however, a correct initial localization of the heart region in the raw data is crucial. For this purpose, we present a novel, simple, and fully automated approach for localizing the heart region in cardiac cine MR data. Without the need for prior knowledge or training datasets, this method provides a ready-to-use application for robust localization. This processing step is a fundamental component for the development of integrated automated applications.

Roxana Hoffmann, Franziska Bertelshofer, Christian Siegl, Rolf Janka, Roberto Grosso, Günther Greiner
Combining Active Contours and Active Shapes for Segmentation of Fluorescently Stained Cells
Application to Virology

Fluorescence microscopy is an essential tool to examine host-pathogen interactions such as the influence of Fascin on cell-cell contacts between infected and uninfected cells. Manual analysis of fluorescence microscopy images is prone to errors, leading to inter- and intra-observer variability. To increase reproducibility and objectivity, automated and semi-automated image processing methods are required. For a reliable segmentation of touching and overlapping cells, we propose an active contours algorithm extended by an energy term based on an active shape model. The algorithm is evaluated on confocal cell image data labeled by a human expert.

Veit Wiesmann, Christine Groß, Daniela Franz, Andrea K. Thoma-Kreß, Thomas Wittenberg
Geometrieplanung und Bildregistrierung mittels bimodaler Fiducial-Marker für Magnetic Particle Imaging

Magnetic Particle Imaging (MPI) is an imaging technique that enables the visualization of the spatial distribution of superparamagnetic nanoparticles. In contrast to other imaging modalities, MPI provides no morphological information, which complicates both the positioning of the imaged object in the center of the magnetic field and the interpretation of MPI images. Therefore, data from a morphological imaging modality are additionally acquired and overlaid with the MPI data. To automate the image registration of the two modalities, bimodal fiducial markers are developed in this work. In MPI, the markers additionally enable precise positioning.

Franziska Werner, Caroline Jung, Martin Hofmann, Johannes Salamon, Rene Werner, Dennis Säring, Michael G. Kaul, Kolja Them, Oliver M. Weber, Tobias Mummert, Gerhard Adam, Harald Ittrich, Tobias Knopp
Partially Rigid 3D Registration of Flexible Tissues in High Resolution Anatomical MRI

This contribution introduces a method for partially rigid 3D registration of high-resolution magnetic resonance (MR) images of the eye globes (EG) and the optic nerve sheaths (ONS) based on the reconstruction of their 3D models. Conventional registration methods do not preserve anatomical structures in a way that would allow quantitative anatomical comparisons. Therefore, the iterative closest point (ICP) registration method has been extended to enable partially rigid registration (PICP) of flexible tissue structures within spatially limited areas. The results of the proposed approach are compared with the non-linear registration method ART. It is shown that the PICP approach considerably improves the matching quality of local tissue while preserving anatomical structure.

Stepan Pazekha, Darius Gerlach, Uwe Mittag, Rainer Herpers
Comparison of Rigid Gradient-Based 2D/3D Registration Using Projection and Back-Projection Strategies

In this paper, a comparison of single-view gradient-based 2D/3D rigid registration methods is presented. To achieve dimensional correspondence between the images, projection and back-projection strategies have been proposed in the literature. Two similarity measures that are applicable to both strategies are included in the comparison. Extensions of the similarity measures are proposed and compared to the original proposals. It is demonstrated that the projection strategy achieves a median accuracy of up to 0.8 mm, which outperforms the back-projection strategy with a median accuracy of up to 1.1 mm. Our extension of the covariance-based similarity measure in combination with the back-projection strategy achieves the highest convergence range (up to 34.0 mm), while the maximum convergence range achieved with the projection strategy is 31.3 mm.

Roman Schaffert, Jian Wang, Anja Borsdorf, Joachim Hornegger, Andreas Maier
Morphing Image Masks for Stacked Histological Sections Using Laplace’s Equation

This study introduces a semi-automatic method to segment brain tissue from background in stacks of registered 2D images collected during histological sectioning. It is designed for setups where automatic segmentation algorithms often fail. It facilitates a manual process by providing an efficient interpolation between image masks, thus requiring only a subset of images to be manually segmented. Assuming that the images are already correctly registered to one another, interpolation is done by morphing between existing masks based on Laplace's equation, derived from a well-established model for mapping cortical thickness. We applied the proposed method successfully to segment whole brain image stacks with less than 10% of the sections segmented manually. The results can be used as input for subsequent high-level segmentation steps.

Martin Schober, Markus Axer, Marcel Huysegoms, Nicole Schubert, Katrin Amunts, Timo Dickscheid
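The core idea, interpolating masks between two manually segmented, registered sections by solving Laplace's equation, can be sketched as a simple Jacobi relaxation. This is a generic illustration under simplifying assumptions (wrap-around at the image border, plain thresholding of the harmonic field), not the authors' morphing scheme.

    import numpy as np

    def interpolate_masks(mask_a, mask_b, n_between, n_iter=500):
        """Estimate binary masks for the sections between two segmented sections."""
        stack = np.zeros((n_between + 2,) + mask_a.shape)
        stack[0], stack[-1] = mask_a, mask_b              # fixed Dirichlet boundaries
        for _ in range(n_iter):                           # Jacobi sweeps of Laplace's eq.
            stack[1:-1] = (stack[:-2] + stack[2:]
                           + np.roll(stack[1:-1], 1, axis=1) + np.roll(stack[1:-1], -1, axis=1)
                           + np.roll(stack[1:-1], 1, axis=2) + np.roll(stack[1:-1], -1, axis=2)) / 6.0
        return stack[1:-1] > 0.5                          # threshold the harmonic field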
Compression Impact on LIRE-based CBIR of Colonoscopy Data

In a large experimental study, the impact of lossy image compression standards on LIRE-based CBIR is assessed in different compression scenarios. Image retrieval is conducted on NBI colonoscopic imagery with the aim of polyp dignity assessment. Results clearly indicate that (1) JPEG2000 compression hardly impacts retrieval results, (2) it is important to compress both query and retrieval data in the case of JPEG and JPEG XR, and (3) some LIRE descriptors deliver good retrieval results on these data, which calls for further investigation.

Peter Elmer, Michael Häfner, Toru Tamaki, Shinji Tanaka, Rene Thaler, Andreas Uhl, Shigeto Yoshida
Biometrische Messung der Pupillenreaktion

Measuring pupil diameter and its reaction to light stimuli (pupillometry) has long been used for diagnostic purposes to draw conclusions about vital functions, e.g. in sleep laboratories to assess sleepiness. The aim of this work was the development and implementation of an experimental setup for the standardized and reproducible measurement of the temporal course of the pupil diameter as a function of defined light stimuli. From the resulting curve, several characteristic parameters were determined. Thanks to a high frame rate of up to 560 frames per second and a low illuminance of approximately 25 lux, the setup achieves a high accuracy. A relationship between alcohol consumption and an altered pupil reaction could be demonstrated.

Christian Hintze, Johannes Junggeburth, Bernhard Hill, Dorit Merhof
Data Completeness Estimation for 3D C-Arm Scans with Rotated Detector to Enlarge the Lateral Field-of-View

In this paper, we describe a method to enlarge the lateral field-of-view of a Short Scan and two Large Volume Scan trajectories by rotating the detector such that the detector diagonal, instead of the detector width, limits the lateral field-of-view. After implementing the modifications we obtain a gain of 25.8% in field-of-view diameter, accompanied by a simultaneous loss of height of about 50%. The coverage is increased by 20% for the Short Scan and by 16.7% for the Large Volume Scans. After introducing a detector-shift trade-off we still increase the covered field-of-view width while compensating the axial loss. A reduced source-to-detector distance was also investigated, which further increases the coverage. Finally, a Helical Large Volume Scan trajectory was simulated, leading to the same width gain and coverage but increasing the height by 20.8% for the maximal shift and 33.3% for the trade-off version in comparison to a standard Large Volume Scan.

Daniel Stromer, Patrick Kugler, Sebastian Bauer, Günter Lauritsch, Andreas Maier
Make the Most of Time
Temporal Extension of the iTV Algorithm for 4D Cardiac C-Arm CT

Gated 4D cardiac imaging with C-arm CT scanners suffers from insufficient image quality due to strong angular undersampling. To deal with this problem, we suggest an iterative reconstruction method with spatial and temporal total variation regularization based on an established framework which controls the relative contributions of raw data error minimization and regularization. This new method is tested on a simulated heart phantom and on two clinical data sets. We show that the additional use of temporal regularization is advantageous compared to spatial regularization exclusively, with the relative root mean square error lowered from 11.75% to 8.24% in the phantom study.

Viktor Haase, Oliver Taubmann, Yixing Huang, Gregor Krings, Günter Lauritsch, Andreas Maier, Alfred Mertins
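A generic way to write down such a spatio-temporally regularized reconstruction objective is given below; the exact weighting and update scheme of the iTV framework are not spelled out in the abstract, so this is only the standard form for orientation:

    \min_x \; \lVert A x - p \rVert_2^2 \;+\; \lambda_s \, \mathrm{TV}_{\mathrm{spatial}}(x) \;+\; \lambda_t \, \mathrm{TV}_{\mathrm{temporal}}(x)

where A is the forward projector, p the measured (gated) projection data, and the weights λ_s and λ_t balance data fidelity against spatial and temporal total variation regularization.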
Visualization of Vector Fields Derived from 3D Polarized Light Imaging

Polarized Light Imaging provides 3D fiber orientation models of brain connectivity at an ultra-high resolution. A vector is hereby assigned to each voxel as an estimate of fiber orientation in 3D space. In order to understand the spatial orientation of fibers and to analyze their course in the brain, visualization techniques are needed that keep the high resolution and make the large-volume data readable. We here present a new method for visualizing the entire vector field and prioritizing specific orientations or anatomical information, enabling for the first time the analysis of the complex architecture of nerve fibers at ultra-high resolution in 3D.

Nicole Schubert, David Gräßel, Uwe Pietrzyk, Katrin Amunts, Markus Axer
Clustering of Aortic Vortex Flow in Cardiac 4D PC-MRI Data

This paper presents a method for clustering aortic vortical blood flow using a reliable dissimilarity measure combined with a clustering technique. Current medical studies investigate specific properties of aberrant blood flow patterns such as vortices, since a correlation with the genesis and evolution of various cardiovascular diseases is assumed. The classification requires a precise definition of spatio-temporal vortex entities, which is performed manually. This task is time-consuming for larger studies and error-prone due to inter-observer variability. In contrast, our method allows an automatic and reliable vortex clustering that facilitates the vortex classification. We introduce an efficient calculation of a dissimilarity measure that groups spatio-temporally adjacent vortices. We combine our dissimilarity measure with the most commonly used clustering techniques. Each combination was applied to 15 4D PC-MRI datasets. The clustering results were qualitatively compared to a manually generated ground truth from two domain experts.

Monique Meuschke, Kai Lawonn, Benjamin Köhler, Uta Preim, Bernhard Preim
Enhancing Visibility of Blood Flow in Volume Rendered Cardiac 4D PC-MRI Data

Four-dimensional phase-contrast magnetic resonance imaging (4D PC-MRI) is a method to non-invasively acquire blood flow in the aorta. This flow is commonly visualized as path lines inside the vessels. Direct volume rendering (DVR) uses a transfer function to render the dataset directly, without the need for a manual segmentation. Since the transfer function can be manipulated on the fly, DVR allows fast exploration of the dataset. Using a simple intensity-based transfer function, however, either the intravascular blood flow is hidden behind the vessel's front side or the entire vessel has to be culled from the visualization. Therefore, we propose an automated mechanism that reveals the vessel anatomy by removing its front side based on the viewing direction. This creates an effect similar to front-face culling on surface renderings. The visibility of focus objects inside the anatomy is guaranteed, while spatial awareness is largely maintained due to the presence of anatomical structures as context information. While we were able to confirm the effectiveness of our method in an interview with a collaborating radiologist, it still proved to be somewhat limited by the data quality and the lack of a manual segmentation.

Benjamin Behrendt, Benjamin Köhler, Uta Preim, Bernhard Preim
Adaptive Animations of Vortex Flow Extracted from Cardiac 4D PC-MRI Data

Four-dimensional phase-contrast magnetic resonance imaging (4D PC-MRI) acquisitions facilitate the assessment of time-resolved, 3D blood flow information. Vortex flow in the aorta or pulmonary artery is of special clinical interest, since it can be an indicator for different pathologies of the cardiovascular system. Qualitative methods commonly employ animated pathlines to depict the time-varying flow. Visual clutter is reduced via vortex flow extraction. Since vortices are often not present during the full cardiac cycle, parts of the animation show an empty vessel or flow that is not of interest. To exploit the given video length more efficiently, we propose Vortex Animations with Adaptive Speed (VAAS), which depend on the time- and view-dependent feature visibility. Collaborating experts considered our technique as useful for presentations, case discussions and documentation purposes. Four diverse datasets are presented in a qualitative evaluation.

Benjamin Köhler, Uta Preim, Matthias Grothoff, Matthias Gutberlet, Bernhard Preim
EchoTrack für die navigierte ultraschall-geführte Radiofrequenzablation der Schilddrüse

One of the main reasons for the lacking translation of computer-based assistance systems into clinical routine is their poor integrability into the clinical workflow. To address this problem, a novel navigation concept was recently presented that enables the visualization of medical instruments in relation to ultrasound data on the basis of a single mobile modality (EchoTrack). In this contribution the new method is adapted for the first time to ultrasound-guided radiofrequency ablation (RFA) of thyroid nodules. The scientific contributions comprise (1) a concept for localizing the required RFA probe under sterility constraints, (2) the characterization of the motion of neck structures caused during ultrasound-guided interventions by swallowing, speech, breathing, head rotation, and mechanical pressure of the ultrasound probe, and (3) the quantification of the error that arises when the position of target structures is estimated with an electromagnetic skin marker. The results show that the adaptations made enable the use of the EchoTrack system for thyroid RFA with sufficient puncture accuracy. For the tested confounding motions, however, anatomical structures are displaced by more than 1 cm, so a sufficient safety margin should be included when a skin marker is used.

Nasrin Bopp, Alfred Michael Franz, Dominique Cheray, Stefan Delorme, Hüdayi Korkusuz, Christian Erbelding, Lena Maier-Hein
A Memory Management Library for CT-Reconstruction on GPUs

Driven by improved computational throughput, multi- and many-core processors have been increasingly used in medical image processing. As these systems contain a discrete memory node, programmers have to manually manage the data transfer. To improve throughput by overlapping data transfers and task execution, special hardware details have to be known and should be considered with care. Data management could be even more tedious when the data size exceeds the GPU memory. In this work, we present a library that provides a convenient interface for CT reconstruction. Further, it contains a transparent data management, automatic data partitioning in case the GPU memory is insufficient, and overlapping techniques for improved performance. Our evaluations reveal that the library is able to reduce the amount of necessary code lines by ≈ 63% with respect to a comparable manual implementation. Additionally, a speedup of 38.1% for a volume size of 256 (10.7% for a volume size of 512) could be achieved by the library’s overlapping technique.

Hao Wu, Martin Berger, Andreas Maier, Daniel Lohmann
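The automatic partitioning idea can be illustrated independently of any particular GPU API: split the volume into chunks that each fit the available device memory and stream them one after another. The sketch below is a plain-Python illustration with assumed names (the volume is a numpy array); the library described above handles this transparently.

    def partition_volume(volume, gpu_bytes_free, safety=0.8):
        """Yield z-chunks of a reconstruction volume that each fit into GPU memory."""
        bytes_per_slice = volume[0].nbytes
        slices_per_chunk = max(1, int(safety * gpu_bytes_free / bytes_per_slice))
        for z in range(0, volume.shape[0], slices_per_chunk):
            yield z, volume[z:z + slices_per_chunk]

    # for z0, chunk in partition_volume(vol, gpu_bytes_free=2**30):
    #     upload, back-project, and download each chunk; with two CUDA streams the
    #     transfer of one chunk can overlap with the kernel execution of another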
Assessing Out-of-the-box Software for Automated Hippocampus Segmentation

A comparison of four out-of-the-box software packages for automated hippocampus segmentation reveals that AHEAD and Freesurfer deliver the most satisfying results in terms of software usability and segmentation reliability and are thus recommended to be used in a fused manner.

Michael Gschwandtner, Yvonne Höller, Michael Liedlgruber, Eugen Trinka, Andreas Uhl
Aorta Segmentation in Axial Cardiac Cine MRI via Graphical Models

We propose an automatic approach to aorta segmentation in axial cardiac cine MRI. The segmentation task is formulated as a probabilistic inference problem, seeking the most probable constellation of aorta locations and shapes over time. To this end, a graphical model is developed that implements the mutual dependencies of the aorta parameters along the cine sequence. Our approach integrates effective means of manual guidance for post-correction in case of erroneous results, requiring user interaction only where necessary. Experiments on a data set of 20 cine sequences showed average Dice coefficients close to the inter-reader variability while outperforming previous work in the field. Only two post-corrections were required for the entire data set. Results also indicate high stability of our approach w.r.t. re-parameterization.

Marko Rak, Julian Alpers, Alena-Kathrin Schnurr, Klaus-Dietz Tönnies
Automatic Detection of Ostia in the Left Atrium

Atrial fibrillation is a widespread heart arrhythmia that can be treated by catheter ablation. This procedure may involve a planning step, which can be performed based on a 3D model of the patient's left atrium (LA) segmented from a CT or MRI volume. One of the first decisions to make during treatment is whether to use a single-shot device or a radio-frequency catheter. This decision is based on the size of the pulmonary vein ostia and the ablation path as defined by ablation lines. Recently, a method for the automatic annotation of ablation lines was proposed. It is based on the positions of the pulmonary vein (PV) ostia of the left atrium. To facilitate a fully automatic approach, we present a novel learning-based method based on mesh skeletonization that automatically detects the pulmonary vein ostium positions in a 3D LA model. In our evaluation, we found a success rate of 86% and an average error of 4.3 ± 2.6 mm.

Matthias Hoffmann, Martin Koch, Norbert Strobel, Andreas Maier
Light Field Particle Image Velocimetry by Plenoptic Image Capturing for 3D-Display of Simulated Blood Flow in Cerebral Aneurysms

Particle image velocimetry of simulated blood flow within transparent 3D resin models of cerebral aneurysms allows 3D real-time simulation of the flow dynamics with and without implants as a new pre-interventional opportunity for endovascular treatment. The purpose of the experiments is to demonstrate the feasibility of 3D flow visualization in small volumes (below 5 mm diameter) of particle flow, even though the flow dynamics are captured with a monocular camera set-up.

Matthias F. Carlsohn, André Kemmling, Arne Petersen, Lennart Wietzke
Evaluation of Time-Dependent Wall Shear Stress Visualizations for Cerebral Aneurysms

For the rupture risk assessment of cerebral aneurysms, the blood flow is approximated using computational fluid dynamics (CFD). Unsteady CFD inflow conditions yield time-dependent vector fields, from which the wall shear stress (WSS), an important indicator of rupture risk in clinical research, is extracted. For WSS evaluation, its magnitude is usually color-coded on the aneurysm surface mesh. Hence, time-dependent results would require an animated depiction of all WSS values. Instead, a static 3D representation is mostly employed by choosing a single point in time, e.g., peak systole. Our framework comprises such a static WSS visualization, an animated visualization, and a new technique that includes statistical information. We compare them in a user study and match them against a ground truth extracted by a clinical expert. The new technique with statistical information turned out to be superior for the depiction of time-dependent WSS when matched against the ground truth.

Sylvia Glaßer, Jan Hirsch, Philipp Berg, Patrick Saalfeld, Oliver Beuing, Gabor Janiga, Bernhard Preim
Nanopartikeldetektion in Zellpräparaten mit dem Hyperspektral-Imaging-Verfahren

Using the HyperSpectral Imaging (HSI) technique, which is based on enhanced dark-field microscopy, the spatial distribution of gold nanoparticles (Au-NP) in cells could be visualized. Individual particles were identified by their spectra, whereby the high total light intensity led to marked differences in the spectral intensity amplitudes. These could be compensated by rescaling the amplitude values, which facilitates further comparisons and analyses. In addition, the shape of the recorded spectra showed a dependence on particle size. Overall, the HSI technique, as demonstrated here for Au-NP, makes an important contribution to the localization and identification of NP and thus to the nanotoxicological assessment of exposed cells and tissues.

Undral Erdenetsogt, Antje Vennemann, Martin Wiemann, Hans-Gerd Lipinski
Skully
An Educational Web Application for the Human Skull

One fundamental topic in anatomy is the structure of the human skull. Common sources for studying are books, atlases, and other two-dimensional material, which makes it hard to locate the separate parts of the human skull in three-dimensional space and to understand the relations between them. We developed a browser application visualizing three-dimensional models of the human skull. The user can interact with these models, e.g. move them freely to gain insight into the three-dimensional structure and position of the single bones, highlight separate bones or groups of them, and view textual annotations. In a developer mode, those annotations can easily be edited by anatomy experts or course tutors to contain all relevant information. Furthermore, the application is platform independent; it can be used, for example, on computers with different operating systems, on tablets, and on mobile phones.

Franziska Bertelshofer, Oliver Brehm, Paul Fitzner, Jacqueline Lammert, Friedrich Paulsen, Rolf Janka, Günther Greiner
Photographic Documentation by Mobile Devices Integrated into Case Report Forms of Clinical Trials

Subject’s medical data in controlled clinical trials is captured in electronic case report forms. We present a mobile application (App) that utilizes the smartphone-integrated camera for integrating photographic documentation directly from subject’s bed-side. Color reference cards are placed next to the wound and used for geometric and contrast registration. This ensures high image quality with the inexpensive consumer hardware. In addition, a code is detected from the card for subject identification. The App connects to an image analysis server and looks up the code-study-subject relation. Then, the smartphone connects with OpenClinica, an open source and electronic data capture system for clinical trials, which has been approved by the US Food and Drug Administration (FDA). The App is demonstrated by an ongoing clinical trial, where wound healing after a vascular surgery is followed up photographically. All 205 images collected in the study so far have been identified and integrated into subject’s eCRF correctly. Avoiding manual mapping of photographs to study subjects avoids errors and latency, decreases costs, and improves data security and privacy.

Daniel Haak, Aliaa Doma, Thomas M. Deserno
Detection and Quantification of Cytoskeletal Granules

The cytoskeleton is a dynamic scaffolding maintaining cell stability and motility. Keratin filaments form cytoskeletal networks in the cytoplasm of epithelial cells. Genetic mutations of keratin genes have been implicated in human skin diseases, such as epidermolysis bullosa simplex. Keratin network organization is severely impaired in these instances resulting in the formation of prominent granular aggregates. To gain an understanding of the pathomechanisms underlying keratin granule formation and to screen for factors affecting this process, an automated segmentation routine of keratin granules is proposed in this paper. As such, the presented method holds a lot of potential for an objective assessment of keratin organization to improve treatment of genetic keratinopathies.

Dennis Eschweiler, Jakob Unger, Kraisorn Chaisaowong, Mugdha Sawant, Reinhard Windoffer, Rudolf E. Leube, Dorit Merhof
Towards Computer-Assisted Diagnosis of Precursor Colorectal Lesions

Colorectal cancer (CRC) is the fourth most common cancer in men worldwide (International Agency for Research on Cancer, 2008). In many countries, regular colonoscopy screening is established as a crucial strategy for CRC prevention. During colonoscopy screening, detected precursor lesions such as adenomas and serrated polyps can be removed, thus reducing CRC incidence and mortality. After such a polypectomy, histological diagnosis is fundamental. With continuously rising numbers of participants in screening programs as well as removed polyps, there is an increased demand for automated pre-screening and classification of colorectal lesions in digitized histological slides. Hence, in this study, initial experiments were conducted to evaluate which approaches are suitable for an automated pre-screening and classification of colorectal polyps into the known entities with different risk profiles. According to the latest WHO classification, key factors for distinguishing precursor lesions are serration, the distribution of serration, and cytological dysplasia. In this study, we investigate a learning scheme based on decision trees to identify image features that precisely describe these key factors. It is shown that shape factors and histogram-based features extracted from digitized histological slides are suitable for computer-assisted pre-screening and classification of precursor colorectal lesions.

Claudia Dach, Tilman Rau, Carol Geppert, Alexander Hartmann, Thomas Wittenberg, Christian Münzenmayer
Automatic Detection of Relevant Regions for the Morphological Analysis of Bone Marrow Slides

The morphological differentiation of bone marrow is fundamental for the diagnosis of leukemia. For conventional cytological analysis, the bone marrow aspirate smear is stained and examined by means of a light microscope. First, the cell density, the bone marrow fat content, and qualitative changes of the cells are observed at a mid-level magnification. Afterwards, cells of different types are identified and counted. Especially this step is time-consuming, subjective, tedious, and error-prone. Furthermore, repeated examinations of a slide may yield intra- and inter-observer variances. For that reason, an automation of the bone marrow analysis is pursued. In the meantime, semi-automated prototypes for the analysis are available in which the determination of relevant regions is done manually. In order to accomplish a fully automated workflow, the relevant regions have to be found automatically. In this work, we propose a method for the automatic determination of relevant regions based on a decision tree using color features. 1024 virtual slides of bone marrow smears are used for the development and evaluation of the proposed approach. For the test dataset, the accuracy of the trained decision tree classifier is 99.85%, the sensitivity is 65.88%, and the specificity is 99.98%. A color-coded evaluation in the virtual slides is also provided. With the proposed method it is possible for the first time to detect relevant regions automatically for the automated morphological analysis. The method provides a valuable suggestion of regions to analyze at high magnification with a high accuracy.

Sebastian Krappe, Richard Leisering, Torsten Haferlach, Thomas Wittenberg, Christian Münzenmayer
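A minimal sketch of the classification step described above, with made-up color features (mean and standard deviation per RGB channel); the actual features and tree parameters of the paper are not given in the abstract, so everything below is an illustrative assumption.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def color_features(tile_rgb):
        """Per-tile color features: mean and standard deviation of each RGB channel."""
        pixels = tile_rgb.reshape(-1, 3).astype(float)
        return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])

    # X: features of labelled tiles from the virtual slides, y: 1 = relevant region
    # clf = DecisionTreeClassifier(max_depth=8).fit(X, y)
    # relevant = clf.predict([color_features(tile)])[0] == 1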
Image Quality Analysis of Limited Angle Tomography Using the Shift-Variant Data Loss Model

This paper investigates the application of the shift-variant data loss (SVDL) model for image quality assessment of a state-of-the-art reconstruction technique, the weighted total variation (wTV), in limited angle tomography. The SVDL model is used to analyze the acquired frequency information in 2D fan-beam limited angle tomography. The wTV algorithm is applied to reconstruct specific mathematical phantoms. The experiments show that the reconstructed image quality depends on the relation between the source trajectory and the geometric structure of the imaged object, in particular its position, shape, size, and orientation.

Yixing Huang, Guenter Lauritsch, Mario Amrehn, Oliver Taubmann, Viktor Haase, Daniel Stromer, Xiaolin Huang, Andreas Maier
Image Descriptors in Angiography

Despite recent advances in the field of image-guided interventions (IGI), the bottleneck for Angiography/X-ray guided procedures in particular is accurate and robust 2D-3D image alignment. The conventional, straightforward parameter optimization approach is known to be ill-posed and less efficient. Retrieval-based approaches may be a superior choice here. However, this requires salient and robust image features that can handle the difficulties of angiographic images, such as high levels of noise and contrast variance. In this paper, we investigate state-of-the-art features from the field of computer vision regarding their applicability and reliability in the challenging scenario of Angiography.

Katharina Hofschen, Timo Geissler, Nicola Rieke, Christian Schulte zu Berge, Nassir Navab, Stefanie Demirci
Suggesting Optimal Delineation Planes for Interactive 3D Segmentation

Many tasks in clinical practice and medical image research require a good segmentation of anatomical structures. All too often this has to be done manually, which is a very time-consuming process. Several tools aim at speeding up this process by using reconstruction algorithms to interpolate structures based on manually provided contours. The resulting accuracy of these interpolations as well as the required amount of time depends very much on the placement of the contour information. In this work we present an algorithm, which reduces the time required by automatically suggesting optimal delineation planes. Based on the distance between the reconstructed 3D surface mesh and edges in the original image it determines which next plane will likely result in the maximum improvement for the 3D reconstruction. The proposed approach was evaluated by comparing segmentations that are created with purely manual plane selection, with segmentations that are created using the automatic plane suggestion. We show a significant reduction in the time required to segment a number of structures with only a slight decrease in segmentation accuracy.

Andreas Fetzer, Nico Riecker, Jasmin Metzger, Caspar Goch, Hans-Peter Meinzer, Marco Nolden
Backmatter
Metadata
Title
Bildverarbeitung für die Medizin 2016
Edited by
Thomas Tolxdorff
Thomas M. Deserno
Heinz Handels
Hans-Peter Meinzer
Copyright year
2016
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-662-49465-3
Print ISBN
978-3-662-49464-6
DOI
https://doi.org/10.1007/978-3-662-49465-3
