
About this book

This book constitutes the refereed proceedings of the 15th European Congress on Digital Pathology, ECDP 2019, held in Warwick, UK, in April 2019. The 21 full papers presented in this volume were carefully reviewed and selected from 30 submissions. The congress theme was Accelerating Clinical Deployment, with a focus on computational pathology and on leveraging the power of big data and artificial intelligence to bridge the gaps between research, development, and clinical uptake.



Image Datasets and Virtual Staining


Bringing Open Data to Whole Slide Imaging

Faced with the need to support a growing number of whole slide imaging (WSI) file formats, our team has extended a long-standing community file format (OME-TIFF) for use in digital pathology. The format makes use of the core TIFF specification to store multi-resolution (or “pyramidal”) representations of a single slide in a flexible, performant manner. Here we describe the structure of this format and its performance characteristics, as well as open-source library support for reading and writing pyramidal OME-TIFFs.
Sébastien Besson, Roger Leigh, Melissa Linkert, Chris Allan, Jean-Marie Burel, Mark Carroll, David Gault, Riad Gozim, Simon Li, Dominik Lindner, Josh Moore, Will Moore, Petr Walczysko, Frances Wong, Jason R. Swedlow

PanNuke: An Open Pan-Cancer Histology Dataset for Nuclei Instance Segmentation and Classification

In this work we present an experimental setup to semi-automatically obtain exhaustive nuclei labels across 19 different tissue types, and thereby construct a large pan-cancer dataset for nuclei instance segmentation and classification with minimal sampling bias. The dataset consists of 455 visual fields, of which 312 are randomly sampled from more than 20K whole slide images at different magnifications, from multiple data sources. In total the dataset contains 216.4K labeled nuclei, each with an instance segmentation mask. We independently pursue three separate streams to create the dataset (detection, classification, and instance segmentation) by ensembling in total 34 models trained on already existing public datasets, thereby showing that the learnt knowledge can be efficiently transferred to create new datasets. All three streams are validated either on existing public benchmarks or by expert pathologists, then merged and validated once again to create PanNuke, a large, comprehensive pan-cancer nuclei segmentation and detection dataset.
Jevgenij Gamper, Navid Alemi Koohbanani, Ksenija Benet, Ali Khuram, Nasir Rajpoot

Active Learning for Patch-Based Digital Pathology Using Convolutional Neural Networks to Reduce Annotation Costs

Methods to reduce the need for costly data annotations are becoming increasingly important as deep learning gains popularity in medical image analysis and digital pathology. Active learning is an appealing approach that can reduce the amount of annotated data needed to train machine learning models, but traditional active learning strategies do not always work well with deep learning. In patch-based machine learning systems, active learning methods typically request annotations for small individual patches, which can be tedious and costly for the annotator, who needs visual context for the patches. We propose an active learning framework that selects for annotation regions built up of several patches, which should increase annotation throughput. The framework was evaluated with several query strategies on the task of nuclei classification. Convolutional neural networks were trained on small patches, each containing a single nucleus. Traditional query strategies performed worse than random sampling. A K-centre sampling strategy showed a modest gain. Further investigation is needed in order to achieve significant performance gains using deep active learning for this task.
Jacob Carse, Stephen McKenna
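The K-centre strategy evaluated above can be illustrated with a short sketch. This is not the authors' implementation, just a generic greedy core-set selector over patch feature vectors; the function name and the toy `pts` array are ours:

```python
import numpy as np

def k_centre_greedy(features, k, seed=0):
    """Greedy K-centre (core-set) selection: repeatedly pick the point
    farthest from the currently selected set. features is (n, d)."""
    rng = np.random.default_rng(seed)
    n = features.shape[0]
    selected = [int(rng.integers(n))]          # arbitrary first centre
    # distance of every point to its nearest selected centre
    dist = np.linalg.norm(features - features[selected[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))             # farthest remaining point
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(features - features[nxt], axis=1))
    return selected

# Toy example: three well-separated pairs; k=3 picks one point per pair.
pts = np.array([[0, 0], [0.1, 0], [10, 0], [10.1, 0], [5, 8], [5, 8.1]])
picked = k_centre_greedy(pts, k=3)
```

Each query round would then request annotations for the selected indices; the greedy rule favours coverage of the feature space rather than raw model uncertainty.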

Patch Clustering for Representation of Histopathology Images

Whole Slide Imaging (WSI) has become an important topic during the last decade. Even though significant progress in both medical image processing and computational resources has been achieved, there are still problems in WSI that need to be solved. A major challenge is the scan size. The dimensions of digitized tissue samples may exceed 100,000 by 100,000 pixels, causing memory and efficiency obstacles for real-time processing. The main contribution of this work is representing a WSI by selecting a small number of patches for algorithmic processing (e.g., indexing and search). As a result, we reduced search time and storage requirements by 50%–90%, while losing only a few percentage points in patch retrieval accuracy. A self-organizing map (SOM) was applied to local binary patterns (LBP) and deep features of the KimiaPath24 dataset in order to cluster patches that share the same characteristics. We used a Gaussian mixture model (GMM) to represent each class with a rather small (10%–50%) portion of its patches. The results showed that LBP features can outperform deep features. By selecting only 50% of all patches after SOM clustering and GMM patch selection, we obtained 65% accuracy for retrieval of the best match, while the maximum accuracy (using all patches) was 69%.
Wafa Chenni, Habib Herbi, Morteza Babaie, Hamid R. Tizhoosh
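The local binary patterns used as hand-crafted features above are cheap to compute. A minimal numpy sketch of the basic 8-neighbour variant follows; the paper may use a different LBP configuration (e.g. rotation-invariant uniform codes), so treat this as illustrative only:

```python
import numpy as np

def lbp8(img):
    """Basic 8-neighbour local binary pattern on a 2-D array.
    Each interior pixel gets an 8-bit code: bit i is set when the i-th
    neighbour (clockwise from top-left) is >= the centre pixel."""
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

# On a flat patch every neighbour ties the centre, so every code is 255.
flat = np.full((5, 5), 7)
hist = np.bincount(lbp8(flat).ravel(), minlength=256)
```

A normalized histogram of these codes per patch is the feature vector that would feed the SOM clustering step.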

Virtually Redying Histological Images with Generative Adversarial Networks to Facilitate Unsupervised Segmentation: A Proof-of-Concept Study

Approaches relying on adversarial networks facilitate image-to-image translation based on unpaired training and thereby open new possibilities for special tasks in image analysis. We propose a methodology to improve the segmentability of histological images by making use of image-to-image translation: we generate virtual stains and exploit the additional information during segmentation. Specifically, a very basic pixel-based segmentation approach is applied in order to focus on the information content available at pixel level and to avoid any bias that might be introduced by more elaborate techniques. The results of this proof-of-concept trial indicate a performance gain compared to segmentation with the source stain only. Further experiments including more powerful supervised state-of-the-art machine learning approaches and larger evaluation data sets need to follow.
Michael Gadermayr, Barbara M. Klinkhammer, Peter Boor

Virtualization of Tissue Staining in Digital Pathology Using an Unsupervised Deep Learning Approach

Histopathological evaluation of tissue samples is a key practice in patient diagnosis and drug development, especially in oncology. Historically, Hematoxylin and Eosin (H&E) has been used by pathologists as a gold standard staining. However, in many cases, various target specific stains, including immunohistochemistry (IHC), are needed in order to highlight specific structures in the tissue. As tissue is scarce and staining procedures are tedious, it would be beneficial to generate images of stained tissue virtually. Virtual staining could also generate in-silico multiplexing of different stains on the same tissue segment. In this paper, we present a sample application that generates FAP-CK virtual IHC images from Ki67-CD8 real IHC images using an unsupervised deep learning approach based on CycleGAN. We also propose a method to deal with tiling artifacts caused by normalization layers and we validate our approach by comparing the results of tissue analysis algorithms for virtual and real images.
Amal Lahiani, Jacob Gildenblat, Irina Klaman, Shadi Albarqouni, Nassir Navab, Eldad Klaiman

Evaluation of Colour Pre-processing on Patch-Based Classification of H&E-Stained Images

This paper compares the effects of colour pre-processing on the classification performance of H&E-stained images. Variations in tissue preparation procedures, acquisition systems, stain conditions and reagents are all sources of artifacts that can negatively affect computer-based classification. Pre-processing methods such as colour constancy, colour transfer and colour deconvolution have been proposed to compensate for these artifacts. In this paper we quantitatively compare the combined effect of six colour pre-processing procedures and 12 colour texture descriptors on patch-based classification of H&E-stained images. We found that colour pre-processing had negative effects on accuracy in most cases, particularly when used with colour descriptors. However, some pre-processing procedures proved beneficial when employed in conjunction with classic texture descriptors such as co-occurrence matrices, Gabor filters and Local Binary Patterns.
Francesco Bianconi, Jakob N. Kather, Constantino C. Reyes-Aldasoro
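Colour deconvolution, one of the pre-processing families compared above, separates stain contributions via the Beer-Lambert law: pixel optical densities are a linear mix of per-stain OD vectors. The sketch below uses commonly quoted Ruifrok-Johnston-style vectors for H&E (the paper's exact vectors may differ; the third, residual channel merely completes the basis):

```python
import numpy as np

# Stain optical-density vectors (rows): haematoxylin, eosin, and a
# residual third channel (cross product) that completes the basis.
H = np.array([0.650, 0.704, 0.286])
E = np.array([0.072, 0.990, 0.105])
R = np.cross(H, E)
M = np.stack([v / np.linalg.norm(v) for v in (H, E, R)])

def deconvolve(rgb):
    """Map RGB pixels in (0, 1] to per-stain concentrations.
    Beer-Lambert: OD = concentrations @ M, so c = OD @ inv(M)."""
    od = -np.log10(np.clip(rgb, 1e-6, 1.0))   # optical density
    return od @ np.linalg.inv(M)

# A pixel containing only haematoxylin at unit concentration
# (invert Beer-Lambert for the H row of M):
pure_h = 10.0 ** (-M[0])
conc = deconvolve(pure_h[None, :])
```

Recovering concentrations of [1, 0, 0] for the pure-haematoxylin pixel confirms the inversion; classifiers can then work on the stain channels instead of raw RGB.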



Automated Segmentation of DCIS in Whole Slide Images

Segmentation of ducts in whole slide images is an important step needed to analyze ductal carcinoma in-situ (DCIS), an early form of breast cancer. Here, we train several U-Net architectures (deep convolutional neural networks designed to output probability maps) to segment DCIS in whole slide images and determine the patch field of view necessary to achieve superior accuracy at the slide level. We show that a U-Net trained at 5x achieved the best test results (DSC = 0.771, F1 = 0.601), implying the U-Net benefits from wider contextual information. Our custom U-Net based architecture, trained to incorporate patches from all available resolutions, achieved test results of DSC = 0.759 (F1 = 0.682), showing improvement in the duct detecting capabilities of the model. Both architectures show performance comparable to a second expert annotator on an independent test set. This is preliminary work for a pipeline targeted at predicting recurrence risk in DCIS patients.
Nikhil Seth, Shazia Akbar, Sharon Nofech-Mozes, Sherine Salama, Anne L. Martel

A Two-Stage U-Net Algorithm for Segmentation of Nuclei in H&E-Stained Tissues

Nuclei segmentation is an important but challenging task in the analysis of hematoxylin and eosin (H&E)-stained tissue sections. While various segmentation methods have been proposed, machine learning-based algorithms and in particular deep learning-based models have been shown to deliver better segmentation performance. In this work, we propose a novel approach to segment touching nuclei in H&E-stained microscopic images using U-Net-based models in two sequential stages. In the first stage, we perform semantic segmentation using a classification U-Net that separates nuclei from the background. In the second stage, the distance map of each nucleus is created using a regression U-Net. The final instance segmentation masks are then created using a watershed algorithm based on the distance maps. Evaluated on a publicly available dataset containing images from various human organs, the proposed algorithm achieves an average aggregate Jaccard index of 56.87%, outperforming several state-of-the-art algorithms applied on the same dataset.
Amirreza Mahbod, Gerald Schaefer, Isabella Ellinger, Rupert Ecker, Örjan Smedby, Chunliang Wang
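The final step described above (watershed on the regressed distance maps) can be approximated compactly: threshold the distance map to obtain seeds, then assign every foreground pixel to its nearest seed. This nearest-seed rule is a simplistic stand-in for the marker-based watershed actually used, and all names here are illustrative:

```python
import numpy as np

def split_instances(mask, dist, seed_thresh=0.7):
    """Split a binary nuclei mask into instances using a predicted
    distance map: pixels where dist exceeds seed_thresh * max become
    seeds, and every foreground pixel joins its nearest seed."""
    seeds = np.argwhere((dist >= seed_thresh * dist.max()) & mask)
    fg = np.argwhere(mask)
    # pairwise distances between foreground pixels and seed pixels
    d = np.linalg.norm(fg[:, None, :] - seeds[None, :, :], axis=2)
    labels = np.zeros(mask.shape, dtype=int)
    labels[tuple(fg.T)] = d.argmin(axis=1) + 1   # 0 stays background
    return labels

# Toy 1-D example: one foreground strip whose distance map has two
# peaks, i.e. two touching nuclei that should be split apart.
tiny = np.ones((1, 8), dtype=bool)
dmap = np.array([[1.0, 2, 3, 2, 2, 3, 2, 1]])
inst = split_instances(tiny, dmap, seed_thresh=0.9)
```

On the toy strip the two peaks yield labels 1 and 2 split at the valley between them, which is exactly the behaviour the distance-map regression is meant to enable for touching nuclei.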

Automatic Detection of Tumor Buds in Pan-Cytokeratin Stained Colorectal Cancer Sections by a Hybrid Image Analysis Approach

This contribution introduces a novel approach to the automatic detection of tumor buds in digitalized pan-cytokeratin stained colorectal cancer slides. Tumor buds represent an invasive pattern and are frequently investigated as a new diagnostic factor for measuring the aggressiveness of colorectal cancer. However, counting the number of buds in a high power field by eyeballing under the microscope is a strenuous, lengthy and error-prone task, whereas an automated solution could save time for pathologists and enhance reproducibility. We propose a new hybrid method that consists of two steps. First, possible tumor bud candidates are detected using a chain of classical image processing methods. Afterwards, a convolutional deep neural network is applied to filter out false positive candidates detected in the first step. By comparing the automatically detected buds with a gold standard created by manual annotations, we achieve a precision of 0.977 and a sensitivity of 0.934 on test sets comprising over 8,000 tumor buds.
Matthias Bergler, Michaela Benz, David Rauber, David Hartmann, Malte Kötter, Markus Eckstein, Regine Schneider-Stock, Arndt Hartmann, Susanne Merkel, Volker Bruns, Thomas Wittenberg, Carol Geppert

Improving Prostate Cancer Detection with Breast Histopathology Images

Deep neural networks have introduced significant advancements in machine learning-based analysis of digital pathology images, including prostate tissue images. With the help of transfer learning, the classification and segmentation performance of neural network models has been further increased. However, due to the absence of large, extensively annotated, publicly available prostate histopathology datasets, several previous studies employ datasets from well-studied computer vision tasks such as ImageNet. In this work, we propose a transfer learning scheme from breast histopathology images to improve prostate cancer detection performance. We validate our approach on annotated prostate whole slide images, using a publicly available breast histopathology dataset for pre-training. We show that the proposed cross-cancer approach outperforms transfer learning from the ImageNet dataset.
Umair Akhtar Hasan Khan, Carolin Stürenberg, Oguzhan Gencoglu, Kevin Sandeman, Timo Heikkinen, Antti Rannikko, Tuomas Mirtti

Multi-tissue Partitioning for Whole Slide Images of Colorectal Cancer Histopathology Images with Deeptissue Net

Tissue composition plays an essential role in the diagnosis and prognosis of colorectal cancer (CRC). Studies have shown that the relative proportion of tissue composition on colorectal specimens is potentially prognostic of outcome in CRC patients. Important tissue partitions include blood vessel, tumor epithelium, adipose tissue, mucosal glands, mucus, muscle, stroma, necrosis, immune cell, and background/other tissues. A challenge in accurately determining quantitative measurements of tissue composition, however, is the need for automated tissue partitioning image analysis tools. Towards this goal, we present Deeptissue Net, a deep learning strategy that integrates DenseNet with Focal Loss. To show the effectiveness of Deeptissue Net, the model was trained with 40 WSIs from one site and tested on 620 WSIs from two sites. The ground truth for training and evaluation involved careful annotation of the 10 tissue compartments by expert pathologists. Deeptissue Net was trained with the tissue partitions delineated for the 10 classes on the 40 WSIs and subsequently evaluated on the remaining \(N=620\) datasets. As measured by confusion matrices, Deeptissue Net achieves accuracies of 0.72, 0.84, and 0.88 in classifying mucus, stroma, and necrosis on the 2nd batch of Dataset 1, and of 0.85 and 0.96 in classifying mucus and muscle on Dataset 2, significantly outperforming DenseNet and ResNet.
Jun Xu, Chengfei Cai, Yangshu Zhou, Bo Yao, Geyang Xu, Xiangxue Wang, Ke Zhao, Anant Madabhushi, Zaiyi Liu, Li Liang

Rota-Net: Rotation Equivariant Network for Simultaneous Gland and Lumen Segmentation in Colon Histology Images

Analysis of the shape of glands and their lumen in digitised images of Haematoxylin & Eosin stained colon histology slides can provide insight into the degree of malignancy. Segmenting each glandular component is an essential prerequisite step for subsequent automatic morphological analysis. Current automated segmentation approaches typically do not take into account the inherent rotational symmetry within histology images. We incorporate this rotational symmetry into an encoder-decoder based network by utilising group equivariant convolutions, specifically using the symmetry group of rotations by multiples of 90\(^\circ \). Our rotation equivariant network splits into two separate branches after the final up-sampling operation, where each branch outputs either the gland or the lumen segmentation. In addition, at the output of the gland branch, we use a multi-class strategy to assist with the separation of touching instances. We show that our proposed approach achieves state-of-the-art performance on the GlaS challenge dataset.
Simon Graham, David Epstein, Nasir Rajpoot
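The group equivariant convolution used above (over p4, rotations by multiples of 90°) can be sketched by correlating the input with four rotated copies of a kernel; the stack of orientation channels then rotates along with the input, and pooling over orientations preserves that property. A toy numpy illustration, not the paper's network:

```python
import numpy as np

def correlate2d(img, k):
    """'Valid' 2-D cross-correlation, written out explicitly."""
    kh, kw = k.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def p4_lift(img, k):
    """Lifting layer of a p4 group convolution: one orientation channel
    per 90-degree rotation of the kernel."""
    return np.stack([correlate2d(img, np.rot90(k, r)) for r in range(4)])

def p4_pooled(img, k):
    """Max-pool over orientation channels; rotating the input rotates
    this feature map by the same amount (equivariance)."""
    return p4_lift(img, k).max(axis=0)

rng = np.random.default_rng(1)
img = rng.random((6, 6))
k = rng.random((3, 3))
a = p4_pooled(np.rot90(img), k)      # rotate, then convolve+pool
b = np.rot90(p4_pooled(img, k))      # convolve+pool, then rotate
```

Because rotating the input only permutes and rotates the orientation channels, `a` and `b` are identical, which is the symmetry the paper builds into every layer.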

Histopathological Image Analysis on Mouse Testes for Automated Staging of Mouse Seminiferous Tubule

A whole slide image (WSI) of a mouse testicular cross-section contains hundreds of seminiferous tubules, and each seminiferous tubule in turn contains different types of germ cells in different histological regions. These factors make it challenging to segment distinct germ cells and regions on a mouse testicular cross-section. Automated segmentation of the different germ cells and regions is the first step towards a computerized spermatogenesis staging system. In this paper, a set of 28 H&E stained WSIs of mouse testicular cross-sections and 209 Stage VI-VIII tubule images were studied to develop an automated multi-task segmentation model. A deep residual network (ResNet) is first presented for seminiferous tubule segmentation from the mouse testicular cross-section. According to the types and distribution of germ cells in the tubules, we then present another deep ResNet for multi-cell (spermatid, spermatocyte, and spermatogonia) segmentation and a fully convolutional network (FCN) for multi-region (elongated spermatid, round spermatid, and spermatogonial & spermatocyte regions) segmentation. To our knowledge, this is the first computerized model for analyzing histopathological images of mouse testis. The three segmentation models presented in this paper show good segmentation performance, obtaining pixel accuracies of 94.40%, 91.26%, and 93.47% on the three segmentation tasks, respectively, which lays a solid foundation for the establishment of a mouse spermatogenesis staging system.
Jun Xu, Haoda Lu, Haixin Li, Xiangxue Wang, Anant Madabhushi, Yujun Xu

Deep Features for Tissue-Fold Detection in Histopathology Images

Whole slide imaging (WSI) refers to the digitization of a tissue specimen, which enables pathologists to explore high-resolution images on a monitor rather than through a microscope. The formation of tissue folds occurs during tissue processing. Their presence may not only cause out-of-focus digitization but can also negatively affect the diagnosis in some cases. In this paper, we compare five pre-trained convolutional neural networks (CNNs) of different depths as feature extractors to characterize tissue folds. We also explore common classifiers to discriminate folded tissue from normal tissue in hematoxylin and eosin (H&E) stained biopsy samples. In our experiments, we manually select the folded areas in roughly 2.5 mm \(\times \) 2.5 mm patches at 20x magnification as training data. The “DenseNet” with 201 layers alongside an SVM classifier outperformed all other configurations. Based on the leave-one-out validation strategy, we achieved \(96.3\%\) accuracy, whereas with augmentation the accuracy increased to \(97.2\%\). We tested the generalization of our method with five unseen WSIs from the NIH (National Cancer Institute) dataset. The accuracy for patch-wise detection was \(81\%\). One folded patch within an image suffices to flag the entire specimen for visual inspection.
Morteza Babaie, Hamid R. Tizhoosh

Computer-Assisted Diagnosis and Prognosis


A Fast Pyramidal Bayesian Model for Mitosis Detection in Whole-Slide Images

Mitosis detection in Hematoxylin and Eosin images and its quantification per mm\(^2\) is currently one of the most valuable prognostic indicators for some types of cancer, and specifically for breast cancer. In whole-slide images the main goal is to detect its presence on the full image. This paper makes several contributions to the mitosis detection task in whole-slide images in order to improve on the current state of the art in accuracy and efficiency. A new coarse-to-fine pyramidal model to detect mitosis is proposed. On each pyramid level a Bayesian convolutional neural network is trained to compute a class prediction and an uncertainty estimate for each pixel. This information is propagated top-down through the pyramid as a constraining mechanism from the levels above. To cope with local tissue and cell shape deformations, geometric invariance is also introduced as part of the model. The model achieves an F1-score of 82.6% on the MITOS ICPR-2012 test dataset when trained with samples from skin tissue, which is competitive with the current state of the art. On average a whole slide is analyzed in less than 20 s. A new dataset of 8236 mitoses from skin tissue has been created to train our models.
Santiago López-Tapia, José Aneiros-Fernández, Nicolás Pérez de la Blanca
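The per-pixel uncertainty a Bayesian CNN produces is typically estimated from several stochastic forward passes. The paper's exact uncertainty measure may differ; this generic sketch computes predictive entropy over Monte Carlo samples, the quantity most commonly propagated as an uncertainty map:

```python
import numpy as np

def predictive_entropy(mc_probs):
    """Predictive entropy of the Monte-Carlo-averaged class probabilities.
    mc_probs: (T, n_classes) softmax outputs from T stochastic forward
    passes (e.g. with dropout kept active at test time)."""
    p = mc_probs.mean(axis=0)
    return float(-np.sum(p * np.log(p + 1e-12)))

# Passes that agree yield low entropy; passes that disagree yield high.
confident = np.array([[0.95, 0.05]] * 10)
uncertain = np.array([[0.9, 0.1], [0.1, 0.9]] * 5)
```

High-entropy pixels are exactly the ones a coarse pyramid level would flag for re-examination at the next, finer level.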

Improvement of Mitosis Detection Through the Combination of PHH3 and HE Features

Mitosis detection in hematoxylin and eosin (H&E) images is prone to error due to the unspecificity of the stain for this purpose. Alternatively, the immunohistochemistry phospho-histone H3 (PHH3) stain has improved the task with a significant reduction of false negatives. These facts motivate combining features from both stains to improve mitosis detection. Here we propose an algorithm that, taking as input a pair of whole-slide images (WSI) scanned from the same slide and stained with H&E and PHH3 respectively, finds the matching between the stains of the same object. This allows both stains to be used in the detection stage. Linear filtering in combination with a local search based on a kd-tree structure is used to find potential matches between objects. A Siamese convolutional neural network (SCNN) is trained to detect the correct matches, and a CNN model is trained for mitosis detection from the matches. To the best of our knowledge, this is the first time that mitosis detection in WSI has been assessed by combining two stains. The experiments show a strong improvement in detection F1-score when H&E and PHH3 are used jointly compared to the single-stain F1-scores.
Santiago López-Tapia, Cristobal Olivencia, José Aneiros-Fernández, Nicolás Pérez de la Blanca

A New Paradigm of RNA-Signal Quantitation and Contextual Visualization for On-Slide Tissue Analysis

An objective digital pathology solution to quantify the ribonucleic acid (RNA) signal in tissue samples could enable analysis of gene expression changes in individual cancer and dysregulated normal cells (immune cells, etc.). Here, we present a new method that leverages the punctate RNA in-situ hybridization (ISH) signal to quantify gene expression while maintaining tissue context and enabling single-cell analysis and workflow. This digital pathology solution detects and quantifies the punctate dot signals generated by one- and two-color RNA ISH technology in formaldehyde-fixed paraffin-embedded (FFPE) tissue, characterizing individual spots by size, intensity, blurriness and roundness. Significantly, we determined that spots maintain similar characteristics irrespective of the RNA biomarker and/or tissue used. Verification on 31 microscope images shows agreement of R\(^2\) = 0.99 and a concordance correlation coefficient (CCC) of 0.99 between the total spot counts identified by the observer (115,154) and the algorithm (112,809). We have leveraged the unique detection features of the RNA ISH technology to develop a new method to quantify RNA signal while maintaining tissue context. It is anticipated that this method will enable analysis of gene expression changes in heterogeneous cancer and normal cells and tissues with single-cell resolution.
Auranuch Lorsakul, William Day

Digital Tumor-Collagen Proximity Signature Predicts Survival in Diffuse Large B-Cell Lymphoma

Diffuse large B-cell lymphoma (DLBCL) is a heterogeneous tumor that originates from normal B-cells. A limited number of studies have investigated the role of the acellular stromal microenvironment in outcome in DLBCL. Here, we propose a novel digital proximity signature (DPS) for predicting overall survival (OS) in DLBCL patients. We propose a novel end-to-end multi-task deep learning model for cell detection and classification, and investigate the spatial proximity of collagen (type VI) and tumor cells to estimate the DPS. To the best of our knowledge, this is the first study to perform automated analysis of tumor and collagen in DLBCL to identify potential prognostic factors. Experimental results favor our cell classification algorithm over conventional approaches. In addition, our pilot results show that strongly associated tumor-collagen regions are statistically significant (p = 0.03) in predicting OS in DLBCL patients.
Talha Qaiser, Matthew Pugh, Sandra Margielewska, Robert Hollows, Paul Murray, Nasir Rajpoot
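One simple way to turn tumor-collagen spatial proximity into a scalar is the fraction of tumor cells with a collagen detection nearby. This is our own simplification for illustration, not the DPS defined in the paper, and the coordinates and radius below are fabricated:

```python
import numpy as np

def proximity_fraction(tumour_xy, collagen_xy, radius):
    """Fraction of tumour cells having at least one collagen detection
    within `radius` (same units as the coordinates)."""
    # pairwise distances: (n_tumour, n_collagen)
    d = np.linalg.norm(tumour_xy[:, None, :] - collagen_xy[None, :, :], axis=2)
    return float((d.min(axis=1) <= radius).mean())

# Three tumour cells, two collagen detections; the middle cell is far
# from both, so 2 of 3 cells count as collagen-proximal.
t = np.array([[0.0, 0.0], [10.0, 0.0], [20.0, 0.0]])
c = np.array([[1.0, 0.0], [21.0, 0.0]])
frac = proximity_fraction(t, c, radius=2.0)
```

Such a per-slide fraction could then be dichotomized and fed into a survival analysis, which is the general pattern the abstract describes.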

An Integrated Multi-scale Model for Breast Cancer Histopathological Image Classification Using CNN-Pooling and Color-Texture Features

Breast cancer is one of the most common human neoplasms in women, commonly diagnosed through histopathological microscopy imaging. The automated classification of histopathology images can relieve some of the workload of pathologists by triaging cases. As histopathological images show a high degree of variability, useful information is often obtained at different optical magnification levels in order to make the correct diagnosis. For automated scoring, if the patient's score differs across the considered magnifications, a decision based on only one magnification level may not be reliable. This study proposes an integrated model in which scores across magnifications are combined with weights estimated by the least squares method. Moreover, unlike existing methods, we consider a novel heterogeneous committee, which includes deep and traditional members, to design a system for each magnification. As a few studies have shown, in such an ensemble often only a subset of members is sufficient to provide enough discriminative information. Hence, we use an information theoretic measure (ITS) to select optimal members for each magnification. We use the publicly available BreaKHis dataset for experimentation, and demonstrate that the proposed approach yields comparable or better performance when compared with most CNN-based frameworks.
Vibha Gupta, Arnav Bhavsar
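The least-squares combination of per-magnification scores can be written in a few lines. The score matrix below is fabricated purely for illustration (rows are cases, columns could be the 40x/100x/200x/400x levels of BreaKHis); the paper's actual estimation setup may differ:

```python
import numpy as np

# Per-magnification scores for four training cases plus known labels;
# the combination weights are fit with ordinary least squares.
scores = np.array([[0.90, 0.80, 0.70, 0.95],
                   [0.20, 0.10, 0.30, 0.15],
                   [0.85, 0.90, 0.80, 0.90],
                   [0.10, 0.20, 0.25, 0.05]])
labels = np.array([1.0, 0.0, 1.0, 0.0])

# w minimizes ||scores @ w - labels||^2
w, *_ = np.linalg.lstsq(scores, labels, rcond=None)
fused = scores @ w   # combined score per case
```

At test time each magnification's committee produces its score and the fused value `scores_new @ w` drives the final decision, so no single magnification dominates.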

Icytomine: A User-Friendly Tool for Integrating Workflows on Whole Slide Images

We present Icytomine, a user-friendly software platform for processing large images from slide scanners. Icytomine integrates into one unique framework the tools and algorithms that were developed independently for the Icy and Cytomine platforms to visualise and process digital pathology images. We illustrate the power of this new platform through the design of a dedicated program that uses a convolutional neural network to detect and classify glomeruli in kidney biopsies coming from a multicentric clinical study. We show that by streamlining the analytical capabilities of Icy with the AI tools found in Cytomine, we achieve highly promising results.
Daniel Felipe Gonzalez Obando, Diana Mandache, Jean-Christophe Olivo-Marin, Vannary Meas-Yedid

