2023 | Book

Mitosis Domain Generalization and Diabetic Retinopathy Analysis

MICCAI Challenges MIDOG 2022 and DRAC 2022, Held in Conjunction with MICCAI 2022, Singapore, September 18–22, 2022, Proceedings

About this book

This book constitutes the proceedings of two challenges that were held in conjunction with the 25th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2022, which took place in Singapore during September 18-22, 2022.

The 20 long and 5 short peer-reviewed papers included in this volume stem from the following two biomedical image analysis challenges:

Mitosis Domain Generalization Challenge (MIDOG 2022) and Diabetic Retinopathy Analysis Challenge (DRAC 2022)

The challenges share the need for developing and fairly evaluating algorithms that increase the accuracy, reproducibility, and efficiency of automated image analysis in clinically relevant applications.

Table of Contents

Frontmatter

DRAC

Frontmatter
nnU-Net Pre- and Postprocessing Strategies for UW-OCTA Segmentation Tasks in Diabetic Retinopathy Analysis
Abstract
Ultra-wide (UW) optical coherence tomography angiography (OCTA) imaging provides new opportunities for diagnosing medical diseases. To further support doctors in the recognition of diseases, automated image analysis pipelines would be helpful. Therefore, the MICCAI DRAC 2022 challenge was organized, providing a standardized UW (swept-source) OCTA dataset for testing the effectiveness of various algorithms on diabetic retinopathy (DR) analysis. Our team trained segmentation models for UW-OCTA analysis and was ultimately ranked among the three top-performing teams for segmenting DR lesions. This paper therefore summarizes our proposed strategy for this task and further describes our approach to image quality assessment and DR grading.
Felix Krause, Dominik Heindl, Hana Jebril, Markus Karner, Markus Unterdechler
Automated Analysis of Diabetic Retinopathy Using Vessel Segmentation Maps as Inductive Bias
Abstract
Recent studies suggest that early stages of diabetic retinopathy (DR) can be diagnosed by monitoring vascular changes in the deep vascular complex. In this work, we investigate a novel method for automated DR grading based on ultra-wide optical coherence tomography angiography (UW-OCTA) images. Our work combines OCTA scans with their vessel segmentations, which then serve as inputs to task-specific networks for lesion segmentation, image quality assessment, and DR grading. For this, we generate synthetic OCTA images to train a segmentation network that can be applied directly to real OCTA data. We test our approach on MICCAI 2022’s DR analysis challenge (DRAC). In our experiments, the proposed method performs on par with the baseline model.
Linus Kreitner, Ivan Ezhov, Daniel Rueckert, Johannes C. Paetzold, Martin J. Menten
Bag of Tricks for Diabetic Retinopathy Grading of Ultra-Wide Optical Coherence Tomography Angiography Images
Abstract
The performance of disease classification can be improved by refining the training process, for example through changes in data augmentation, optimization methods, and deep learning model architectures. In the Diabetic Retinopathy Analysis Challenge, we employ a series of such techniques to enhance the performance of diabetic retinopathy grading. In this paper, we examine a collection of these improvements and empirically evaluate their impact on final model accuracy. Experiments show that these techniques can significantly improve model performance. For this task, we use a single SeResNext to improve the validation score from 0.8322 to 0.8721.
Renyu Li, Yunchao Gu, Xinliang Wang, Sixu Lu
Deep Convolutional Neural Network for Image Quality Assessment and Diabetic Retinopathy Grading
Abstract
Quality assessment of ultra-wide optical coherence tomography angiography (UW-OCTA) images, followed by lesion segmentation and proliferative diabetic retinopathy (PDR) detection, is of great significance for the diagnosis of diabetic retinopathy. However, due to the complexity of UW-OCTA images, it is challenging to achieve automatic image quality assessment and PDR detection on a limited dataset. This work presents a fully automated convolutional neural network-based method for image quality assessment and retinopathy grading. In the first stage, the dataset was augmented to eliminate the class-imbalance problem. In the second stage, the EfficientNet-B2 network, pre-trained on ImageNet, was used for quality assessment and lesion grading of UW-OCTA images. We evaluated our method on the DRAC2022 dataset. A quadratic weighted kappa score of 0.7704 was obtained on the task 2 image quality assessment test set and 0.8029 on the task 3 retinopathy grading test set.
Zhenyu Chen, Liqin Huang
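The grading and quality-assessment tasks above are ranked by the quadratic weighted kappa. A minimal sketch of how this metric can be computed with scikit-learn (the label values below are purely illustrative):
```python
# Quadratic weighted kappa, the ranking metric for the DRAC grading tasks.
# The labels below are illustrative, not challenge data.
from sklearn.metrics import cohen_kappa_score

y_true = [0, 1, 2, 2, 1, 0, 2]   # reference grades / quality labels
y_pred = [0, 1, 1, 2, 2, 0, 2]   # model predictions

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
print(f"quadratic weighted kappa: {qwk:.4f}")
```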
Diabetic Retinal Overlap Lesion Segmentation Network
Abstract
Diabetic retinopathy (DR) is a major cause of blindness, and its pathogenesis is unknown. Ultra-wide optical coherence tomography angiography (UW-OCTA) imaging can help ophthalmologists diagnose DR. Automatic and accurate segmentation of lesions is essential for the diagnosis of DR, yet accurately identifying and segmenting lesions in UW-OCTA images remains a challenge. We propose a modified nnUNet, named nnUNet-CBAM, and train three networks to segment each lesion separately. Our method was evaluated in the DRAC2022 diabetic retinopathy analysis challenge, where segmentation results were tested on 65 standardized UW-OCTA images. Our method achieved a mean dice similarity coefficient (mDSC) of 0.4963 and a mean intersection over union (mIOU) of 0.3693.
Zhiqiang Gao, Jinquan Guo
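The segmentation track is scored with the mean Dice similarity coefficient and mean IoU quoted above. A minimal sketch of the per-mask computation, assuming binary masks stored as NumPy arrays:
```python
# Dice similarity coefficient and intersection-over-union for one binary mask pair.
# Per-lesion scores are then averaged over images and lesion types to obtain
# the mDSC / mIOU figures quoted above.
import numpy as np

def dice_and_iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
    iou = (intersection + eps) / (union + eps)
    return dice, iou
```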
An Ensemble Method to Automatically Grade Diabetic Retinopathy with Optical Coherence Tomography Angiography Images
Abstract
Diabetic retinopathy (DR) is a complication of diabetes and one of the major causes of vision impairment in the global population. As the early-stage manifestation of DR is usually very mild and hard to detect, an accurate diagnosis via eye screening is clinically important to prevent vision loss at later stages. In this work, we propose an ensemble method to automatically grade DR using ultra-wide optical coherence tomography angiography (UW-OCTA) images available from the Diabetic Retinopathy Analysis Challenge (DRAC) 2022. First, we adopt state-of-the-art classification networks, i.e., ResNet, DenseNet, EfficientNet, and VGG, and train them to grade UW-OCTA images with different splits of the available dataset. Ultimately, we obtain 25 models, of which the top 16 are selected and ensembled to generate the final predictions. During the training process, we also investigate a multi-task learning strategy and add an auxiliary classification task, image quality assessment, to improve model performance. Our final ensemble model achieved a quadratic weighted kappa (QWK) of 0.9346 and an area under the curve (AUC) of 0.9766 on the internal testing dataset, and a QWK of 0.839 and an AUC of 0.8978 on the DRAC challenge testing dataset.
Yuhan Zheng, Fuping Wu, Bartłomiej W. Papież
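A minimal sketch of the soft-voting step described above, in which class probabilities from the selected models are averaged; the `models` list and input tensor are hypothetical placeholders, not the authors' released code:
```python
# Soft-voting ensemble: average the softmax outputs of several classifiers
# (e.g. the selected top-16 checkpoints) and take the argmax as the DR grade.
import torch

@torch.no_grad()
def ensemble_predict(models, image: torch.Tensor) -> int:
    probs = []
    for model in models:
        model.eval()
        logits = model(image.unsqueeze(0))          # (1, num_grades)
        probs.append(torch.softmax(logits, dim=1))
    mean_prob = torch.stack(probs).mean(dim=0)      # average class probabilities
    return int(mean_prob.argmax(dim=1))
```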
Bag of Tricks for Developing Diabetic Retinopathy Analysis Framework to Overcome Data Scarcity
Abstract
Recently, diabetic retinopathy (DR) screening utilizing ultra-wide optical coherence tomography angiography (UW-OCTA) has been used in clinical practice to detect signs of early DR. However, developing a deep learning-based DR analysis system using UW-OCTA images is not trivial due to the difficulty of data collection and the absence of public datasets. Under such realistic constraints, a model trained on small datasets may achieve sub-par performance. Therefore, to keep ophthalmologists from being misled by a model’s incorrect decisions, models should be robust even in data-scarce settings. To address these practical challenges, we present a comprehensive empirical study for DR analysis tasks, including lesion segmentation, image quality assessment, and DR grading. For each task, we introduce a robust training scheme by leveraging ensemble learning, data augmentation, and semi-supervised learning. Furthermore, we propose reliable pseudo labeling that excludes uncertain pseudo-labels based on the model’s confidence scores to reduce the negative effect of noisy pseudo-labels. By exploiting the proposed approaches, we achieved 1st place in the Diabetic Retinopathy Analysis Challenge (code is available at https://github.com/vuno/DRAC22_MICCAI_FAI).
Gitaek Kwon, Eunjin Kim, Sunho Kim, Seongwon Bak, Minsung Kim, Jaeyoung Kim
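A hedged sketch of the confidence-filtered pseudo-labeling idea described above: pseudo-labels whose maximum softmax probability falls below a threshold are discarded. The threshold and data-loading details are assumptions, not the authors' settings (their code is linked above):
```python
# Confidence-filtered pseudo-labelling: keep only unlabeled samples whose
# predicted class probability exceeds a threshold (0.9 is an assumed value).
import torch

@torch.no_grad()
def make_pseudo_labels(model, unlabeled_loader, threshold: float = 0.9):
    model.eval()
    pseudo_set = []
    for images in unlabeled_loader:                  # batches of unlabeled images
        probs = torch.softmax(model(images), dim=1)
        conf, labels = probs.max(dim=1)
        keep = conf >= threshold                     # drop uncertain predictions
        for img, lab in zip(images[keep], labels[keep]):
            pseudo_set.append((img, int(lab)))
    return pseudo_set
```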
Deep-OCTA: Ensemble Deep Learning Approaches for Diabetic Retinopathy Analysis on OCTA Images
Abstract
Ultra-wide optical coherence tomography angiography (OCTA) has become an important imaging modality in diabetic retinopathy (DR) diagnosis. However, little research has focused on automatic DR analysis using ultra-wide OCTA. In this paper, we present novel and practical deep-learning solutions based on ultra-wide OCTA for the Diabetic Retinopathy Analysis Challenge (DRAC). For the first task, segmentation of DR lesions, we utilize UNet and UNet++ to segment three lesions with strong data augmentation and model ensembling. For the second task, image quality assessment, we create an ensemble of Inception-V3, SE-ResNeXt, and Vision Transformer models; pre-training on a large dataset and a hybrid MixUp and CutMix strategy are both adopted to boost the generalization ability of our models. For the third task, DR grading, we build a Vision Transformer and find that a model pre-trained on color fundus images serves as a useful substrate for OCTA images. Extensive ablation studies demonstrate the effectiveness of each designed component in our solutions. The proposed methods rank 4th, 3rd, and 5th on the three leaderboards of DRAC, respectively. Our code is publicly available at https://github.com/FDU-VTS/DRAC.
Junlin Hou, Fan Xiao, Jilan Xu, Yuejie Zhang, Haidong Zou, Rui Feng
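A minimal sketch of the MixUp and CutMix operations mentioned for the image quality assessment task; how the hybrid strategy switches between them per batch is an assumption, not the authors' exact recipe:
```python
# MixUp and CutMix on an image batch x with integer labels y.
# The training loss then becomes lam * CE(pred, y_a) + (1 - lam) * CE(pred, y_b).
import numpy as np
import torch

def mixup(x, y, alpha=0.4):
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[perm], y, y[perm], lam

def cutmix(x, y, alpha=1.0):
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(x.size(0))
    h, w = x.shape[-2:]
    ch, cw = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = np.random.randint(h), np.random.randint(w)
    y1, y2 = max(cy - ch // 2, 0), min(cy + ch // 2, h)
    x1, x2 = max(cx - cw // 2, 0), min(cx + cw // 2, w)
    x = x.clone()
    x[:, :, y1:y2, x1:x2] = x[perm][:, :, y1:y2, x1:x2]   # paste the cut region
    lam = 1 - (y2 - y1) * (x2 - x1) / (h * w)              # corrected mixing ratio
    return x, y, y[perm], lam
```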
Deep Learning-Based Multi-tasking System for Diabetic Retinopathy in UW-OCTA Images
Abstract
Diabetic retinopathy causes various abnormalities in retinal vessels, and detecting and identifying vessel anomalies is challenging because of the inherent complexity of the retinal vasculature. UW-OCTA provides high-resolution images of these vessels for diagnosing vascular lesions; however, the images suffer from noise. We propose a deep learning-based multi-tasking system for DR in UW-OCTA images that handles both diagnosis and image quality checking. We segment three kinds of retinal lesions with data-adaptive U-Net architectures (i.e., nnUNet), and grade image quality and DR severity by soft-voting over the outputs of multiple fine-tuned convolutional neural networks. On the test set of the DRAC2022 challenge, we achieve a Dice similarity coefficient of 0.5292 for segmentation and quadratic weighted kappas of 0.7246 and 0.7157 for image quality assessment and DR grading, respectively. The performance of our proposed approach demonstrates that task-adaptive U-Net planning and a soft ensemble of CNNs can enhance the performance of single baseline models for diagnosis and screening of UW-OCTA images.
Jungrae Cho, Byungeun Shon, Sungmoon Jeong
Semi-supervised Semantic Segmentation Methods for UW-OCTA Diabetic Retinopathy Grade Assessment
Abstract
People with diabetes are more likely to develop diabetic retinopathy (DR) than healthy people, and DR is a leading cause of blindness. At present, the diagnosis of diabetic retinopathy relies mainly on experienced clinicians recognizing fine features in color fundus images, which is a time-consuming task. Therefore, to promote the development of automatic UW-OCTA DR detection, we propose a novel semi-supervised semantic segmentation method for UW-OCTA DR image grade assessment. This method first uses the MAE algorithm to perform semi-supervised pre-training on the UW-OCTA DR grade assessment dataset to mine supervisory information in the UW-OCTA images, thereby alleviating the need for labeled data. Secondly, to more fully mine the lesion features of each region in the UW-OCTA image, we construct a cross-algorithm ensemble DR tissue segmentation algorithm by deploying three algorithms with different visual feature processing strategies: pre-trained MAE, ConvNeXt, and SegFormer. The ensemble is named MCS-DRNet after the initials of these three sub-algorithms. Finally, we use the MCS-DRNet algorithm as an inspector to check and revise the preliminary results of the DR grade evaluation algorithm. The experimental results show that the mean dice similarity coefficients of MCS-DRNet v1 and v2 are 0.5161 and 0.5544, respectively. The quadratic weighted kappa of the DR grading evaluation is 0.7559. Our code is available at https://github.com/SupCodeTech/DRAC2022.
Zhuoyi Tan, Hizmawati Madzin, Zeyu Ding
Image Quality Assessment Based on Multi-model Ensemble Class-Imbalance Repair Algorithm for Diabetic Retinopathy UW-OCTA Images
Abstract
In the diagnosis of diabetic retinopathy (DR), ultra-wide optical coherence tomography angiography (UW-OCTA) has received extensive attention because it can non-invasively detect neovascular changes in diabetic retinopathy images. However, in clinical applications, UW-OCTA images often suffer a variety of distortions caused by uncontrollable factors, which in turn affects the diagnosis of DR. Therefore, screening for images with better imaging quality is crucial to improving the diagnostic efficiency of DR. In this paper, to promote the development of automatic UW-OCTA DR image quality assessment, we propose a multi-model ensemble class-imbalance repair (MMECIR) algorithm for UW-OCTA DR image quality grading. The models integrated in this algorithm are ConvNeXt, EfficientNet v2, and Xception. The experimental results show that the MMECIR algorithm can be applied well to UW-OCTA diabetic retinopathy image quality grading (the quadratic weighted kappa of this algorithm is 0.6578). Our code is available at https://github.com/SupCodeTech/DRAC2022.
Zhuoyi Tan, Hizmawati Madzin, Zeyu Ding
An Improved U-Net for Diabetic Retinopathy Segmentation
Abstract
Diabetic retinopathy (DR) is a common diabetic complication that can lead to blindness in severe cases. Ultra-wide (swept-source) optical coherence tomography angiography (UW-OCTA) imaging can help ophthalmologists in the diagnosis of DR. Automatic and accurate segmentation of lesion areas is essential in the diagnosis of DR. However, several challenges remain for accurately segmenting lesion areas in UW-OCTA images: varying lesion locations, diverse morphology, and low contrast. To address these problems, we propose a novel framework to segment neovascularization (NV), nonperfusion areas (NA), and intraretinal microvascular abnormalities (IMA), which consists of two parts: 1) we feed the images of the three lesion types into three different channels to segment the three lesions simultaneously; 2) we improve the traditional 2D U-Net by adding residual modules and dilated convolutions. We evaluate the proposed method on the Diabetic Retinopathy Analysis Challenge (DRAC) at MICCAI 2022. The mean Dice and mean IoU obtained by the method on the test cases are 0.4757 and 0.3538, respectively.
Xin Chen, Yanbin Chen, Chaonan Lin, Lin Pan
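A hedged sketch of the kind of building block described above, a residual module with dilated convolutions added to a 2D U-Net; channel counts and the dilation rate are illustrative assumptions:
```python
# Residual block with dilated 3x3 convolutions, usable inside a 2D U-Net encoder.
import torch.nn as nn

class ResidualDilatedBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, dilation: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=dilation, dilation=dilation),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the skip connection matches the output channels
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))
```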
A Vision Transformer Based Deep Learning Architecture for Automatic Diagnosis of Diabetic Retinopathy in Optical Coherence Tomography Angiography
Abstract
Diabetic retinopathy (DR) is an eye abnormality that is a leading cause of blindness and affects the majority of patients with at least a 15-year history of diabetes. Diagnosing DR requires image quality assessment, lesion segmentation, and DR grade classification; however, a fully automatic DR analysis pipeline has not yet been developed. Therefore, the DRAC 2022 challenge proposed three tasks; Task 1: Segmentation of Diabetic Retinopathy Lesions; Task 2: Image Quality Assessment; Task 3: Diabetic Retinopathy Grading. These tasks aim to build robust yet adaptable models for automatic DR diagnosis using the provided OCT angiography (OCTA) dataset. In this paper, we propose an automatic DR diagnosis method combining deep learning benchmarking and image processing of OCTA. The proposed method achieved a Dice of 0.6046 and Cohen’s kappas of 0.8075 and 0.8902 on the three tasks, respectively, ranking second in the competition. The code is available at https://github.com/KT-biohealth/DRAC22.
Sungjin Choi, Bosoung Jeoun, Jaeyoung Anh, Jaehyup Jeong, Yongjin Choi, Dowan Kwon, Unho Kim, Seoyoung Shin
Segmentation, Classification, and Quality Assessment of UW-OCTA Images for the Diagnosis of Diabetic Retinopathy
Abstract
Diabetic Retinopathy (DR) is a severe complication of diabetes that can cause blindness. Although effective treatments exist (notably laser) to slow the progression of the disease and prevent blindness, the best treatment remains prevention through regular check-ups (at least once a year) with an ophthalmologist. Optical Coherence Tomography Angiography (OCTA) allows for the visualization of the retinal vascularization and the choroid at the microvascular level in great detail, which allows doctors to diagnose DR with more precision. In recent years, algorithms for DR diagnosis have emerged along with the development of deep learning and the improvement of computer hardware. However, these usually focus on retina photography, and there are no current methods that can automatically analyze DR using Ultra-Wide OCTA (UW-OCTA). The Diabetic Retinopathy Analysis Challenge 2022 (DRAC22) provides a standardized UW-OCTA dataset to train and test the effectiveness of various algorithms on three tasks: lesion segmentation, quality assessment, and DR grading. In this paper, we present our solutions for the three tasks of the DRAC22 challenge. The obtained results are promising and placed us in the top 5 of the segmentation task, the top 4 of the quality assessment task, and the top 3 of the DR grading task. The code is available at https://github.com/Mostafa-EHD/Diabetic_Retinopathy_OCTA.
Yihao Li, Rachid Zeghlache, Ikram Brahim, Hui Xu, Yubo Tan, Pierre-Henri Conze, Mathieu Lamard, Gwenolé Quellec, Mostafa El Habib Daho
Data Augmentation by Fourier Transformation for Class-Imbalance: Application to Medical Image Quality Assessment
Abstract
Diabetic retinopathy (DR) is a common ocular disease in diabetic patients. In DR analysis, doctors first need to select excellent-quality ultra-wide optical coherence tomography angiography (UW-OCTA) images, since only high-quality images can be used for lesion segmentation and proliferative diabetic retinopathy (PDR) detection. In practical applications, UW-OCTA yields only a small number of poor-quality images, so datasets constructed from UW-OCTA face a class-imbalance problem. In this work, we employ a data augmentation strategy and develop a loss function to alleviate class imbalance. Specifically, we apply a Fourier transformation to the limited amount of poor-quality data, thus expanding this class. We also exploit the class-imbalance statistics to improve the cross-entropy loss through weighting. The method is evaluated on the DRAC2022 dataset, where we achieved a quadratic weighted kappa of 0.7647 and an AUC of 0.8458.
Zhicheng Wu, Yanbin Chen, Xuru Zhang, Liqin Huang
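A hedged sketch of the two ideas described above. Jittering the Fourier amplitude spectrum is one plausible way to expand the minority (poor-quality) class, and class-frequency weighting is one way to weight the cross-entropy loss; neither is claimed to be the authors' exact implementation:
```python
# Fourier-domain augmentation of a 2D grayscale image plus a class-weighted
# cross-entropy loss; strengths and class counts are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn

def fourier_augment(image: np.ndarray, strength: float = 0.1) -> np.ndarray:
    spectrum = np.fft.fft2(image)                       # image: 2D grayscale array
    amplitude, phase = np.abs(spectrum), np.angle(spectrum)
    amplitude *= 1.0 + strength * np.random.randn(*amplitude.shape)  # jitter amplitudes
    augmented = np.fft.ifft2(amplitude * np.exp(1j * phase))
    return np.real(augmented).astype(np.float32)

# Weighted cross-entropy: weights inversely proportional to class frequency.
class_counts = torch.tensor([120.0, 540.0])             # illustrative (poor, good) counts
weights = class_counts.sum() / (len(class_counts) * class_counts)
criterion = nn.CrossEntropyLoss(weight=weights)
```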
Automatic Image Quality Assessment and DR Grading Method Based on Convolutional Neural Network
Abstract
Diabetic retinopathy (DR) is a common ocular complication in diabetic patients and a major cause of blindness in the population. DR often leads to progressive changes in the structure of the vascular system and causes abnormalities. In DR analysis, image quality needs to be evaluated first and images with better imaging quality selected, followed by proliferative diabetic retinopathy (PDR) detection. Therefore, in this paper, the MixNet classification network was first used for image quality assessment (IQA), and then a ResNet50-CBAM network was used for DR grading of the images, with both networks combined with a k-fold cross-validation strategy. We evaluated our method at the 2022 Diabetic Retinopathy Analysis Challenge (DRAC), where image quality was assessed on 1103 ultra-wide optical coherence tomography angiography (UW-OCTA) images and DR grading was performed on 997 UW-OCTA images. Our method achieved quadratic weighted kappas of 0.7547 and 0.8010 on the two test sets, respectively.
Wen Zhang, Hao Chen, Daisong Li, Shaohua Zheng
A Transfer Learning Based Model Ensemble Method for Image Quality Assessment and Diabetic Retinopathy Grading
Abstract
Diabetic retinopathy (DR) is a chronic complication of diabetes that damages the retina and is one of the leading causes of blindness. In diabetic retinopathy analysis, it is necessary to first assess the quality of the images and select those with better imaging quality, and then perform DR analysis such as DR grading. It is therefore crucial to implement a flexible and robust method for automatic image quality assessment and DR grading. Due to the high complexity, weak individual differences, and noise interference of ultra-wide optical coherence tomography angiography (UW-OCTA) images, individual classification networks have not achieved satisfactory accuracy on such tasks and do not generalize well. Therefore, in this work, we use a multi-model ensemble method, ensembling different baseline networks of RegNet and EfficientNetV2, which simply and significantly improves prediction accuracy and robustness. A transfer learning-based solution is proposed for the problem of insufficient retinopathy image data: after applying feature enhancement to the images, networks pre-trained on ImageNet data are fine-tuned on the UW-OCTA tasks. Our method achieves a quadratic weighted kappa of 0.778 and an AUC of 0.887 in image quality assessment (IQA), and a kappa of 0.807 and an AUC of 0.875 in diabetic retinopathy grading.
Xiaochao Yan, Zhaopei Li, Jianhui Wen, Lin Pan
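A minimal sketch of the transfer-learning setup described above: an ImageNet-pretrained backbone from torchvision gets a new classification head and is fine-tuned on the UW-OCTA task. The particular backbone and the number of grades are illustrative assumptions:
```python
# ImageNet-pretrained EfficientNetV2 backbone with a replaced classification head,
# ready to be fine-tuned on UW-OCTA grading images.
import torch.nn as nn
from torchvision import models

num_classes = 3                                   # e.g. three DR grades (assumed)
backbone = models.efficientnet_v2_s(weights=models.EfficientNet_V2_S_Weights.IMAGENET1K_V1)
in_features = backbone.classifier[1].in_features
backbone.classifier[1] = nn.Linear(in_features, num_classes)
# All layers remain trainable; the ImageNet weights only serve as initialization.
```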
Automatic Diabetic Retinopathy Lesion Segmentation in UW-OCTA Images Using Transfer Learning
Abstract
Regular retinal screening and timely treatment are the only ways to avoid vision loss due to Diabetic Retinopathy (DR). However, the shortage of ophthalmologists and optometrists makes DR screening and treatment programs challenging for the growing global diabetic population. Computer-aided automatic DR screening and detection systems offer a more sustainable approach to dealing with this situation. The Diabetic Retinopathy Analysis Challenge 2022 (DRAC22), in association with the 25th International Conference on Medical Image Computing and Computer Assisted Intervention 2022 (MICCAI 2022), created the opportunity for researchers worldwide to work on automatic DR diagnosis on UW-OCTA images of the retina. As automatic segmentation of different DR lesions is the first and most crucial step in the DR screening procedure, we addressed the task of “Segmentation of Diabetic Retinopathy Lesions” among the three tasks of the challenge. We used transfer learning to automatically segment lesions from the retinal images. The chosen pre-trained deep learning model was trained, validated, and tested on the DRAC22 segmentation dataset. It achieved a mean Dice Similarity Coefficient (DSC) of 32.06% and a mean Intersection over Union (IoU) of 22.05% on the test dataset during the challenge submission. Some variations in the training procedure led the model to a mean DSC of 43.36% and a mean IoU of 31.03% on the test dataset during the post-challenge submission. The code is available at https://github.com/Sufianlab/FS_AS_DRAC22.
Farhana Sultana, Abu Sufian, Paramartha Dutta

MIDOG

Frontmatter
Reference Algorithms for the Mitosis Domain Generalization (MIDOG) 2022 Challenge
Abstract
Robust mitosis detection on images from different tumor types, pathology labs, and species is a challenging task that was addressed in the MICCAI Mitosis Domain Generalization (MIDOG) 2022 challenge. In this work, we describe three reference algorithms that were provided as baselines for the challenge: a Mask-RCNN-based instance segmentation model trained on the MIDOG 2022 dataset, and two different versions of the domain-adversarial RetinaNet that already served as the baseline for the MIDOG 2021 challenge, one trained on the MIDOG 2022 dataset and the other trained only on human breast carcinoma from MIDOG 2021. The domain-adversarial RetinaNet trained on the MIDOG 2022 dataset had the highest F1 score of 0.7135 on the final test set. When trained on breast carcinoma only, the same network had a much lower F1 score of 0.4719, indicating a significant domain shift in mitotic figure and tissue representation between tumor types.
Jonas Ammeling, Frauke Wilm, Jonathan Ganz, Katharina Breininger, Marc Aubreville
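The domain-adversarial RetinaNet baselines attach a domain classifier to the detector's features. A minimal sketch of the gradient reversal layer that such domain-adversarial training typically relies on; this is an illustration, not the published reference implementation:
```python
# Gradient reversal layer: identity on the forward pass, negated (scaled) gradient
# on the backward pass, so the shared features become domain-invariant.
import torch

class GradientReversal(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam: float = 1.0):
    return GradientReversal.apply(x, lam)
```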
Radial Prediction Domain Adaption Classifier for the MIDOG 2022 Challenge
Abstract
This paper describes our contribution to the MIDOG 2022 challenge for detecting mitotic cells. One of the major problems addressed in the MIDOG 2022 challenge is robustness under the natural variance that appears in real-life histopathology data. To address this problem, we use an adapted YOLOv5s model for object detection in conjunction with a new Domain Adaption Classifier (DAC) variant, the Radial-Prediction-DAC, to achieve robustness under domain shifts. In addition, we increase the variability of the available training data using stain augmentation in HED color space. Using the suggested method, we obtain a test set F1-score of 0.6658.
Jonas Annuscheit, Christian Krumnow
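A hedged sketch of stain augmentation in HED color space using scikit-image's color deconvolution; the jitter ranges are assumptions, not the values used in the paper:
```python
# Stain augmentation in HED space: deconvolve RGB into H, E and DAB channels,
# perturb each channel, and recombine.
import numpy as np
from skimage.color import rgb2hed, hed2rgb

def hed_stain_augment(rgb: np.ndarray, sigma: float = 0.05, bias: float = 0.03) -> np.ndarray:
    hed = rgb2hed(rgb)
    alpha = np.random.uniform(1 - sigma, 1 + sigma, size=(1, 1, 3))  # per-stain scale
    beta = np.random.uniform(-bias, bias, size=(1, 1, 3))            # per-stain shift
    return np.clip(hed2rgb(hed * alpha + beta), 0, 1)
```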
Detecting Mitoses with a Convolutional Neural Network for MIDOG 2022 Challenge
Abstract
This work presents a mitosis detection method based on a single vanilla Convolutional Neural Network (CNN). Our method consists of two steps: given an image, we first apply the CNN with a sliding-window technique to extract patches that contain mitoses; we then calculate each extracted patch’s class activation map to obtain the precise location of the mitosis. To increase model performance on high-domain-variance pathology images, we train the CNN with a data augmentation pipeline, a noise-tolerant loss that copes with unlabeled images, and a multi-round active learning strategy. In the MIDOG 2022 challenge, our approach, with an EfficientNet-b3 CNN model, achieved an overall F1 score of 0.7323 in the preliminary test phase and 0.6847 in the final test phase (task 1). Our approach sheds light on the broader applicability of class activation maps for object detection in pathology images.
Hongyan Gu, Mohammad Haeri, Shuo Ni, Christopher Kazu Williams, Neda Zarrin-Khameh, Shino Magaki, Xiang ‘Anthony’ Chen
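A minimal sketch of the class activation map used to localize a mitosis inside a positively classified patch, assuming a CNN whose last feature map feeds a global-average-pooling and linear classifier; the tensor shapes are illustrative, not the authors' EfficientNet-b3 pipeline:
```python
# Class activation map (CAM): weight the final feature map by the classifier
# weights of the mitosis class; the peak of the map marks the mitosis location.
import torch
import torch.nn.functional as F

@torch.no_grad()
def class_activation_map(features: torch.Tensor, fc_weight: torch.Tensor, cls: int):
    # features: (C, H, W) feature map of a patch; fc_weight: (num_classes, C)
    cam = torch.einsum("c,chw->hw", fc_weight[cls], features)
    cam = F.relu(cam)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-7)   # normalized to [0, 1]
```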
Tackling Mitosis Domain Generalization in Histopathology Images with Color Normalization
Abstract
In this paper, we propose a method for mitosis detection in histopathology images in an unsupervised domain adaptation setting. Our method is a two-step approach. The first step is color normalization, which is an unsupervised domain adaptation at the input level. In the second step, we use an object detection method for mitosis detection. Using a final test set consisting of patches from whole slide images and containing 100 independent tumor cases across 10 tumor types, we evaluate our method and obtain an F1 score of 0.671.
Satoshi Kondo, Satoshi Kasai, Kousuke Hirasawa
A Deep Learning Based Ensemble Model for Generalized Mitosis Detection in H&E Stained Whole Slide Images
Abstract
Identification of mitotic cells and estimation of the mitotic index are important parameters in understanding the pathology of cancer and predicting response to chemotherapy and overall survival. This is usually performed manually by pathologists, and there can be considerable variability in their assessments. The use of deep learning (DL) models can help address this issue. However, most state-of-the-art methods are trained for specific cancer types and often fail when used across multiple tumor types. Hence there is a clear need for a more ‘pan-tumor’ approach to identifying mitotic figures. We propose a generalized DL model for mitosis detection using the MIDOG 2022 Challenge dataset. Our model makes its final predictions using an ensemble of a transformer-based object detector and a separate classifier. Our approach achieved an F1-score of 0.7569 and stood second in the MIDOG 2022 challenge; the predictions from the object detector alone achieved an F1-score of 0.7510. Our model generalizes well to address the domain shifts caused by variability in image acquisition, protocols, and tumor tissue types.
Sujatha Kotte, VG Saipradeep, Naveen Sivadasan, Thomas Joseph, Hrishikesh Sharma, Vidushi Walia, Binuja Varma, Geetashree Mukherjee
Fine-Grained Hard-Negative Mining: Generalizing Mitosis Detection with a Fifth of the MIDOG 2022 Dataset
Abstract
Making histopathology image classifiers robust to a wide range of real-world variability is a challenging task. Here, we describe a candidate deep learning solution for the Mitosis Domain Generalization Challenge 2022 (MIDOG) to address the problem of generalization for mitosis detection in images of hematoxylin-eosin-stained histology slides under high variability (scanner, tissue type, and species variability). Our approach consists in training a rotation-invariant deep learning model using aggressive data augmentation, with a training set enriched with hard negative examples and automatically selected negative examples from the unlabeled part of the challenge dataset. To optimize the performance of our models, we investigated a hard negative mining regime search procedure that led us to train our best model using a subset of image patches representing 19.6% of our training partition of the challenge dataset. Our candidate model ensemble achieved an F1-score of 0.697 on the final test set after automated evaluation on the challenge platform, the third-best overall score in the MIDOG 2022 Challenge.
Maxime W. Lafarge, Viktor H. Koelzer
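A hedged sketch of one round of hard-negative mining as described above: confident detections that overlap no ground-truth mitosis are collected and reused as negative training examples. Thresholds and the box format are illustrative assumptions:
```python
# Hard-negative mining: keep confident predicted boxes that match no annotation.
def box_iou(a, b):
    # boxes as (x1, y1, x2, y2)
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-7)

def mine_hard_negatives(pred_boxes, pred_scores, gt_boxes,
                        score_thresh=0.5, iou_thresh=0.1):
    hard = []
    for box, score in zip(pred_boxes, pred_scores):
        if score < score_thresh:
            continue
        if all(box_iou(box, gt) < iou_thresh for gt in gt_boxes):
            hard.append(box)           # false positive -> candidate negative patch
    return hard
```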
Multi-task RetinaNet for Mitosis Detection
Abstract
The count of mitotic cells is a key feature in tumor diagnosis. However, due to the variability of mitotic cell morphology, detecting mitotic cells in tumor tissue is a highly challenging task. At the same time, the performance of trained models often declines when there is a large difference between the source domain and the target domain (e.g., different tumor types and scanners). Therefore, it is necessary to develop algorithms for detecting mitotic cells that are robust in domain shift scenarios. Our work adds a foreground detection and tumor classification task to the baseline (RetinaNet) and utilizes data augmentation to improve our model’s detection ability and domain generalization performance. We achieve excellent performance on the challenging preliminary test dataset (F1 score: 0.5809) and the final test dataset (F1 score: 0.6300).
Ziyue Wang, Yang Chen, Zijie Fang, Hao Bian, Yongbing Zhang
Backmatter
Metadata
Title
Mitosis Domain Generalization and Diabetic Retinopathy Analysis
Editors
Bin Sheng
Marc Aubreville
Copyright Year
2023
Electronic ISBN
978-3-031-33658-4
Print ISBN
978-3-031-33657-7
DOI
https://doi.org/10.1007/978-3-031-33658-4
