
2022 | Book

Head and Neck Tumor Segmentation and Outcome Prediction

Second Challenge, HECKTOR 2021, Held in Conjunction with MICCAI 2021, Strasbourg, France, September 27, 2021, Proceedings

Edited by: Vincent Andrearczyk, Valentin Oreiller, Mathieu Hatt, Adrien Depeursinge

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the proceedings of the Second 3D Head and Neck Tumor Segmentation in PET/CT Challenge, HECKTOR 2021, which was held in conjunction with the 24th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2021. The challenge took place virtually on September 27, 2021, due to the COVID-19 pandemic.

The 29 contributions presented, as well as an overview paper, were carefully reviewed and selected from numerous submissions. This challenge aims to evaluate and compare the current state-of-the-art methods for automatic head and neck tumor segmentation. In the context of this challenge, a dataset of 325 delineated PET/CT images was made available for training.

Table of contents

Frontmatter
Overview of the HECKTOR Challenge at MICCAI 2021: Automatic Head and Neck Tumor Segmentation and Outcome Prediction in PET/CT Images
Abstract
This paper presents an overview of the second edition of the HEad and neCK TumOR (HECKTOR) challenge, organized as a satellite event of the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2021. The challenge is composed of three tasks related to the automatic analysis of PET/CT images for patients with Head and Neck cancer (H&N), focusing on the oropharynx region. Task 1 is the automatic segmentation of the H&N primary Gross Tumor Volume (GTVt) in FDG-PET/CT images. Task 2 is the automatic prediction of Progression Free Survival (PFS) from the same FDG-PET/CT. Finally, Task 3 is the same as Task 2 with ground truth GTVt annotations provided to the participants. The data were collected from six centers for a total of 325 images, split into 224 training and 101 testing cases. Interest in the challenge was reflected in the strong participation, with 103 registered teams and 448 result submissions. The best methods obtained a Dice Similarity Coefficient (DSC) of 0.7591 in the first task, and a Concordance index (C-index) of 0.7196 and 0.6978 in Tasks 2 and 3, respectively. In all tasks, simplicity of the approach was found to be key to ensuring generalization performance. The comparison of the PFS prediction performance in Tasks 2 and 3 suggests that providing the GTVt contour was not crucial to achieve the best results, indicating that fully automatic methods can be used. This potentially obviates the need for GTVt contouring, opening avenues for reproducible and large-scale radiomics studies including thousands of potential subjects.
Vincent Andrearczyk, Valentin Oreiller, Sarah Boughdad, Catherine Cheze Le Rest, Hesham Elhalawani, Mario Jreige, John O. Prior, Martin Vallières, Dimitris Visvikis, Mathieu Hatt, Adrien Depeursinge
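
As a minimal sketch of the two headline metrics, the Dice Similarity Coefficient and the concordance index, the following Python snippet shows one common way to compute both; the pairwise C-index formulation is illustrative and not taken from the challenge's evaluation code.

    import numpy as np

    def dice_similarity(pred, truth):
        # Dice Similarity Coefficient between two binary masks.
        pred, truth = pred.astype(bool), truth.astype(bool)
        return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum())

    def concordance_index(times, risks, events):
        # Fraction of comparable patient pairs whose predicted risk
        # ordering agrees with the observed progression ordering;
        # ties in predicted risk count as half-concordant.
        concordant, comparable = 0.0, 0
        for i in range(len(times)):
            for j in range(len(times)):
                if events[i] and times[i] < times[j]:  # pair (i, j) is comparable
                    comparable += 1
                    if risks[i] > risks[j]:
                        concordant += 1.0
                    elif risks[i] == risks[j]:
                        concordant += 0.5
        return concordant / comparable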
CCUT-Net: Pixel-Wise Global Context Channel Attention UT-Net for Head and Neck Tumor Segmentation
Abstract
Automatic segmentation of head and neck (H&N) primary tumors in FDG-PET/CT images plays a significant role in cancer treatment. In this paper, we propose the pixel-wise global context channel attention U-shaped transformer net (CCUT-Net), which uses long-range relational information, global context information, and channel information to improve the robustness and effectiveness of tumor segmentation. First, we used a convolutional neural network (CNN) and transformer fusion encoder to obtain the feature image, which not only captured the remote dependencies of the image but also reduced the impact of small datasets on the performance of the transformer. To our knowledge, this is the first application of CNN-transformer fusion to the segmentation of H&N tumors in FDG-PET/CT images. Furthermore, we propose a pixel-wise global context channel attention module in the decoder that combines the global context information and channel information of the image. It not only considers the overall information of the image but also attends to the FDG-PET and CT channel information, exploiting the advantages of the two modalities to accurately localize the position and segment the boundary of the tumor. Finally, in the encoder and decoder, we applied squeeze and excitation (SE) normalization to speed up model training and promote convergence. We evaluated our model on the test dataset of the head and neck tumor challenge with a final Dice similarity coefficient (DSC) of 0.763 and a 95% Hausdorff distance (HD95) of 3.270, which shows that our method is robust in tumor segmentation. (Team name: wangjiao)
Jiao Wang, Yanjun Peng, Yanfei Guo, Dapeng Li, Jindong Sun
A Coarse-to-Fine Framework for Head and Neck Tumor Segmentation in CT and PET Images
Abstract
Radiomics analysis can help tailor treatments for patients suffering from head and neck (H&N) cancer. It requires segmentations of the H&N tumor area in large numbers of PET and CT images, but the cost of manual segmentation is extremely high. In this paper, we propose a coarse-to-fine framework to segment the H&N tumor automatically in FluoroDeoxyGlucose (FDG)-Positron Emission Tomography (PET) and Computed Tomography (CT) images. Specifically, we trained three 3D-UNets with residual blocks to make coarse-stage, fine-stage, and refined-stage predictions, respectively. Experiments show that such a training framework can improve the segmentation quality step by step. Our framework achieved a Dice Similarity Coefficient (DSC) of 0.7733 and a Hausdorff Distance at 95% (HD95) of 3.0882 in Task 1 of the HEad and neCK TumOR segmentation and outcome prediction in PET/CT images (HECKTOR 2021) challenge, ranking second.
Chengyang An, Huai Chen, Lisheng Wang
Automatic Segmentation of Head and Neck (H&N) Primary Tumors in PET and CT Images Using 3D-Inception-ResNet Model
Abstract
In many computer vision areas, deep learning-based models have achieved state-of-the-art performance and have started to attract attention in medical imaging, where deep learning is significantly advancing the state of the art in image segmentation. To encourage generalization and better optimization of current deep learning models for head and neck segmentation, the head and neck tumor segmentation and outcome prediction in PET/CT images (HECKTOR21) challenge offered participants the opportunity to develop automatic bi-modal approaches for the 3D segmentation of Head and Neck (H&N) tumors in PET/CT scans, focusing on oropharyngeal cancers. In this paper, we propose a 3D Inception ResNet-based deep learning model (3D-Inception-ResNet) for head and neck tumor segmentation. A 3D Inception module is introduced on the encoder side and a 3D ResNet module on the decoder side. A 3D squeeze and excitation (SE) module is also inserted in each encoder block, and 3D depth-wise convolutional layers are used in the 3D Inception and 3D ResNet modules to optimize performance. The proposed model produced strong Dice coefficient (DC) and HD95 scores and could be useful for the segmentation of head and neck tumors in PET/CT images. The code is publicly available (https://github.com/RespectKnowledge/HeadandNeck21_3D_Segmentation).
Abdul Qayyum, Abdesslam Benzinou, Moona Mazher, Mohamed Abdel-Nasser, Domenec Puig
The Head and Neck Tumor Segmentation in PET/CT Based on Multi-channel Attention Network
Abstract
Automatic segmentation of head and neck (H&N) tumors plays an important and challenging role in clinical practice and radiomics research. In this paper, we developed an automated tumor segmentation method based on the combined positron emission tomography/computed tomography (PET/CT) images provided by the MICCAI 2021 Head and Neck Tumor (HECKTOR) Segmentation Challenge. Our model takes 3D U-Net as the backbone architecture, to which residual blocks are added. In addition, we propose a multi-channel attention network (MCA-Net), which fuses information from different receptive fields and assigns different weights to each channel to better capture image detail. Our network scored well on the test set (DSC 0.7681, HD95 3.1549) (id: siat).
Guoshuai Wang, Zhengyong Huang, Hao Shen, Zhanli Hu
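
Channel attention of the kind described above assigns a learned weight to each feature channel. The following PyTorch sketch shows a generic squeeze-and-excitation style block for 3D feature maps; it is a stand-in for the idea, not the authors' exact MCA-Net module.

    import torch
    import torch.nn as nn

    class ChannelAttention3D(nn.Module):
        # Generic SE-style channel attention for 3D feature maps:
        # squeeze the spatial dimensions, then learn a per-channel
        # weight in (0, 1) used to rescale each channel.
        def __init__(self, channels, reduction=8):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool3d(1)  # (B, C, D, H, W) -> (B, C, 1, 1, 1)
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x):
            b, c = x.shape[:2]
            w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
            return x * w  # reweight channels, leave spatial layout intact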
Multimodal Spatial Attention Network for Automatic Head and Neck Tumor Segmentation in FDG-PET and CT Images
Abstract
Quantitative positron emission tomography/computed tomography (PET/CT), owing to the functional metabolic and anatomical information of the human body that it presents, is useful for achieving accurate tumor delineation. However, manual annotation of a Volume Of Interest (VOI) is a labor-intensive and time-consuming task. In this study, we automatically segmented the Head and Neck (H&N) primary tumor in combined PET/CT images. Herein, we propose a convolutional neural network named Multimodal Spatial Attention Network (MSA-Net), supplemented with a Spatial Attention Module (SAM) that uses the PET image as input. We evaluated this model on the MICCAI 2021 HEad and neCK TumOR (HECKTOR) segmentation challenge dataset. Our method delivered competitive cross-validation performance, with a Dice Similarity Coefficient (DSC) of 0.757, precision of 0.788, and recall of 0.785. When we tested our method on the test dataset, we achieved an average DSC and Hausdorff Distance at 95% (HD95) of 0.766 and 3.155, respectively. Our team name is ‘Heck_Uihak’.
Minjeong Cho, Yujin Choi, Donghwi Hwang, Si Young Yie, Hanvit Kim, Jae Sung Lee
PET Normalizations to Improve Deep Learning Auto-Segmentation of Head and Neck Tumors in 3D PET/CT
Abstract
Auto-segmentation of head and neck cancer (HNC) primary gross tumor volume (GTVt) is a necessary but challenging process for radiotherapy treatment planning and radiomics studies. The HEad and neCK TumOR Segmentation Challenge (HECKTOR) 2021 comprises two major tasks: auto-segmentation of GTVt in FDG-PET/CT images and the prediction of patient outcomes. In this paper, we focus on the segmentation part by proposing two PET normalization methods to mitigate impacts from intensity variances between PET scans for deep learning-based GTVt auto-segmentation. We also compared the performance of three popular hybrid loss functions. An ensemble of our proposed models achieved an average Dice Similarity Coefficient (DSC) of 0.779 and median 95% Hausdorff Distance (HD95) of 3.15 mm on the test set. Team: Aarhus_Oslo.
Jintao Ren, Bao-Ngoc Huynh, Aurora Rosvoll Groendahl, Oliver Tomic, Cecilia Marie Futsaether, Stine Sofia Korreman
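
The abstract does not spell out the two proposed normalizations, so the sketch below shows two PET normalizations that are common in this setting, z-scoring and fixed-window SUV clipping; the window bounds are assumptions, not the paper's values.

    import numpy as np

    def zscore_normalize(pet, mask=None):
        # Standardize PET intensities, optionally using statistics
        # computed over a foreground mask only.
        vals = pet[mask] if mask is not None else pet
        return (pet - vals.mean()) / (vals.std() + 1e-8)

    def clip_rescale(pet, lo=0.0, hi=25.0):
        # Clip SUVs to a fixed window and rescale to [0, 1];
        # the 0-25 window is illustrative, not the paper's setting.
        return (np.clip(pet, lo, hi) - lo) / (hi - lo)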
The Head and Neck Tumor Segmentation Based on 3D U-Net
Abstract
Head and neck cancer is a common malignancy, and radiation therapy is its primary treatment. Mapping the target area of the head and neck tumor is the key step in devising an appropriate radiotherapy schedule, yet it is very time-consuming and tedious work; automatic segmentation of head and neck tumors is therefore highly significant. This paper adopts the U-Net network, commonly used in medical image segmentation, to automatically segment head and neck tumors from dual-modality PET-CT images. 5-fold cross-validation experiments were carried out. The average experimental results are 0.764, 7.467, 0.839, and 0.797 in terms of Dice score, HD95, recall, and precision, respectively. The mean Dice and the median HD95 on the test set are 0.778 and 3.088, respectively.
Juanying Xie, Ying Peng
3D U-Net Applied to Simple Attention Module for Head and Neck Tumor Segmentation in PET and CT Images
Abstract
Accurate and prompt automatic segmentation of medical images is crucial to clinical surgery and radiotherapy. In this paper, we propose a new automatic tumor segmentation method for the MICCAI 2021 Head and Neck Tumor segmentation challenge (HECKTOR), using positron emission tomography/computed tomography (PET/CT) images. The main structure of our model is a 3D U-Net with a SimAM attention module, which uses an energy function to assign a unique weight to each pixel. This module enabled us to effectively extract features from medical images without adding parameters. On the test set of 101 patients, our team, ‘C235’, obtained a Dice Similarity Coefficient (DSC) of 0.756 and a Hausdorff Distance at 95% (HD95) of 3.269. The full implementation based on PyTorch and the trained models are available at https://github.com/TravisL24/HECKTOR
Tao Liu, Yixin Su, Jiabao Zhang, Tianqi Wei, Zhiyong Xiao
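
The SimAM module weights each activation with an energy function and adds no learnable parameters. Below is a 3D PyTorch sketch following the published SimAM formulation; the stabilizing constant is an assumption.

    import torch
    import torch.nn as nn

    class SimAM3D(nn.Module):
        # Parameter-free attention: each voxel is weighted by an
        # inverse energy measuring its deviation from the channel mean.
        def __init__(self, eps=1e-4):
            super().__init__()
            self.eps = eps

        def forward(self, x):  # x: (B, C, D, H, W)
            n = x[0, 0].numel() - 1  # voxels per channel, minus one
            d = (x - x.mean(dim=(2, 3, 4), keepdim=True)).pow(2)
            v = d.sum(dim=(2, 3, 4), keepdim=True) / n  # per-channel variance
            e_inv = d / (4 * (v + self.eps)) + 0.5
            return x * torch.sigmoid(e_inv)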
Skip-SCSE Multi-scale Attention and Co-learning Method for Oropharyngeal Tumor Segmentation on Multi-modal PET-CT Images
Abstract
One of the primary treatment options for head and neck cancer is (chemo)radiation. Accurate delineation of the tumor contour is of great importance to successful treatment and to the prediction of patient outcomes. With this paper we take part in the HECKTOR 2021 challenge and propose our methods for automatic tumor segmentation on PET and CT images of oropharyngeal cancer patients. To this end, we investigated different deep learning methods aimed at highlighting relevant image- and modality-related features to refine the contour of the primary tumor. More specifically, we tested a Co-learning method [1] and a 3D Skip Spatial and Channel Squeeze and Excitation Multi-Scale Attention method (Skip-scSE-M) on the challenge dataset. The best results achieved on the test set were a mean Dice Similarity Score of 0.762 and a median Hausdorff Distance at 95% of 3.143.
Alessia De Biase, Wei Tang, Nikos Sourlos, Baoqiang Ma, Jiapan Guo, Nanna Maria Sijtsema, Peter van Ooijen
Head and Neck Cancer Primary Tumor Auto Segmentation Using Model Ensembling of Deep Learning in PET/CT Images
Abstract
Auto-segmentation of primary tumors in oropharyngeal cancer using PET/CT images is an unmet need that has the potential to improve radiation oncology workflows. In this study, we develop a series of deep learning models based on a 3D Residual Unet (ResUnet) architecture that can segment oropharyngeal tumors with high performance, as demonstrated through internal and external validation of large-scale datasets (training size = 224 patients, testing size = 101 patients) as part of the 2021 HECKTOR Challenge. Specifically, we leverage ResUNet models with either 256 or 512 bottleneck layer channels that demonstrate internal validation (10-fold cross-validation) mean Dice similarity coefficient (DSC) up to 0.771 and median 95% Hausdorff distance (95% HD) as low as 2.919 mm. We employ label fusion ensemble approaches, including Simultaneous Truth and Performance Level Estimation (STAPLE) and a voxel-level threshold approach based on majority voting (AVERAGE), to generate consensus segmentations on the test data by combining the segmentations produced by different trained cross-validation models. We demonstrate that our best performing ensembling approach (256 channels AVERAGE) achieves a mean DSC of 0.770 and median 95% HD of 3.143 mm through independent external validation on the test set. Our DSC and 95% HD test results are within 0.01 and 0.06 mm of the top-ranked model in the competition, respectively. Concordance of internal and external validation results suggests our models are robust and can generalize well to unseen PET/CT data. We advocate that ResUNet models coupled with label fusion ensembling approaches are promising candidates for auto-segmentation of oropharyngeal primary tumors in PET/CT. Future investigations should target the ideal combination of channel counts and label fusion strategies to maximize segmentation performance.
Mohamed A. Naser, Kareem A. Wahid, Lisanne V. van Dijk, Renjie He, Moamen Abobakr Abdelaal, Cem Dede, Abdallah S. R. Mohamed, Clifton D. Fuller
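
The AVERAGE strategy described above reduces to a voxel-level threshold on the mean of the fold-wise predictions. A minimal sketch, with the 0.5 threshold assumed:

    import numpy as np

    def average_fusion(masks, threshold=0.5):
        # Voxel-level label fusion: average the binary masks produced
        # by the cross-validation models and threshold the mean.
        mean_mask = np.mean(np.stack(masks, axis=0), axis=0)
        return (mean_mask >= threshold).astype(np.uint8)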
Priori and Posteriori Attention for Generalizing Head and Neck Tumors Segmentation
Abstract
Head and neck cancer is one of the most common cancers in the world, and the automatic segmentation of head and neck tumors with the help of computers is of great significance for treatment. In the context of the MICCAI 2021 HEad and neCK tumOR (HECKTOR) segmentation challenge, we propose a combination of a priori and a posteriori attention to segment tumor regions from PET/CT images. Specifically, 1) according to the imaging characteristics of PET, we use the normalized PET as an attention map to emphasize the tumor area on CT as a priori attention; 2) we add channel attention to the model as a posteriori attention; 3) since the test set contains unseen domains, we use Mixup to mix the PET and CT images in the training set to simulate unseen domains and enhance the generalization of the network. Our results on the test set are produced with an ensemble of multiple models, and our method ranked third in the MICCAI 2021 HECKTOR challenge with a DSC of 0.7735 and an HD95 of 3.0882.
Jiangshan Lu, Wenhui Lei, Ran Gu, Guotai Wang
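
The a priori attention and the Mixup-based domain simulation described above can both be written in a few lines. The sketch below is a plausible reconstruction; the min-max normalization and the Beta parameter are assumptions rather than the authors' exact settings.

    import torch

    def pet_prior_attention(ct, pet):
        # A priori attention: use min-max normalized PET as a soft
        # attention map over the CT volume.
        pet_norm = (pet - pet.amin()) / (pet.amax() - pet.amin() + 1e-8)
        return ct * pet_norm

    def mixup_modalities(ct, pet, alpha=0.4):
        # Mix PET and CT intensities to simulate unseen domains
        # during training; the Beta parameter is illustrative.
        lam = torch.distributions.Beta(alpha, alpha).sample()
        return lam * ct + (1 - lam) * pet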
Head and Neck Tumor Segmentation with Deeply-Supervised 3D UNet and Progression-Free Survival Prediction with Linear Model
Abstract
Accurate segmentation of Head and Neck (H&N) tumors has important clinical relevance in disease characterization, holding strong potential for better cancer treatment planning and optimized patient care. In recent times, deep learning-based models have been able to perform medical image segmentation tasks effectively and accurately, eliminating the problems associated with manual annotation of regions of interest (ROI), such as significant human effort and inter-observer variability. For H&N tumors, FDG-PET and CT carry complementary metabolic and structural information, and the fusion of these modalities was explored in this study, leading to a significant enhancement in performance with the combined data. Furthermore, a deep supervision technique was applied to the segmentation network, in which the loss is computed at multiple layers, allowing gradients to be injected deeper into the network and facilitating training. Our proposed segmentation methods yield promising results, with a Dice Similarity Coefficient (DSC) of 0.731 in our cross-validation experiment. Finally, we developed a linear model for progression-free survival prediction using extracted imaging and non-imaging features.
Kanchan Ghimire, Quan Chen, Xue Feng
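
Deep supervision, as used above, computes the loss at several decoder depths and sums the weighted terms so gradients reach the deeper layers directly. A minimal PyTorch sketch, with illustrative weights and plain binary cross-entropy standing in for the paper's actual loss:

    import torch.nn.functional as F

    def deep_supervision_loss(side_outputs, target, weights=(1.0, 0.5, 0.25)):
        # side_outputs: logits from decoder stages at decreasing resolution;
        # target: binary mask of shape (B, 1, D, H, W), as a float tensor.
        loss = 0.0
        for logits, w in zip(side_outputs, weights):
            tgt = F.interpolate(target, size=logits.shape[2:], mode="nearest")
            loss = loss + w * F.binary_cross_entropy_with_logits(logits, tgt)
        return loss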
Deep Learning Based GTV Delineation and Progression Free Survival Risk Score Prediction for Head and Neck Cancer Patients
Abstract
Head and neck cancer patients can experience significant side effects from therapy. Accurate risk stratification allows for proper determination of the therapeutic dose and minimization of therapy-induced damage to healthy tissue. Radiomics models have proven their power for detecting useful tumor characteristics that can be used for patient prognosis. We studied the ability of deep learning models to segment gross tumor volumes (GTV) and predict a risk score for progression-free survival based on positron emission tomography/computed tomography (PET/CT) images. A 3D Unet-like architecture was trained for segmentation and achieved a Dice similarity score of 0.705 on the test set. A transfer learning approach based on video clip data, allowing full utilization of the three-dimensional information in medical imaging data, was used to predict a tumor progression-free survival score. Our approach was able to predict progression risk with a concordance index of 0.668 on the test data. For clinical application, further studies involving a larger patient cohort are needed.
Daniel M. Lang, Jan C. Peeken, Stephanie E. Combs, Jan J. Wilkens, Stefan Bartzsch
Multi-task Deep Learning for Joint Tumor Segmentation and Outcome Prediction in Head and Neck Cancer
Abstract
Head and Neck (H&N) cancers are among the most common cancers worldwide. Early outcome prediction is particularly important for H&N cancers in clinical practice: if prognostic information can be provided to treatment planning at the earliest stage, the patient’s 5-year survival rate can be significantly improved. However, traditional radiomics methods for outcome prediction are limited by the prior skillset required to hand-craft image features and by their reliance on manual segmentation of primary tumors, which is laborious and error-prone. Multi-task learning is a potential approach to realize outcome prediction and tumor segmentation in a unified model. In this study, we propose a multi-task deep model for joint tumor segmentation and outcome prediction in H&N cancer using positron emission tomography/computed tomography (PET/CT) images, in the context of the MICCAI 2021 HEad and neCK TumOR (HECKTOR) segmentation and outcome prediction challenge. Our model is a multi-task neural network that simultaneously predicts the risk of disease progression and delineates the primary tumors from PET/CT images in an end-to-end manner. Our model was evaluated for outcome prediction and tumor segmentation in cross-validation on the training set (C-index: 0.742; DSC: 0.728) and on the testing set (C-index: 0.671; DSC: 0.745).
Mingyuan Meng, Yige Peng, Lei Bi, Jinman Kim
PET/CT Head and Neck Tumor Segmentation and Progression Free Survival Prediction Using Deep and Machine Learning Techniques
Abstract
Three 2D CNN (Convolutional Neural Network) models, one for each patient plane (axial, sagittal, and coronal), plus two 3D CNN models were ensembled for Head and Neck tumor segmentation in CT and FDG-PET images. A Progression Free Survival (PFS) prediction, based on a Gaussian Process Regression (GPR) model, was designed in Matlab. Radiomic features such as Haralick textures and geometrical and statistical data were extracted from the automatic segmentation, and 35 features were selected from over 1000 candidates. An anti-concordant prediction was made based on the Kaplan-Meier estimator.
Alfonso Martinez-Larraz, Jaime Martí Asenjo, Beatriz Álvarez Rodríguez
Automatic Head and Neck Tumor Segmentation and Progression Free Survival Analysis on PET/CT Images
Abstract
Automatic segmentation is an essential but challenging step in extracting quantitative imaging biomarkers for characterizing head and neck tumors in detection, diagnosis, prognosis, treatment planning, and assessment. The HEad and neCK TumOR Segmentation Challenge 2021 (HECKTOR 2021) provides a common platform for the following three tasks: 1) the automatic segmentation of the primary gross target volume (GTV) in the oropharynx region on FDG-PET and CT images; 2) the prediction of patient outcomes, namely Progression Free Survival (PFS), from the FDG-PET/CT images with automatic segmentation results and the available clinical data; and 3) the prediction of PFS with ground truth annotations. We participated in the first two tasks by further evaluating a fully automatic segmentation network based on an encoder-decoder architecture. To better integrate information across different scales, we propose a dynamic scale attention mechanism that incorporates low-level details with high-level semantics from feature maps at different scales. Radiomic features were extracted from the segmented tumors and used for PFS prediction. Our segmentation framework was trained on the 224 challenge training cases provided by HECKTOR 2021 and achieved an average Dice Similarity Coefficient (DSC) of 0.7693 in cross-validation. On the 101 testing cases, our model achieved an average DSC of 0.7608 and a 95% Hausdorff distance of 3.27 mm. The overall PFS prediction yielded a concordance index (c-index) of 0.53 on the testing dataset (id: deepX).
Yading Yuan, Saba Adabi, Xuefeng Wang
Multimodal PET/CT Tumour Segmentation and Prediction of Progression-Free Survival Using a Full-Scale UNet with Attention
Abstract
Segmentation of head and neck (H&N) tumours and prediction of patient outcome are crucial for disease diagnosis and treatment monitoring. Current development of robust deep learning models is hindered by the lack of large multi-centre, multi-modal data with quality annotations. The MICCAI 2021 HEad and neCK TumOR (HECKTOR) segmentation and outcome prediction challenge creates a platform for comparing methods for segmentation of the primary gross target volume on fluoro-deoxyglucose (FDG)-PET and Computed Tomography images and for prediction of progression-free survival in H&N oropharyngeal cancer. For the segmentation task, we propose a new network based on an encoder-decoder architecture with full inter- and intra-skip connections to take advantage of low-level and high-level semantics at full scales. Additionally, we used Conditional Random Fields as a post-processing step to refine the predicted segmentation maps. We trained multiple neural networks for tumor volume segmentation, and these segmentations were ensembled, achieving an average Dice Similarity Coefficient of 0.75 in cross-validation and 0.76 on the challenge testing set. For the progression-free survival prediction task, we propose a Cox proportional hazards regression combining clinical, radiomic, and deep learning features. Our survival prediction model achieved a concordance index of 0.82 in cross-validation and 0.62 on the challenge testing set.
Emmanuelle Bourigault, Daniel R. McGowan, Abolfazl Mehranian, Bartłomiej W. Papież
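
A Cox proportional hazards regression over mixed clinical, radiomic, and deep features, as described above, can be fitted with the lifelines library. The feature table below is entirely hypothetical and only illustrates the interface:

    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical per-patient table: two example covariates plus the
    # observed PFS time and the progression event indicator.
    df = pd.DataFrame({
        "tumor_volume": [12.3, 4.1, 22.8, 8.7, 15.2, 3.3],
        "deep_feature_1": [0.42, -0.13, 0.91, 0.05, 0.67, -0.40],
        "pfs_days": [430, 1210, 220, 890, 350, 1500],
        "progressed": [1, 0, 1, 0, 1, 0],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="pfs_days", event_col="progressed")
    risk = cph.predict_partial_hazard(df)  # higher value = higher risk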
Advanced Automatic Segmentation of Tumors and Survival Prediction in Head and Neck Cancer
Abstract
In this study, 325 subjects were extracted from the HECKTOR challenge: 224 subjects were used for training, and 101 subjects were employed to validate our models. Positron emission tomography (PET) images were registered to computed tomography (CT) images, enhanced, and cropped. First, 10 fusion techniques were utilized to combine PET and CT information. We also utilized 3D-UNETR (UNET with Transformers) and 3D-UNET to automatically segment head and neck squamous cell carcinoma (HNSCC) tumors and then extracted 215 radiomics features from each region of interest via our Standardized Environment for Radiomics Analysis (SERA) radiomics package. Subsequently, we employed multiple hybrid machine learning systems (HMLS), including 13 dimensionality reduction algorithms and 15 feature selection algorithms linked with 8 survival prediction algorithms, optimized by 5-fold cross-validation and applied to PET only, CT only, and 10 fused datasets. We also employed Ensemble Voting for the prediction task. Test Dice scores and test c-indices were reported to compare models. For segmentation, the highest Dice score of 0.680 was achieved by the Laplacian-pyramid fusion technique linked with 3D-UNET. The highest c-index of 0.680 was obtained via the Ensemble Voting technique for survival prediction. We demonstrated that employing fusion techniques followed by an appropriate automatic segmentation technique results in good performance, and that the Ensemble Voting technique enabled us to achieve the highest prediction performance.
Mohammad R. Salmanpour, Ghasem Hajianfar, Seyed Masoud Rezaeijo, Mohammad Ghaemi, Arman Rahmim
Fusion-Based Head and Neck Tumor Segmentation and Survival Prediction Using Robust Deep Learning Techniques and Advanced Hybrid Machine Learning Systems
Abstract
Multi-level multi-modality fusion radiomics is a promising technique with potential for improved prognostication and segmentation of cancer. This study aims to employ advanced fusion techniques, deep learning segmentation methods, and survival analysis to automatically segment tumors and predict survival outcome in head-and-neck squamous cell carcinoma (HNSCC). 325 patients with HNSCC were extracted from the HECKTOR Challenge; 224 patients were used for training and 101 patients were employed to validate the final models. 5 fusion techniques were utilized to combine PET and CT information, and rigid registration was employed to register PET images to their corresponding CT images. We employed a 3D-UNet architecture and SegResNet (segmentation using autoencoder regularization) to improve segmentation performance. Radiomics features were extracted from each region of interest (ROI) via the standardized SERA package and applied to Hybrid Machine Learning Systems (HMLS) comprising 7 dimensionality reduction algorithms followed by 5 survival prediction algorithms. Dice score and c-index were reported to compare models in the segmentation and prediction tasks, respectively. For the segmentation task, we achieved a Dice score of around 0.63 using the LP-SR Mixture fusion technique (a mixture of the Laplacian Pyramid (LP) and Sparse Representation (SR) fusion techniques) followed by 3D-UNET. Furthermore, LP-SR Mixture linked with the GlmBoost (Gradient Boosting with Component-wise Linear Models) technique improved the c-index to ~0.66. This effort indicates that employing appropriate fusion and deep learning techniques yields the highest performance in the segmentation task; in addition, the use of fusion techniques effectively improves survival prediction performance.
Mehdi Fatan, Mahdi Hosseinzadeh, Dariush Askari, Hossein Sheikhi, Seyed Masoud Rezaeijo, Mohammad R. Salmanpour
Head and Neck Primary Tumor Segmentation Using Deep Neural Networks and Adaptive Ensembling
Abstract
The ability to accurately diagnose and analyze head and neck (H&N) tumors in head and neck cancer (HNC) is critical in the administration of patient specific radiation therapy treatment and predicting patient survivability outcome using radiomics. An automated segmentation method for H&N tumors would greatly assist in optimizing personalized patient treatment plans and allow for accurate feature extraction, via radiomics or other means, to predict patient prognosis. In this work, a three-dimensional UNET network was trained to segment H&N primary tumors using a framework based on nnUNET. Multimodal positron emission tomography (PET) and computed tomography (CT) data from 224 subjects were used for model training. Survival forest models were applied to patient clinical data features in conjunction with features extracted from the segmentation maps to predict risk scores for time to progression events for every patient. The selected segmentation methods demonstrated excellent performance with an average DSC score of 0.78 and 95% Hausdorff distance of 3.14. The random forest model achieved a C-index of 0.66 for predicting the Progression Free Survival (PFS) endpoint.
Gowtham Krishnan Murugesan, Eric Brunner, Diana McCrumb, Jithendra Kumar, Jeff VanOss, Stephen Moore, Anderson Peck, Anthony Chang
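
A random survival forest of the kind applied above is available in scikit-survival. The sketch below uses synthetic features and outcomes purely to illustrate the interface:

    import numpy as np
    from sksurv.ensemble import RandomSurvivalForest
    from sksurv.util import Surv

    rng = np.random.default_rng(0)
    X = rng.random((224, 20))  # stand-in for clinical + segmentation features
    y = Surv.from_arrays(event=rng.random(224) > 0.5,
                         time=rng.uniform(100, 2000, 224))

    rsf = RandomSurvivalForest(n_estimators=200, random_state=0)
    rsf.fit(X, y)
    risk_scores = rsf.predict(X)  # higher score = higher progression risk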
Segmentation and Risk Score Prediction of Head and Neck Cancers in PET/CT Volumes with 3D U-Net and Cox Proportional Hazard Neural Networks
Abstract
We utilized a 3D nnU-Net model with residual layers supplemented by squeeze and excitation (SE) normalization for tumor segmentation from the PET/CT images provided by the Head and Neck Tumor segmentation challenge (HECKTOR). Our proposed loss function incorporates the Unified Focal and Mumford-Shah losses to take advantage of distribution-, region-, and boundary-based loss functions. Leave-one-center-out cross-validation showed a segmentation performance of 0.82 average Dice score (DSC) and 3.16 median Hausdorff Distance (HD), and our results on the test set achieved 0.77 DSC and 3.01 HD. Following lesion segmentation, we proposed training a case-control proportional hazards Cox model with an MLP neural net backbone to predict the hazard risk score for each discrete lesion. This hazard risk prediction model (CoxCC) was trained on PET/CT radiomic features extracted from the segmented lesions, patient and lesion demographics, and encoder features from the penultimate layer of a multi-input 2D PET/CT convolutional neural network tasked with predicting time-to-event for each lesion. A 10-fold cross-validated CoxCC model resulted in a c-index validation score of 0.89 and a c-index score of 0.61 on the HECKTOR challenge test dataset.
Fereshteh Yousefirizi, Ian Janzen, Natalia Dubljevic, Yueh-En Liu, Chloe Hill, Calum MacAulay, Arman Rahmim
Dual-Path Connected CNN for Tumor Segmentation of Combined PET-CT Images and Application to Survival Risk Prediction
Abstract
Automated segmentation methods have the potential to support diagnosis and prognosis from medical images in clinical practice. For the HEad and neCK tumOR (HECKTOR) segmentation and outcome prediction challenge in PET/CT images at MICCAI 2021, we proposed a novel framework to segment head and neck tumors by leveraging multi-modal imaging with a dual-path cross-attention module and ensemble modeling with majority voting. In addition, we extended our work to survival analysis, using a random survival forest model to predict tumor prognosis from clinical information and the segmented tumor volume. Our segmentation model achieved a Dice coefficient and Hausdorff distance of 0.7367 and 3.2700, respectively. Our survival model showed a concordance index (C-index) of 0.6459.
Jiyeon Lee, Jimin Kang, Emily Yunha Shin, Regina E. Y. Kim, Minho Lee
Deep Supervoxel Segmentation for Survival Analysis in Head and Neck Cancer Patients
Abstract
Risk assessment techniques, in particular Survival Analysis, are crucial to providing personalised treatment to Head and Neck (H&N) cancer patients. These techniques usually rely on accurate segmentation of the Gross Tumour Volume (GTV) region in Computed Tomography (CT) and Positron Emission Tomography (PET) images. This is a challenging task due to the low contrast in CT and the lack of anatomical information in PET. Recent approaches based on Convolutional Neural Networks (CNNs) have demonstrated automatic 3D segmentation of the GTV, albeit with high memory footprints (≥10 GB/epoch). In this work, we propose an efficient solution (~3 GB/epoch) for the segmentation task in the HECKTOR 2021 challenge. We achieve this by combining the Simple Linear Iterative Clustering (SLIC) algorithm with Graph Convolution Networks to segment the GTV, resulting in a Dice score of 0.63 on the challenge test set. Furthermore, we demonstrate how shape descriptors of the resulting segmentations are relevant covariates in the Weibull Accelerated Failure Time model, which results in a Concordance Index of 0.59 for Task 2 in the HECKTOR 2021 challenge.
Ángel Víctor Juanco-Müller, João F. C. Mota, Keith Goatman, Corné Hoogendoorn
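
The first stage of the pipeline above, SLIC supervoxels, is available in scikit-image; the segment count and compactness below are illustrative, and channel_axis=None marks the volume as single-channel (older scikit-image releases use multichannel=False instead).

    import numpy as np
    from skimage.segmentation import slic

    volume = np.random.rand(144, 144, 144)  # stand-in for a PET/CT volume
    supervoxels = slic(volume, n_segments=2000, compactness=0.1,
                       channel_axis=None)  # label volume, one id per supervoxel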
A Hybrid Radiomics Approach to Modeling Progression-Free Survival in Head and Neck Cancers
Abstract
We present our contribution to the HECKTOR 2021 challenge. We created a Survival Random Forest model based on clinical features and a few radiomics features extracted with and without the given tumor masks, for Task 3 and Task 2 of the challenge, respectively. To decide which radiomics features to include in the model, we proceeded both by automatic feature selection, using several established methods, and by literature review of radiomics approaches for similar tasks. Our best performing model includes one feature selected from the literature (Metabolic Tumor Volume derived from the FDG-PET image), one via stability selection (Inverse Variance of the Gray Level Co-occurrence Matrix of the CT image), and one selected via permutation-based feature importance (Tumor Sphericity). This hybrid approach turns out to be more robust to overfitting than models based on automatic feature selection. We also show that a simple ROI definition for the radiomics features, derived by thresholding the Standard Uptake Value in the FDG-PET images, outperforms the given expert tumor delineation in our case.
Sebastian Starke, Dominik Thalmeier, Peter Steinbach, Marie Piraud
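
The simple ROI definition mentioned above, thresholding the Standard Uptake Value in the FDG-PET image, amounts to a single comparison; the 2.5 cutoff below is a common literature choice, not necessarily the authors' value.

    import numpy as np

    def suv_threshold_roi(pet_suv, threshold=2.5):
        # Binary ROI from the PET volume: keep voxels whose SUV
        # reaches the (assumed) cutoff.
        return (pet_suv >= threshold).astype(np.uint8)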
An Ensemble Approach for Patient Prognosis of Head and Neck Tumor Using Multimodal Data
Abstract
Accurate prognosis of a tumor can help doctors provide a proper course of treatment and, therefore, save many lives. Traditional machine learning algorithms have been eminently useful in crafting prognostic models over the last few decades. Recently, deep learning algorithms have shown significant improvement when developing diagnosis and prognosis solutions for different healthcare problems. However, most of these solutions rely solely on either imaging or clinical data. Utilizing patient tabular data, such as demographics and medical history, alongside imaging data in a multimodal approach to prognosis has started to gain more interest recently and has the potential to create more accurate solutions. The main issue when using clinical and imaging data to train a deep learning model is deciding how to combine the information from these sources. We propose a multimodal network that ensembles deep multi-task logistic regression (MTLR), Cox proportional hazards (CoxPH), and CNN models to predict prognostic outcomes for patients with head and neck tumors using the patients’ clinical and imaging (CT and PET) data. Features from CT and PET scans are fused and then combined with the patients’ electronic health records for the prediction. The proposed model is trained and tested on 224 and 101 patient records, respectively. Experimental results show that our proposed ensemble solution achieves a C-index of 0.72 on the HECKTOR test set, which earned us first place in the prognosis task of the HECKTOR challenge. The full implementation based on PyTorch is available at https://github.com/numanai/BioMedIA-Hecktor2021.
Numan Saeed, Roba Al Majzoub, Ikboljon Sobirov, Mohammad Yaqub
Progression Free Survival Prediction for Head and Neck Cancer Using Deep Learning Based on Clinical and PET/CT Imaging Data
Abstract
Determining progression-free survival (PFS) for head and neck squamous cell carcinoma (HNSCC) patients is a challenging but pertinent task that could help stratify patients for improved overall outcomes. PET/CT images provide a rich source of anatomical and metabolic data for potential clinical biomarkers that would inform treatment decisions and could help improve PFS. In this study, we participate in the 2021 HECKTOR Challenge to predict PFS in a large dataset of HNSCC PET/CT images using deep learning approaches. We develop a series of deep learning models based on the DenseNet architecture using a negative log-likelihood loss function that utilizes PET/CT images and clinical data as separate input channels to predict PFS in days. Internal model validation based on 10-fold cross-validation using the training data (N = 224) yielded C-index values up to 0.622 (without) and 0.842 (with) censoring status considered in C-index computation, respectively. We then implemented model ensembling approaches based on the training data cross-validation folds to predict the PFS of the test set patients (N = 101). External validation on the test set for the best ensembling method yielded a C-index value of 0.694, placing 2nd in the competition. Our results are a promising example of how deep learning approaches can effectively utilize imaging and clinical data for medical outcome prediction in HNSCC, but further work in optimizing these processes is needed.
Mohamed A. Naser, Kareem A. Wahid, Abdallah S. R. Mohamed, Moamen Abobakr Abdelaal, Renjie He, Cem Dede, Lisanne V. van Dijk, Clifton D. Fuller
Combining Tumor Segmentation Masks with PET/CT Images and Clinical Data in a Deep Learning Framework for Improved Prognostic Prediction in Head and Neck Squamous Cell Carcinoma
Abstract
PET/CT images provide a rich data source for clinical prediction models in head and neck squamous cell carcinoma (HNSCC). Deep learning models often use images in an end-to-end fashion with clinical data or no additional input for predictions. However, in the context of HNSCC, the tumor region of interest may be an informative prior in the generation of improved prediction performance. In this study, we utilize a deep learning framework based on a DenseNet architecture to combine PET images, CT images, primary tumor segmentation masks, and clinical data as separate channels to predict progression-free survival (PFS) in days for HNSCC patients. Through internal validation (10-fold cross-validation) based on a large set of training data provided by the 2021 HECKTOR Challenge, we achieve a mean C-index of 0.855 ± 0.060 and 0.650 ± 0.074 when observed events are and are not included in the C-index calculation, respectively. Ensemble approaches applied to cross-validation folds yield C-index values up to 0.698 in the independent test set (external validation), leading to a 1st place ranking on the competition leaderboard. Importantly, the value of the added segmentation mask is underscored in both internal and external validation by an improvement of the C-index when compared to models that do not utilize the segmentation mask. These promising results highlight the utility of including segmentation masks as additional input channels in deep learning pipelines for clinical outcome prediction in HNSCC.
Kareem A. Wahid, Renjie He, Cem Dede, Abdallah S. R. Mohamed, Moamen Abobakr Abdelaal, Lisanne V. van Dijk, Clifton D. Fuller, Mohamed A. Naser
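
Feeding the tumor mask as an additional input channel alongside PET and CT, as described above, is a concatenation along the channel axis. A PyTorch sketch with illustrative shapes:

    import torch

    pet = torch.rand(1, 1, 96, 96, 96)   # hypothetical PET patch
    ct = torch.rand(1, 1, 96, 96, 96)    # matching CT patch
    mask = torch.randint(0, 2, (1, 1, 96, 96, 96)).float()  # GTVt mask

    x = torch.cat([pet, ct, mask], dim=1)  # (1, 3, 96, 96, 96) network input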
Self-supervised Multi-modality Image Feature Extraction for the Progression Free Survival Prediction in Head and Neck Cancer
Abstract
Long-term survival of oropharyngeal squamous cell carcinoma (OPSCC) patients is quite poor. Accurate prediction of Progression Free Survival (PFS) before treatment could make early identification of high-risk patients feasible, making it possible to intensify or de-intensify treatments for high- or low-risk patients. In this work, we propose a deep learning-based pipeline for PFS prediction consisting of three parts. First, we utilize a pyramid autoencoder for image feature extraction from both CT and PET scans. Second, a feed-forward feature selection method is used to remove redundant features from the extracted image features as well as the clinical features. Finally, we feed all selected features to a DeepSurv model for survival analysis, which outputs a PFS risk score for each individual patient. The whole pipeline was trained on 224 OPSCC patients. We achieved average C-indices of 0.7806 and 0.7967 on the independent validation set for Task 2 and Task 3, respectively; the C-indices achieved on the test set are 0.6445 and 0.6373, respectively. This demonstrates that our proposed approach has potential for PFS prediction and possibly for other survival endpoints.
Baoqiang Ma, Jiapan Guo, Alessia De Biase, Nikos Sourlos, Wei Tang, Peter van Ooijen, Stefan Both, Nanna Maria Sijtsema
Comparing Deep Learning and Conventional Machine Learning for Outcome Prediction of Head and Neck Cancer in PET/CT
Abstract
Prediction of cancer treatment outcomes based on baseline patient characteristics is a challenging but necessary step towards more personalized treatments with the aim of increased survival and quality of life. The HEad and neCK TumOR Segmentation Challenge (HECKTOR) 2021 comprises two major tasks: auto-segmentation of GTVt in FDG-PET/CT images and outcome prediction for oropharyngeal head and neck cancer patients. The present study compared a deep learning regressor utilizing PET/CT images to conventional machine learning methods using clinical factors and radiomics features for the patient outcome prediction task. With a concordance index of 0.64, the conventional machine learning approach trained on clinical factors had the best test performance. Team: Aarhus_Oslo
Bao-Ngoc Huynh, Jintao Ren, Aurora Rosvoll Groendahl, Oliver Tomic, Stine Sofia Korreman, Cecilia Marie Futsaether
Backmatter
Metadata
Title
Head and Neck Tumor Segmentation and Outcome Prediction
Edited by
Vincent Andrearczyk
Valentin Oreiller
Mathieu Hatt
Adrien Depeursinge
Copyright year
2022
Electronic ISBN
978-3-030-98253-9
Print ISBN
978-3-030-98252-2
DOI
https://doi.org/10.1007/978-3-030-98253-9