2020 | Original Paper | Book Chapter

Attention, Suggestion and Annotation: A Deep Active Learning Framework for Biomedical Image Segmentation


Abstract

Despite their great success, deep learning based segmentation methods still face a critical obstacle: the difficulty of acquiring sufficient training data due to high annotation costs. In this paper, we propose a deep active learning framework that combines the attention gated fully convolutional network (ag-FCN) and the distribution discrepancy based active learning algorithm (dd-AL) to significantly reduce the annotation effort by iteratively annotating the most informative samples to train the ag-FCN for better segmentation performance. Our framework is evaluated on the 2015 MICCAI Gland Segmentation dataset and the 2017 MICCAI 6-month infant brain MRI Segmentation dataset. Experimental results show that our framework achieves state-of-the-art segmentation performance using only a portion of the training data.
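
The framework described in the abstract follows a suggest-and-annotate loop: train on the current labeled set, score the remaining unlabeled samples, and hand the most informative ones to an annotator. The sketch below illustrates only the shape of such a loop; all names (`train_model`, `informativeness`, `ROUNDS`, `BATCH`) are placeholders for illustration, and the paper's actual dd-AL criterion combines informativeness with a distribution-discrepancy term that is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_model(labeled_ids):
    """Placeholder for training the ag-FCN (or an ensemble of them) on the labeled set."""
    return {"n_labeled": len(labeled_ids)}

def informativeness(model, sample_id):
    """Placeholder score; dd-AL additionally prefers suggestion batches whose
    feature distribution matches that of the whole unlabeled set."""
    return rng.random()

# Toy pool: 100 training samples, 10 of which are labeled up front.
unlabeled = list(range(100))
labeled = [unlabeled.pop() for _ in range(10)]

ROUNDS, BATCH = 5, 8
for r in range(ROUNDS):
    model = train_model(labeled)
    # Suggest the BATCH most informative unlabeled samples for annotation.
    scores = {s: informativeness(model, s) for s in unlabeled}
    suggested = sorted(unlabeled, key=scores.get, reverse=True)[:BATCH]
    for s in suggested:          # an expert would annotate these suggestions
        unlabeled.remove(s)
        labeled.append(s)
    print(f"round {r}: {len(labeled)} labeled samples")
```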


Footnotes
1
In this paper, the labeled set and unlabeled set refer to the labeled and unlabeled portions of a training dataset, respectively.
 
2
The encoding part of each ag-FCN can be used as a feature extractor. Given an input image fed to the K ag-FCNs, the average of the outputs of Layer 6 across these ag-FCNs can be viewed as a high-dimensional feature representation of the input image.
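
A minimal sketch of that averaging step, assuming PyTorch-style modules, is shown below; `encoder_up_to_layer6` is a hypothetical accessor standing in for however the Layer-6 output is exposed, not the authors' API.

```python
import torch

def image_descriptor(ag_fcns, image):
    """Average the Layer-6 encoder outputs of the K ag-FCNs to obtain one
    high-dimensional descriptor of the input image (footnote 2)."""
    feats = []
    with torch.no_grad():
        for net in ag_fcns:
            f = net.encoder_up_to_layer6(image)   # hypothetical accessor, not the authors' code
            feats.append(f.flatten(start_dim=1))  # one feature vector per image
    return torch.stack(feats, dim=0).mean(dim=0)  # average over the K networks
```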
 
3
We replace all 2D operations with their 3D counterparts (e.g., 2D convolution \(\rightarrow\) 3D convolution).
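
As a generic illustration (not the authors' code), in a PyTorch implementation this swap amounts to substituting each 2D layer with its 3D counterpart:

```python
import torch.nn as nn

# 2D building blocks                                   # 3D counterparts for volumetric MRI
conv2d = nn.Conv2d(64, 128, kernel_size=3, padding=1)
conv3d = nn.Conv3d(64, 128, kernel_size=3, padding=1)

pool2d = nn.MaxPool2d(kernel_size=2)
pool3d = nn.MaxPool3d(kernel_size=2)

bn2d = nn.BatchNorm2d(128)
bn3d = nn.BatchNorm3d(128)
```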
 
Metadata
Title
Attention, Suggestion and Annotation: A Deep Active Learning Framework for Biomedical Image Segmentation
Authors
Haohan Li
Zhaozheng Yin
Copyright year
2020
DOI
https://doi.org/10.1007/978-3-030-59710-8_1
