
2019 | Original Paper | Book Chapter

Doubly Weak Supervision of Deep Learning Models for Head CT

Authors: Khaled Saab, Jared Dunnmon, Roger Goldman, Alex Ratner, Hersh Sagreiya, Christopher Ré, Daniel Rubin

Published in: Medical Image Computing and Computer Assisted Intervention – MICCAI 2019

Publisher: Springer International Publishing


Abstract

Recent deep learning models for intracranial hemorrhage (ICH) detection on computed tomography of the head have relied upon large datasets hand-labeled at either the full-scan level or at the individual slice-level. Though these models have demonstrated favorable empirical performance, the hand-labeled datasets upon which they rely are time-consuming and expensive to create. Further, given limited time, modelers must currently make an explicit choice between scan-level supervision, which leverages large numbers of patients, and slice-level supervision, which yields clinically insightful output in the axial and in-plane dimensions. In this work, we propose doubly weak supervision, where we (1) weakly label at the scan-level to scalably incorporate data from large populations and (2) model the problem using an attention-based multiple-instance learning approach that can provide useful signal at both axial and in-plane granularities, even with scan-level supervision. Models trained using this doubly weak supervision approach yield an average ROC-AUC score of 0.91, which is competitive with those of models trained using large, hand-labeled datasets, while requiring less than 10 h of clinician labeling time. Further, our models place large attention weights on the same slices used by the clinician to arrive at the ICH classification, and occlusion maps indicate heavy influence from clinically salient in-plane regions.
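The two ingredients of this approach can be sketched briefly in code. The snippet below is a minimal illustration under stated assumptions, not the authors' released implementation: a Snorkel-style labeling function that assigns a weak scan-level label from a radiology report (the keyword rules are hypothetical), and a gated attention-MIL head, in the spirit of attention-based multiple-instance learning, that pools per-slice CNN embeddings into a single scan-level prediction while exposing per-slice attention weights. Names such as AttentionMILHead, lf_hemorrhage_keywords, and feat_dim are illustrative assumptions.

    # Illustrative sketch (not the authors' implementation): scan-level weak labeling
    # plus attention-based MIL pooling over per-slice embeddings.
    import torch
    import torch.nn as nn

    ABSTAIN, NEGATIVE, POSITIVE = -1, 0, 1


    def lf_hemorrhage_keywords(report_text: str) -> int:
        """Snorkel-style labeling function: votes POSITIVE if the radiology report
        mentions hemorrhage without an explicit negation, otherwise abstains.
        The keyword rules here are hypothetical examples."""
        text = report_text.lower()
        if "no evidence of hemorrhage" in text or "no acute hemorrhage" in text:
            return NEGATIVE
        if "hemorrhage" in text or "hematoma" in text:
            return POSITIVE
        return ABSTAIN


    class AttentionMILHead(nn.Module):
        """Pools a bag of slice embeddings into one scan-level ICH logit,
        returning per-slice attention weights for axial localization."""

        def __init__(self, feat_dim: int = 512, attn_dim: int = 128):
            super().__init__()
            # Gated attention assigns one score per slice (instance).
            self.attn_V = nn.Linear(feat_dim, attn_dim)
            self.attn_U = nn.Linear(feat_dim, attn_dim)
            self.attn_w = nn.Linear(attn_dim, 1)
            self.classifier = nn.Linear(feat_dim, 1)  # scan-level ICH logit

        def forward(self, slice_feats: torch.Tensor):
            # slice_feats: (num_slices, feat_dim), one 2D-CNN embedding per axial slice
            gate = torch.tanh(self.attn_V(slice_feats)) * torch.sigmoid(self.attn_U(slice_feats))
            attn = torch.softmax(self.attn_w(gate), dim=0)   # (num_slices, 1) attention weights
            scan_feat = (attn * slice_feats).sum(dim=0)      # attention-weighted pooling
            return self.classifier(scan_feat), attn.squeeze(-1)

Training such a head end to end against only the weak scan-level labels is what yields the slice-level attention weights, and hence the axial localization, described above; the in-plane signal comes separately from occlusion maps.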


Footnotes
1. Evaluated over the 15 cases with radiologist-provided segmentation.
 
Metadata
Title: Doubly Weak Supervision of Deep Learning Models for Head CT
Authors: Khaled Saab, Jared Dunnmon, Roger Goldman, Alex Ratner, Hersh Sagreiya, Christopher Ré, Daniel Rubin
Copyright Year: 2019
DOI: https://doi.org/10.1007/978-3-030-32248-9_90