
2021 | Original Paper | Book Chapter

Federated Contrastive Learning for Decentralized Unlabeled Medical Images


Abstract

A label-efficient paradigm in computer vision is based on self-supervised contrastive pre-training on unlabeled data followed by fine-tuning with a small number of labels. Making practical use of a federated computing environment in the clinical domain and learning on medical images pose specific challenges. In this work, we propose FedMoCo, a robust federated contrastive learning (FCL) framework, which makes efficient use of decentralized unlabeled medical data. FedMoCo has two novel modules: metadata transfer, an inter-node statistical data augmentation module, and self-adaptive aggregation, an aggregation module based on representational similarity analysis. To the best of our knowledge, this is the first FCL work on medical images. Our experiments show that FedMoCo can consistently outperform FedAvg, a seminal federated learning framework, in extracting meaningful representations for downstream tasks. We further show that FedMoCo can substantially reduce the amount of labeled data required in a downstream task, such as COVID-19 detection, to achieve reasonable performance.
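To make the described pipeline concrete, the following sketch shows a generic federated contrastive pre-training loop in PyTorch: each node optimizes an InfoNCE-style loss on two augmented views of its own unlabeled images, and a server averages the resulting encoder weights. The encoder, loss, augmentations, and the uniform aggregation weights are illustrative assumptions; they stand in for, but do not reproduce, FedMoCo's metadata transfer and RSA-based self-adaptive aggregation modules.

import copy
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

def info_nce(q, k, temperature=0.07):
    # InfoNCE over a batch: the positive for query i is key i; all other keys act as negatives.
    q = F.normalize(q, dim=1)
    k = F.normalize(k, dim=1)
    logits = q @ k.t() / temperature          # (B, B) cosine-similarity logits
    labels = torch.arange(q.size(0))          # diagonal entries are the positives
    return F.cross_entropy(logits, labels)

def local_contrastive_step(encoder, view1, view2, lr=1e-3):
    # One self-supervised update on a node's own unlabeled batch (two augmented views).
    opt = torch.optim.SGD(encoder.parameters(), lr=lr)
    loss = info_nce(encoder(view1), encoder(view2))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def aggregate(state_dicts, weights):
    # Weighted parameter averaging (FedAvg-style). FedMoCo instead derives the weights
    # from representational similarity analysis; uniform weights are a placeholder here.
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts)).to(avg[key].dtype)
    return avg

if __name__ == "__main__":
    nodes = [resnet18(num_classes=128) for _ in range(3)]      # one encoder per simulated node
    for encoder in nodes:
        views = torch.randn(8, 3, 224, 224)                    # stand-in for a local image batch
        local_contrastive_step(encoder, views, views + 0.01 * torch.randn_like(views))
    global_state = aggregate([e.state_dict() for e in nodes], weights=[1 / 3] * 3)
    for encoder in nodes:
        encoder.load_state_dict(global_state)                  # broadcast the aggregated model back

In a full round-based protocol this local-update/aggregate/broadcast cycle would simply repeat; the fine-tuning stage with a small labeled set is omitted.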


Footnotes
1
For large-scale datasets, the mean and covariance could be estimated by using random samples from the population. Here, we use the notation \(n_k\) for simplicity.
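
For illustration only, the subsampled moment estimation mentioned in this footnote can be sketched as follows; the function and variable names are placeholders rather than notation from the paper.

import numpy as np

def sampled_moments(features, sample_size, seed=0):
    # Estimate the per-feature mean and covariance from a random subsample
    # instead of the full (potentially very large) node-local dataset.
    rng = np.random.default_rng(seed)
    idx = rng.choice(features.shape[0], size=sample_size, replace=False)
    subset = features[idx]
    return subset.mean(axis=0), np.cov(subset, rowvar=False)

# Example: 100,000 feature vectors of dimension 128, summarized from 2,000 samples.
mu, sigma = sampled_moments(np.random.randn(100_000, 128), sample_size=2_000)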
 
Metadata
Title
Federated Contrastive Learning for Decentralized Unlabeled Medical Images
Authors
Nanqing Dong
Irina Voiculescu
Copyright year
2021
DOI
https://doi.org/10.1007/978-3-030-87199-4_36