Published in: International Journal of Computer Vision 2/2021

14 September 2020

A Comprehensive Analysis of Weakly-Supervised Semantic Segmentation in Different Image Domains

Authors: Lyndon Chan, Mahdi S. Hosseini, Konstantinos N. Plataniotis

Abstract

Recently proposed methods for weakly-supervised semantic segmentation have achieved impressive performance in predicting pixel classes despite being trained with only image labels, which lack positional information. Because image annotations are cheaper and quicker to generate, weak supervision is more practical than full supervision for training segmentation algorithms. These methods have been predominantly developed to solve the background separation and partial segmentation problems presented by natural scene images, and it is unclear whether they can simply be transferred to other domains with different characteristics, such as histopathology and satellite images, and still perform well. This paper evaluates state-of-the-art weakly-supervised semantic segmentation methods on natural scene, histopathology, and satellite image datasets and analyzes how to determine which method is most suitable for a given dataset. Our experiments indicate that histopathology and satellite images present a different set of problems for weakly-supervised semantic segmentation than natural scene images, such as ambiguous boundaries and class co-occurrence. Methods perform well on the datasets they were developed for, but tend to perform poorly on other datasets. We present some practical techniques for applying these methods to unseen datasets and argue that more work is needed for a generalizable approach to weakly-supervised semantic segmentation. Our full code implementation is available on GitHub: https://github.com/lyndonchan/wsss-analysis.
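The common starting point for the methods the abstract describes is that a classifier trained only on image-level labels still localizes each class in its final convolutional activations; thresholding a class activation map (CAM) yields the coarse segmentation seed that later stages refine. A minimal NumPy sketch of that seeding step (the feature maps, weights, and threshold below are hypothetical toy values, not the paper's pipeline):

```python
import numpy as np

def cam_seed(feature_maps, class_weights, threshold=0.5):
    """Turn classifier activations into a coarse segmentation seed.

    feature_maps:  (C, H, W) activations from the last conv layer
    class_weights: (C,) classifier weights for one target class
    Returns a boolean (H, W) mask of likely foreground pixels.
    """
    # Channel-weighted sum -> raw localization map of shape (H, W)
    cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))
    cam = np.maximum(cam, 0)            # keep only positive class evidence
    cam = cam / (cam.max() + 1e-8)      # normalize to [0, 1]
    return cam >= threshold             # threshold into a seed mask

# Toy example: 2 channels on a 4x4 grid; the target class
# depends only on channel 0, which fires in the top-left corner.
feats = np.zeros((2, 4, 4))
feats[0, :2, :2] = 1.0
weights = np.array([1.0, 0.0])
seed = cam_seed(feats, weights)         # True only in the top-left 2x2 block
```

Seeds like this recover rough object locations but not boundaries, which is why the surveyed methods add refinement stages (e.g. dense CRFs or affinity propagation) and why domain traits such as ambiguous boundaries in histopathology make the problem harder.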


Zurück zum Zitat Shapiro, L., & Stockman, G. (2000). Computer vision (pp. 305–306). London: Pearson. Shapiro, L., & Stockman, G. (2000). Computer vision (pp. 305–306). London: Pearson.
Zurück zum Zitat Shimoda, W., & Yanai, K. (2016). Distinct class-specific saliency maps for weakly supervised semantic segmentation. In European conference on computer vision, Springer, pp. 218–234. Shimoda, W., & Yanai, K. (2016). Distinct class-specific saliency maps for weakly supervised semantic segmentation. In European conference on computer vision, Springer, pp. 218–234.
Zurück zum Zitat Shkolyar, A., Gefen, A., Benayahu, D., & Greenspan, H. (2015). Automatic detection of cell divisions (mitosis) in live-imaging microscopy images using convolutional neural networks. In 2015 37th annual international conference of the IEEE engineering in medicine and biology society (EMBC), pp. 743–746. Shkolyar, A., Gefen, A., Benayahu, D., & Greenspan, H. (2015). Automatic detection of cell divisions (mitosis) in live-imaging microscopy images using convolutional neural networks. In 2015 37th annual international conference of the IEEE engineering in medicine and biology society (EMBC), pp. 743–746.
Zurück zum Zitat Shotton, J., Winn, J., Rother, C., & Criminisi, A. (2006). Textonboost: Joint appearance, shape and context modeling for multi-class object recognition and segmentation. In European conference on computer vision, Springer, pp. 1–15. Shotton, J., Winn, J., Rother, C., & Criminisi, A. (2006). Textonboost: Joint appearance, shape and context modeling for multi-class object recognition and segmentation. In European conference on computer vision, Springer, pp. 1–15.
Zurück zum Zitat Sirinukunwattana, K., Pluim, J. P., Chen, H., Qi, X., Heng, P. A., Guo, Y. B., et al. (2017). Gland segmentation in colon histology images: The glas challenge contest. Medical Image Analysis, 35, 489–502.CrossRef Sirinukunwattana, K., Pluim, J. P., Chen, H., Qi, X., Heng, P. A., Guo, Y. B., et al. (2017). Gland segmentation in colon histology images: The glas challenge contest. Medical Image Analysis, 35, 489–502.CrossRef
Zurück zum Zitat Smith, L. N. (2017). Cyclical learning rates for training neural networks. In 2017 IEEE winter conference on applications of computer vision (WACV), IEEE, pp. 464–472. Smith, L. N. (2017). Cyclical learning rates for training neural networks. In 2017 IEEE winter conference on applications of computer vision (WACV), IEEE, pp. 464–472.
Zurück zum Zitat Tian, C., Li, C., & Shi, J. (2018). Dense fusion classmate network for land cover classification. In CVPR workshops, pp. 192–196. Tian, C., Li, C., & Shi, J. (2018). Dense fusion classmate network for land cover classification. In CVPR workshops, pp. 192–196.
Zurück zum Zitat Tian, C., Li, C., & Shi, J. (2019). Dense fusion classmate network for land cover classification. arXiv preprint arXiv:191108169. Tian, C., Li, C., & Shi, J. (2019). Dense fusion classmate network for land cover classification. arXiv preprint arXiv:​191108169.
Zurück zum Zitat Tsoumakas, G., Katakis, I., & Vlahavas, I. (2009). Mining multi-label data. In Data mining and knowledge discovery handbook, Springer, pp. 667–685. Tsoumakas, G., Katakis, I., & Vlahavas, I. (2009). Mining multi-label data. In Data mining and knowledge discovery handbook, Springer, pp. 667–685.
Zurück zum Zitat Wang, X., Chen, H., Gan, C. H. K., Lin, H., Dou, Q., Huang, Q., Cai, M., & Heng, P. A. (2018). Weakly supervised learning for whole slide lung cancer image classification. In IEEE transactions on cybernetics. Wang, X., Chen, H., Gan, C. H. K., Lin, H., Dou, Q., Huang, Q., Cai, M., & Heng, P. A. (2018). Weakly supervised learning for whole slide lung cancer image classification. In IEEE transactions on cybernetics.
Zurück zum Zitat Wang, P., Huang, X., Cheng, X., Zhou, D., Geng, Q., & Yang, R. (2019b). The apolloscape open dataset for autonomous driving and its application. In IEEE transactions on pattern analysis and machine intelligence. Wang, P., Huang, X., Cheng, X., Zhou, D., Geng, Q., & Yang, R. (2019b). The apolloscape open dataset for autonomous driving and its application. In IEEE transactions on pattern analysis and machine intelligence.
Zurück zum Zitat Wang, J., Sun, K., Cheng, T., Jiang, B., Deng, C., Zhao, Y., Liu, D., Mu, Y., Tan, M., Wang, X., et al. (2019a). Deep high-resolution representation learning for visual recognition. arXiv preprint arXiv:190807919. Wang, J., Sun, K., Cheng, T., Jiang, B., Deng, C., Zhao, Y., Liu, D., Mu, Y., Tan, M., Wang, X., et al. (2019a). Deep high-resolution representation learning for visual recognition. arXiv preprint arXiv:​190807919.
Zurück zum Zitat Wei, Y., Feng, J., Liang, X., Cheng, M., Zhao, Y., & Yan, S. (2017). Object region mining with adversarial erasing: A simple classification to semantic segmentation approach. In CoRR. arXiv preprint arXiv:1703.08448 Wei, Y., Feng, J., Liang, X., Cheng, M., Zhao, Y., & Yan, S. (2017). Object region mining with adversarial erasing: A simple classification to semantic segmentation approach. In CoRR. arXiv preprint arXiv:​1703.​08448
Zurück zum Zitat Wei, Y., Xiao, H., Shi, H., Jie, Z., Feng, J., & Huang, T. S. (2018). Revisiting dilated convolution: A simple approach for weakly-and semi-supervised semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7268–7277. Wei, Y., Xiao, H., Shi, H., Jie, Z., Feng, J., & Huang, T. S. (2018). Revisiting dilated convolution: A simple approach for weakly-and semi-supervised semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7268–7277.
Zurück zum Zitat Xia, F., Wang, P., Chen, X., & Yuille, A. L. (2017). Joint multi-person pose estimation and semantic part segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 6769–6778. Xia, F., Wang, P., Chen, X., & Yuille, A. L. (2017). Joint multi-person pose estimation and semantic part segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 6769–6778.
Zurück zum Zitat Xie, J., Liu, R., Luttrell, I., Zhang, C., et al. (2019). Deep learning based analysis of histopathological images of breast cancer. Frontiers in Genetics, 10, 80.CrossRef Xie, J., Liu, R., Luttrell, I., Zhang, C., et al. (2019). Deep learning based analysis of histopathological images of breast cancer. Frontiers in Genetics, 10, 80.CrossRef
Zurück zum Zitat Xu, J., Schwing, A. G., & Urtasun, R. (2015). Learning to segment under various forms of weak supervision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3781–3790. Xu, J., Schwing, A. G., & Urtasun, R. (2015). Learning to segment under various forms of weak supervision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3781–3790.
Zurück zum Zitat Xu, Y., Jia, Z., Wang, L. B., Ai, Y., Zhang, F., Lai, M., et al. (2017). Large scale tissue histopathology image classification, segmentation, and visualization via deep convolutional activation features. BMC Bioinformatics, 18(1), 281.CrossRef Xu, Y., Jia, Z., Wang, L. B., Ai, Y., Zhang, F., Lai, M., et al. (2017). Large scale tissue histopathology image classification, segmentation, and visualization via deep convolutional activation features. BMC Bioinformatics, 18(1), 281.CrossRef
Zurück zum Zitat Xu, J., Luo, X., Wang, G., Gilmore, H., & Madabhushi, A. (2016). A deep convolutional neural network for segmenting and classifying epithelial and stromal regions in histopathological images. Neurocomputing, 191, 214–223.CrossRef Xu, J., Luo, X., Wang, G., Gilmore, H., & Madabhushi, A. (2016). A deep convolutional neural network for segmenting and classifying epithelial and stromal regions in histopathological images. Neurocomputing, 191, 214–223.CrossRef
Zurück zum Zitat Xu, Y., Zhu, J. Y., Eric, I., Chang, C., Lai, M., & Tu, Z. (2014). Weakly supervised histopathology cancer image segmentation and classification. Medical Image Analysis, 18(3), 591–604.CrossRef Xu, Y., Zhu, J. Y., Eric, I., Chang, C., Lai, M., & Tu, Z. (2014). Weakly supervised histopathology cancer image segmentation and classification. Medical Image Analysis, 18(3), 591–604.CrossRef
Zurück zum Zitat Yang, Y., & Newsam, S. (2010). Bag-of-visual-words and spatial extensions for land-use classification. In Proceedings of the 18th SIGSPATIAL international conference on advances in geographic information systems, ACM, pp. 270–279. Yang, Y., & Newsam, S. (2010). Bag-of-visual-words and spatial extensions for land-use classification. In Proceedings of the 18th SIGSPATIAL international conference on advances in geographic information systems, ACM, pp. 270–279.
Zurück zum Zitat Yao, X., Han, J., Cheng, G., Qian, X., & Guo, L. (2016). Semantic annotation of high-resolution satellite images via weakly supervised learning. IEEE Transactions on Geoscience and Remote Sensing, 54(6), 3660–3671.CrossRef Yao, X., Han, J., Cheng, G., Qian, X., & Guo, L. (2016). Semantic annotation of high-resolution satellite images via weakly supervised learning. IEEE Transactions on Geoscience and Remote Sensing, 54(6), 3660–3671.CrossRef
Zurück zum Zitat Ye, L., Liu, Z., & Wang, Y. (2018). Learning semantic segmentation with diverse supervision. In 2018 IEEE winter conference on applications of computer vision (WACV), IEEE, pp. 1461–1469. Ye, L., Liu, Z., & Wang, Y. (2018). Learning semantic segmentation with diverse supervision. In 2018 IEEE winter conference on applications of computer vision (WACV), IEEE, pp. 1461–1469.
Zurück zum Zitat Yu, F., Xian, W., Chen, Y., Liu, F., Liao, M., Madhavan, V., & Darrell, T. (2018). Bdd100k: A diverse driving video database with scalable annotation tooling. arXiv preprint arXiv:180504687. Yu, F., Xian, W., Chen, Y., Liu, F., Liao, M., Madhavan, V., & Darrell, T. (2018). Bdd100k: A diverse driving video database with scalable annotation tooling. arXiv preprint arXiv:​180504687.
Zurück zum Zitat Yuan, Y., Chen, X., & Wang, J. (2019). Object-contextual representations for semantic segmentation. arXiv preprint arXiv:190911065. Yuan, Y., Chen, X., & Wang, J. (2019). Object-contextual representations for semantic segmentation. arXiv preprint arXiv:​190911065.
Zurück zum Zitat Zhang, C., Li , H., Wang, X., & Yang, X. (2015a). Cross-scene crowd counting via deep convolutional neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 833–841. Zhang, C., Li , H., Wang, X., & Yang, X. (2015a). Cross-scene crowd counting via deep convolutional neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 833–841.
Zurück zum Zitat Zhang, C., Wei, S., Ji, S., & Lu, M. (2019). Detecting large-scale urban land cover changes from very high resolution remote sensing images using cnn-based classification. ISPRS International Journal of Geo-Information, 8(4), 189.CrossRef Zhang, C., Wei, S., Ji, S., & Lu, M. (2019). Detecting large-scale urban land cover changes from very high resolution remote sensing images using cnn-based classification. ISPRS International Journal of Geo-Information, 8(4), 189.CrossRef
Zurück zum Zitat Zhang, X., Su, H., Yang, L., & Zhang, S. (2015b). Fine-grained histopathological image analysis via robust segmentation and large-scale retrieval. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5361–5368. Zhang, X., Su, H., Yang, L., & Zhang, S. (2015b). Fine-grained histopathological image analysis via robust segmentation and large-scale retrieval. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5361–5368.
Zurück zum Zitat Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., & Torralba, A. (2016). Learning deep features for discriminative localization. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2921–2929. Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., & Torralba, A. (2016). Learning deep features for discriminative localization. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2921–2929.
Zurück zum Zitat Zhou, B., Zhao, H., Puig, X., Fidler, S., Barriuso, A., & Torralba, A. (2017). Scene parsing through ade20k dataset. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 633–641. Zhou, B., Zhao, H., Puig, X., Fidler, S., Barriuso, A., & Torralba, A. (2017). Scene parsing through ade20k dataset. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 633–641.
Zurück zum Zitat Zhou, Y., Zhu, Y., Ye, Q., Qiu, Q., & Jiao, J. (2018). Weakly supervised instance segmentation using class peak response. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3791–3800. Zhou, Y., Zhu, Y., Ye, Q., Qiu, Q., & Jiao, J. (2018). Weakly supervised instance segmentation using class peak response. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3791–3800.
Metadata
Title: A Comprehensive Analysis of Weakly-Supervised Semantic Segmentation in Different Image Domains
Authors: Lyndon Chan, Mahdi S. Hosseini, Konstantinos N. Plataniotis
Publication date: 14.09.2020
Publisher: Springer US
Published in: International Journal of Computer Vision, Issue 2/2021
Print ISSN: 0920-5691
Electronic ISSN: 1573-1405
DOI: https://doi.org/10.1007/s11263-020-01373-4