2016 | Original Paper | Book Chapter

Convolutional Oriented Boundaries

Authors: Kevis-Kokitsi Maninis, Jordi Pont-Tuset, Pablo Arbeláez, Luc Van Gool

Published in: Computer Vision – ECCV 2016

Publisher: Springer International Publishing


Abstract

We present Convolutional Oriented Boundaries (COB), which produces multiscale oriented contours and region hierarchies starting from generic image classification Convolutional Neural Networks (CNNs). COB is computationally efficient, because it requires a single CNN forward pass for contour detection and it uses a novel sparse boundary representation for hierarchical segmentation; it gives a significant leap in performance over the state-of-the-art, and it generalizes very well to unseen categories and datasets. Particularly, we show that learning to estimate not only contour strength but also orientation provides more accurate results. We perform extensive experiments on BSDS, PASCAL Context, PASCAL Segmentation, and MS-COCO, showing that COB provides state-of-the-art contours, region hierarchies, and object proposals in all datasets.
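
The approach described above can be pictured as a standard classification CNN whose intermediate (side) activations are reused to predict, in a single forward pass, both a contour-strength map and a per-pixel boundary orientation. The sketch below illustrates that idea in PyTorch; it is not the authors' implementation (the paper builds on Caffe), and the VGG-16 backbone split, the channel counts, and the choice of 8 orientation bins are assumptions made here purely for illustration.

# Minimal illustrative sketch of a single-pass contour + orientation head on top
# of a classification CNN. Not the authors' code; backbone split, channel counts,
# and the number of orientation bins (8) are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class ContourAndOrientationHead(nn.Module):
    def __init__(self, num_orientations=8):
        super().__init__()
        backbone = torchvision.models.vgg16(weights=None).features
        # Split VGG-16 into stages so multi-scale side activations can be tapped.
        self.stages = nn.ModuleList([
            backbone[:4], backbone[4:9], backbone[9:16],
            backbone[16:23], backbone[23:30],
        ])
        side_channels = [64, 128, 256, 512, 512]
        # 1x1 convolutions map each stage to a contour map and an orientation map.
        self.side_contour = nn.ModuleList(nn.Conv2d(c, 1, 1) for c in side_channels)
        self.side_orient = nn.ModuleList(nn.Conv2d(c, num_orientations, 1) for c in side_channels)
        # Fuse the upsampled side outputs with a final 1x1 convolution each.
        self.fuse_contour = nn.Conv2d(len(side_channels), 1, 1)
        self.fuse_orient = nn.Conv2d(len(side_channels) * num_orientations, num_orientations, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        contours, orients = [], []
        feat = x
        for stage, c_head, o_head in zip(self.stages, self.side_contour, self.side_orient):
            feat = stage(feat)
            # Upsample every side output to the input resolution before fusion.
            contours.append(F.interpolate(c_head(feat), size=(h, w), mode='bilinear', align_corners=False))
            orients.append(F.interpolate(o_head(feat), size=(h, w), mode='bilinear', align_corners=False))
        contour = torch.sigmoid(self.fuse_contour(torch.cat(contours, dim=1)))
        orientation = self.fuse_orient(torch.cat(orients, dim=1))  # logits over orientation bins
        return contour, orientation

# Usage: one forward pass yields a contour-strength map and per-pixel orientation logits.
model = ContourAndOrientationHead()
image = torch.randn(1, 3, 224, 224)
strength, orientation_logits = model(image)

In the paper, such multiscale contour and orientation estimates feed a sparse-boundary hierarchical segmentation stage; the sketch only covers the CNN part.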


Metadata
Title
Convolutional Oriented Boundaries
Authors
Kevis-Kokitsi Maninis
Jordi Pont-Tuset
Pablo Arbeláez
Luc Van Gool
Copyright Year
2016
DOI
https://doi.org/10.1007/978-3-319-46448-0_35
