
2019 | Original Paper | Book Chapter

MoA-Net: Self-supervised Motion Segmentation

Authors: Pia Bideau, Rakesh R. Menon, Erik Learned-Miller

Published in: Computer Vision – ECCV 2018 Workshops

Publisher: Springer International Publishing


Abstract

Most recent approaches to motion segmentation use optical flow to segment an image into the static environment and independently moving objects. Neural-network-based approaches usually require large amounts of labeled training data to achieve state-of-the-art performance. In this work, we propose a new approach to train a motion segmentation network in a self-supervised manner. Inspired by visual ecology, the human visual system, and prior approaches to motion modeling, we break the problem of motion segmentation down into two smaller subproblems: (1) modifying the flow field to remove the observer's rotation and (2) segmenting the rotation-compensated flow into the static environment and independently moving objects. Compensating for rotation leads to essential simplifications that allow us to describe an independently moving object with just a few criteria, which can be learned by our new motion segmentation network, the Motion Angle Network (MoA-Net). We compare our network with two other motion segmentation networks and show state-of-the-art performance on Sintel.
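
To make the two-step decomposition concrete: if the camera rotation (wx, wy, wz) and focal length f were known, the rotational flow component could be computed per pixel and subtracted, because it does not depend on scene depth. The sketch below is a minimal numpy illustration of such a compensation step under the standard small-angle motion-field model; it is not the chapter's implementation, and the function names and the principal-point assumption are ours.

import numpy as np

def rotational_flow(height, width, f, omega):
    # Depth-independent flow induced by a camera rotation omega = (wx, wy, wz),
    # following the classic instantaneous motion-field model
    # (small-angle approximation; sign conventions vary between texts).
    wx, wy, wz = omega
    ys, xs = np.mgrid[0:height, 0:width].astype(float)
    x = xs - width / 2.0   # image coordinates relative to the principal point,
    y = ys - height / 2.0  # here assumed to lie at the image centre
    u_rot = (x * y / f) * wx - (f + x**2 / f) * wy + y * wz
    v_rot = (f + y**2 / f) * wx - (x * y / f) * wy - x * wz
    return np.stack([u_rot, v_rot], axis=-1)

def compensate_rotation(flow, f, omega):
    # flow: (H, W, 2) optical flow. Returns the flow with the rotational
    # component removed, leaving (approximately) only the translational flow
    # of the static scene plus the motion of independently moving objects.
    h, w, _ = flow.shape
    return flow - rotational_flow(h, w, f, omega)

After this compensation, the flow of the static environment points away from a single focus of expansion determined by the camera translation, which is what makes the segmentation criteria simple enough for the network to learn.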


Footnotes
1
This equation only holds if the rotation angles are small. However, camera rotation is always independent of scene depth, regardless of its magnitude.
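
For context, the footnote refers to the standard instantaneous motion-field model (Longuet-Higgins and Prazdny), in which the flow at pixel (x, y), for a camera with translation T, rotation omega, and focal length f, splits into a depth-dependent translational part and a depth-independent rotational part. A common form of this decomposition (sign conventions vary) is:

\begin{aligned}
u(x,y) &= \frac{-f\,T_x + x\,T_z}{Z(x,y)} \;+\; \frac{xy}{f}\,\omega_x - \Bigl(f + \frac{x^2}{f}\Bigr)\omega_y + y\,\omega_z \\
v(x,y) &= \frac{-f\,T_y + y\,T_z}{Z(x,y)} \;+\; \Bigl(f + \frac{y^2}{f}\Bigr)\omega_x - \frac{xy}{f}\,\omega_y - x\,\omega_z
\end{aligned}

Only the translational terms involve the depth Z(x, y); the rotational terms come from linearizing the rotation (hence the small-angle caveat) but contain no depth, which is why they can be subtracted without knowing the scene structure.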
 
Metadata
Title
MoA-Net: Self-supervised Motion Segmentation
Authors
Pia Bideau
Rakesh R. Menon
Erik Learned-Miller
Copyright Year
2019
DOI
https://doi.org/10.1007/978-3-030-11024-6_55
