Published in: International Journal of Computer Vision 4/2024

25.10.2023

FlowNAS: Neural Architecture Search for Optical Flow Estimation

Authors: Zhiwei Lin, Tingting Liang, Taihong Xiao, Yongtao Wang, Ming-Hsuan Yang


Abstract

Recent optical flow estimators usually employ deep models designed for image classification as encoders for feature extraction and matching. However, encoders developed for image classification may be sub-optimal for flow estimation. In contrast, the decoders of optical flow estimators are often meticulously designed for the task. This disconnect between encoder and decoder can negatively affect optical flow estimation. To address this issue, we propose a neural architecture search method, FlowNAS, to automatically find a more suitable and stronger encoder architecture for existing flow decoders. We first design a suitable search space, including various convolutional operators, and construct a weight-sharing super-network for efficiently evaluating candidate architectures. To better train the super-network, we present a Feature Alignment Distillation module that utilizes a well-trained flow estimator to guide the training of the super-network. Finally, a resource-constrained evolutionary algorithm is exploited to determine an optimal architecture (i.e., sub-network). Experimental results show that FlowNAS can be easily incorporated into existing flow estimators and achieves state-of-the-art performance with a favorable trade-off between accuracy and efficiency. Furthermore, the encoder architecture discovered by FlowNAS, with weights inherited from the super-network, achieves a 4.67% F1-all error on KITTI, an 8.4% reduction over the RAFT baseline, surpassing the state-of-the-art handcrafted GMA and AGFlow models while reducing model complexity and latency. The source code and trained models will be released at https://github.com/VDIGPKU/FlowNAS.
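The search pipeline described in the abstract — uniformly sampling sub-network paths from a weight-sharing super-network, then running a resource-constrained evolutionary search over them — can be sketched in miniature as follows. This is an illustrative toy, not the paper's implementation: the operator names, per-operator costs, the budget, and the `fitness` proxy are all hypothetical placeholders, whereas the real FlowNAS trains a super-network on flow data and scores sub-networks by flow error.

```python
import random

# Hypothetical per-operator (cost, quality) pairs, e.g. relative FLOPs and a
# quality proxy. These numbers are illustrative only, not from the paper.
OPS = {
    "conv3x3": (3.0, 0.9),
    "conv5x5": (5.0, 1.0),
    "dwconv3x3": (1.0, 0.7),
    "skip": (0.1, 0.2),
}
NUM_LAYERS = 6
BUDGET = 14.0  # resource constraint on total cost


def sample_arch():
    """Uniformly sample one sub-network path, one operator per layer."""
    return [random.choice(list(OPS)) for _ in range(NUM_LAYERS)]


def cost(arch):
    return sum(OPS[op][0] for op in arch)


def fitness(arch):
    """Proxy for validation quality of the sub-network with inherited
    super-network weights; a real system would measure flow error (e.g. EPE)."""
    return sum(OPS[op][1] for op in arch)


def mutate(arch, p=0.2):
    """Randomly re-draw each layer's operator with probability p."""
    return [random.choice(list(OPS)) if random.random() < p else op for op in arch]


def evolve(generations=30, pop_size=20, seed=0):
    random.seed(seed)
    # Initialise the population with architectures that satisfy the budget.
    pop = []
    while len(pop) < pop_size:
        a = sample_arch()
        if cost(a) <= BUDGET:
            pop.append(a)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            c = mutate(random.choice(parents))
            if cost(c) <= BUDGET:  # reject candidates over the resource budget
                children.append(c)
        pop = parents + children
    return max(pop, key=fitness)


best = evolve()
```

Because every candidate inherits weights from the shared super-network, evaluating `fitness` in the real system requires no retraining, which is what makes searching thousands of sub-networks tractable.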


Metadata
Title
FlowNAS: Neural Architecture Search for Optical Flow Estimation
Authors
Zhiwei Lin
Tingting Liang
Taihong Xiao
Yongtao Wang
Ming-Hsuan Yang
Publication date
25.10.2023
Publisher
Springer US
Published in
International Journal of Computer Vision / Issue 4/2024
Print ISSN: 0920-5691
Electronic ISSN: 1573-1405
DOI
https://doi.org/10.1007/s11263-023-01920-9
