Published in: International Journal of Computer Vision 10/2023

11.06.2023 | Manuscript

A2B: Anchor to Barycentric Coordinate for Robust Correspondence

Authors: Weiyue Zhao, Hao Lu, Zhiguo Cao, Xin Li

Abstract

Repeated patterns are a long-standing challenge in correspondence problems: their inherent ambiguity frequently causes mismatches. Because repeated patterns differ in position even when they share appearance, coordinate representations are a useful complement to appearance representations for improving feature correspondence. However, the question of which coordinate representation is appropriate has remained unresolved. In this study, we demonstrate that geometrically invariant coordinate representations, such as barycentric coordinates, can significantly reduce mismatches between features. We first establish a theoretical foundation for geometrically invariant coordinates. We then present a seed matching and filtering network (SMFNet) that combines feature matching and consistency filtering with a coarse-to-fine matching strategy to acquire reliable sparse correspondences. Building on these seeds, we introduce Degree, a novel anchor-to-barycentric (A2B) coordinate encoding approach that generates multiple affine-invariant correspondence coordinates from paired images. Degree can be used as a plug-in with standard descriptors, feature matchers, and consistency filters to improve matching quality. Extensive experiments on synthesized indoor and outdoor datasets demonstrate that Degree alleviates the problem of repeated patterns and helps achieve state-of-the-art performance. Degree also delivers competitive performance in the third Image Matching Challenge at CVPR 2021. This approach offers a new perspective on the repeated-pattern problem and underscores the importance of choosing appropriate coordinate representations for feature correspondence.
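The abstract's central geometric claim is that barycentric coordinates are invariant under affine transformations, which is what makes them robust supplements to appearance features. The following minimal sketch (not the paper's implementation; the helper name and example values are illustrative) verifies this property numerically: a point's barycentric coordinates with respect to three anchor points are unchanged after both the point and the anchors are warped by the same affine map.

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of 2-D point p w.r.t. triangle (a, b, c).

    Solves [b-a, c-a] @ (l1, l2) = p - a, then l0 = 1 - l1 - l2,
    so that p = l0*a + l1*b + l2*c and l0 + l1 + l2 = 1.
    """
    T = np.column_stack([b - a, c - a])
    l1, l2 = np.linalg.solve(T, p - a)
    return np.array([1.0 - l1 - l2, l1, l2])

# Anchor triangle and a query point (illustrative values).
a, b, c = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([0.0, 3.0])
p = np.array([1.0, 1.0])

# An arbitrary invertible affine map x -> A @ x + t.
A = np.array([[2.0, 1.0], [0.5, 3.0]])
t = np.array([5.0, -2.0])
warp = lambda x: A @ x + t

bc_before = barycentric(p, a, b, c)
bc_after = barycentric(warp(p), warp(a), warp(b), warp(c))

print(np.allclose(bc_before, bc_after))  # True: coordinates survive the warp
```

Intuitively, an affine map preserves ratios of areas, and barycentric coordinates are exactly normalized signed-area ratios, so encoding keypoints relative to matched anchor points yields coordinates that agree across the two views whenever the local geometry is (approximately) affine.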


Metadata
Title
A2B: Anchor to Barycentric Coordinate for Robust Correspondence
Authors
Weiyue Zhao
Hao Lu
Zhiguo Cao
Xin Li
Publication date
11.06.2023
Publisher
Springer US
Published in
International Journal of Computer Vision / Issue 10/2023
Print ISSN: 0920-5691
Electronic ISSN: 1573-1405
DOI
https://doi.org/10.1007/s11263-023-01827-5
