
11-06-2023 | Manuscript

A2B: Anchor to Barycentric Coordinate for Robust Correspondence

Authors: Weiyue Zhao, Hao Lu, Zhiguo Cao, Xin Li

Published in: International Journal of Computer Vision | Issue 10/2023

Abstract

Repeated patterns are a long-standing problem in correspondence tasks: their inherent ambiguity frequently leads to mismatches. Because repeated patterns still carry unique position information, coordinate representations are a useful supplement to appearance representations for improving feature correspondences. However, the question of which coordinate representation is appropriate has remained unresolved. In this study, we demonstrate that geometrically invariant coordinate representations, such as barycentric coordinates, can significantly reduce mismatches between features. We first establish a theoretical foundation for geometrically invariant coordinates. We then present a seed matching and filtering network (SMFNet) that combines feature matching and consistency filtering with a coarse-to-fine matching strategy to acquire reliable sparse correspondences. On top of these, we introduce Degree, a novel anchor-to-barycentric (A2B) coordinate encoding approach that generates multiple affine-invariant correspondence coordinates from paired images. Degree can be used as a plug-in with standard descriptors, feature matchers, and consistency filters to improve matching quality. Extensive experiments on synthesized indoor and outdoor datasets demonstrate that Degree alleviates the problem of repeated patterns and helps achieve state-of-the-art performance. Degree also achieves competitive performance in the third Image Matching Challenge at CVPR 2021. This approach offers a new perspective on the problem of repeated patterns and underscores the importance of choosing appropriate coordinate representations for feature correspondence.
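The geometric fact the A2B encoding builds on is that barycentric coordinates are affine-invariant: if a keypoint and its anchor points are mapped by the same affine transform, the keypoint's barycentric coordinates with respect to those anchors do not change. The following minimal Python sketch (illustrative only, not the authors' implementation; the triangle-based helper and all variable names are assumptions) verifies this property numerically with NumPy.

    import numpy as np

    def barycentric(p, a1, a2, a3):
        # Barycentric coordinates (w1, w2, w3) of 2-D point p w.r.t. anchors a1, a2, a3,
        # i.e. weights with p = w1*a1 + w2*a2 + w3*a3 and w1 + w2 + w3 = 1.
        T = np.column_stack([a2 - a1, a3 - a1])   # 2x2 edge basis of the anchor triangle
        w2, w3 = np.linalg.solve(T, p - a1)       # coordinates along the two edges
        return np.array([1.0 - w2 - w3, w2, w3])

    # A fixed invertible affine map x -> A @ x + t.
    A = np.array([[1.5, 0.3], [-0.2, 0.8]])
    t = np.array([2.0, -1.0])
    affine = lambda x: A @ x + t

    rng = np.random.default_rng(0)
    a1, a2, a3, p = rng.standard_normal((4, 2))   # random anchors and a query point

    w_before = barycentric(p, a1, a2, a3)
    w_after = barycentric(affine(p), affine(a1), affine(a2), affine(a3))
    assert np.allclose(w_before, w_after)         # coordinates are unchanged by the affine map

Appearance descriptors of repeated patterns, by contrast, remain ambiguous under the same transform, which is why such invariant coordinates are a useful complement to them.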


Metadata
Title
A2B: Anchor to Barycentric Coordinate for Robust Correspondence
Authors
Weiyue Zhao
Hao Lu
Zhiguo Cao
Xin Li
Publication date
11-06-2023
Publisher
Springer US
Published in
International Journal of Computer Vision / Issue 10/2023
Print ISSN: 0920-5691
Electronic ISSN: 1573-1405
DOI
https://doi.org/10.1007/s11263-023-01827-5
