
11.11.2023

Learning Robust Multi-scale Representation for Neural Radiance Fields from Unposed Images

Authors: Nishant Jain, Suryansh Kumar, Luc Van Gool

Published in: International Journal of Computer Vision | Issue 4/2024


Abstract

We introduce an improved solution to the neural image-based rendering problem in computer vision. Given a set of images taken from a freely moving camera at train time, the proposed approach can synthesize a realistic image of the scene from a novel viewpoint at test time. The key ideas presented in this paper are: (i) recovering accurate camera parameters via a robust pipeline from unposed day-to-day images is equally crucial for the neural novel-view synthesis problem; (ii) it is more practical to model an object's content at different resolutions, since dramatic camera motion is highly likely with day-to-day unposed images. To incorporate these ideas, we leverage the fundamentals of scene rigidity, multi-scale neural scene representation, and single-image depth prediction. Concretely, the proposed approach makes the camera parameters learnable within a neural fields-based modeling framework. Assuming per-view depth prediction is given up to scale, we constrain the relative pose between successive frames. From the relative poses, absolute camera pose estimation is modeled via graph-neural-network-based multiple motion averaging within the multi-scale neural-fields network, leading to a single loss function. Optimizing the introduced loss function provides the camera intrinsics, extrinsics, and image rendering from unposed images. We demonstrate, with examples, that for a unified framework to accurately model a multi-scale neural scene representation from day-to-day acquired unposed multi-view images, it is equally essential to have precise camera-pose estimates within the scene representation framework. Without robustness measures in the camera pose estimation pipeline, modeling for multi-scale aliasing artifacts can be counterproductive. We present extensive experiments on several benchmark datasets to demonstrate the suitability of our approach.
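To make the pose-learning idea concrete, below is a minimal PyTorch sketch, not the authors' implementation: it only shows per-image camera extrinsics treated as ordinary learnable parameters and refined jointly with a radiance-field renderer through a photometric loss. The nerf(origins, directions) renderer interface and all helper names are assumptions introduced for illustration; the paper's multi-scale representation, depth-based relative-pose constraint, and graph-neural-network motion averaging are not reproduced here.

    # Minimal PyTorch sketch (not the authors' implementation): per-image camera
    # extrinsics are ordinary learnable parameters, so the photometric rendering
    # loss is back-propagated into both the radiance-field model and the poses.
    import torch
    import torch.nn as nn

    def axis_angle_to_matrix(w: torch.Tensor) -> torch.Tensor:
        """Rodrigues' formula: map an axis-angle vector w in R^3 to a rotation matrix."""
        theta = torch.sqrt((w * w).sum() + 1e-12)  # eps keeps the gradient finite at w = 0
        k = w / theta
        zero = torch.zeros((), device=w.device)
        K = torch.stack([
            torch.stack([zero, -k[2], k[1]]),
            torch.stack([k[2], zero, -k[0]]),
            torch.stack([-k[1], k[0], zero]),
        ])
        eye = torch.eye(3, device=w.device)
        return eye + torch.sin(theta) * K + (1.0 - torch.cos(theta)) * (K @ K)

    class LearnablePoses(nn.Module):
        """One learnable world-from-camera pose (axis-angle rotation + translation) per image."""

        def __init__(self, num_images: int):
            super().__init__()
            self.w = nn.Parameter(torch.zeros(num_images, 3))  # rotation parameters
            self.t = nn.Parameter(torch.zeros(num_images, 3))  # camera centers

        def forward(self, idx: int):
            return axis_angle_to_matrix(self.w[idx]), self.t[idx]

    def train_step(nerf, poses, optimizer, rays_cam, target_rgb, idx):
        """One gradient step on a batch of rays from image `idx`.

        `nerf(origins, directions) -> rgb` is an assumed renderer interface;
        `optimizer` holds both nerf.parameters() and poses.parameters().
        """
        R, t = poses(idx)
        rays_o = t.expand_as(rays_cam)   # all rays start at the camera center
        rays_d = rays_cam @ R.T          # rotate camera-frame directions into the world frame
        loss = ((nerf(rays_o, rays_d) - target_rgb) ** 2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

Parameterizing rotations with axis-angle vectors keeps the learnable parameters unconstrained while the mapped matrices stay on SO(3); in practice such sketches initialize the pose parameters near identity (or from a coarse estimate) and use separate learning rates for the poses and the scene network.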


Footnotes
1. For more details and derivations, refer to Barron et al. (2021).
2. Given the recent progress in single-image depth prediction (SIDP) networks, this is quite a reasonable assumption (a minimal scale-alignment sketch follows these footnotes).
3. CC-BY-3.0 license.
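Footnote 2 relies on single-image depth being reliable only up to an unknown per-image scale. As a minimal illustration, using hypothetical names, the sketch below rescales a predicted depth map by median alignment against sparse reference depths (e.g., triangulated from two-view correspondences) before it is used to constrain the relative pose between successive frames; median alignment is a common heuristic and is not claimed to be the paper's exact procedure.

    # Minimal sketch (PyTorch): resolve the unknown per-image scale of a monocular
    # depth prediction by matching its median to sparse reference depths.
    import torch

    def align_depth_scale(pred_depth: torch.Tensor,
                          ref_depth: torch.Tensor,
                          ref_mask: torch.Tensor) -> torch.Tensor:
        """Rescale a scale-ambiguous depth map so its median matches the reference.

        pred_depth: (H, W) single-image depth prediction, valid only up to scale.
        ref_depth:  (H, W) sparse reference depths (arbitrary values where unknown).
        ref_mask:   (H, W) boolean mask marking pixels that have a reference value.
        """
        scale = torch.median(ref_depth[ref_mask]) / torch.median(pred_depth[ref_mask])
        return scale * pred_depth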
 
References
Aftab, K., Hartley, R., & Trumpf, J. (2014). Generalized Weiszfeld algorithms for Lq optimization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(4), 728–745.
Agarwal, S., Furukawa, Y., Snavely, N., Simon, I., Curless, B., Seitz, S. M., & Szeliski, R. (2011). Building Rome in a day. Communications of the ACM, 54(10), 105–112.
Amanatides, J. (1984). Ray tracing with cones. ACM SIGGRAPH Computer Graphics, 18(3), 129–135.
Barron, J. T., Mildenhall, B., Tancik, M., Hedman, P., Martin-Brualla, R., & Srinivasan, P. P. (2021). Mip-NeRF: A multiscale representation for anti-aliasing neural radiance fields. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 5855–5864.
Bian, W., Wang, Z., Li, K., Bian, J. W., & Prisacariu, V. A. (2022). NoPe-NeRF: Optimising neural radiance field with no pose prior. arXiv preprint arXiv:2212.07388.
Chatterjee, A., & Govindu, V. M. (2017). Robust relative rotation averaging. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4), 958–972.
Chen, A., Xu, Z., Zhao, F., Zhang, X., Xiang, F., Yu, J., & Su, H. (2021). MVSNeRF: Fast generalizable radiance field reconstruction from multi-view stereo. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 14124–14133.
Dai, A., Chang, A. X., Savva, M., Halber, M., Funkhouser, T., & Niessner, M. (2017). ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In Proceedings of the computer vision and pattern recognition (CVPR). IEEE.
Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O., & Dahl, G. E. (2017). Neural message passing for quantum chemistry. In International conference on machine learning, pp. 1263–1272. PMLR.
Govindu, V. M. (2001). Combining two-view constraints for motion estimation. In CVPR (Vol. 2). IEEE.
Govindu, V. M. (2006). Robustness in motion averaging. In Asian conference on computer vision, pp. 457–466. Springer.
Govindu, V. M. (2016). Motion averaging in 3D reconstruction problems. In Riemannian computing in computer vision, pp. 145–164. Springer.
Hartley, R., Aftab, K., & Trumpf, J. (2011). L1 rotation averaging using the Weiszfeld algorithm. In CVPR 2011, pp. 3041–3048. IEEE.
Hartley, R. I. (1997). In defense of the eight-point algorithm. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(6), 580–593.
Hartley, R., Trumpf, J., Dai, Y., & Li, H. (2013). Rotation averaging. International Journal of Computer Vision, 103(3), 267–305.
Hartley, R., & Zisserman, A. (2003). Multiple view geometry in computer vision. Cambridge University Press.
Jain, N., Kumar, S., & Van Gool, L. (2022). Robustifying the multi-scale representation of neural radiance fields. In 33rd British machine vision conference (BMVC 2022), London, UK, November 21–24, 2022. BMVA Press.
Jampani, V., Maninis, K. K., Engelhardt, A., Truong, K., Karpur, A., Sargent, K., Popov, S., Araujo, A., Martin-Brualla, R., Patel, K., Vlasic, D., Ferrari, V., Makadia, A., Liu, C., Li, Y., & Zhou, H. (2023). NAVI: Category-agnostic image collections with high-quality 3D shape and pose annotations. arXiv preprint.
Jeong, Y., Ahn, S., Choy, C., Anandkumar, A., Cho, M., & Park, J. (2021). Self-calibrating neural radiance fields. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 5846–5854.
Kaya, B., Kumar, S., Sarno, F., Ferrari, V., & Van Gool, L. (2022). Neural radiance fields approach to deep multi-view photometric stereo. In Proceedings of the IEEE/CVF winter conference on applications of computer vision, pp. 1965–1977.
Knapitsch, A., Park, J., Zhou, Q. Y., & Koltun, V. (2017). Tanks and temples: Benchmarking large-scale scene reconstruction. ACM Transactions on Graphics, 36(4).
Lee, S., Chen, L., Wang, J., Liniger, A., Kumar, S., & Yu, F. (2022). Uncertainty guided policy for active robotic 3D reconstruction using neural radiance fields. IEEE Robotics and Automation Letters, 7(4), 12070–12077.
Li, X., & Ling, H. (2021). PoGO-Net: Pose graph optimization with graph neural networks. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 5895–5905.
Lin, C. H., Ma, W. C., Torralba, A., & Lucey, S. (2021). BARF: Bundle-adjusting neural radiance fields. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 5741–5751.
Liu, L., Gu, J., Zaw Lin, K., Chua, T. S., & Theobalt, C. (2020). Neural sparse voxel fields. Advances in Neural Information Processing Systems, 33, 15651–15663.
Martel, J. N., Lindell, D. B., Lin, C. Z., Chan, E. R., Monteiro, M., & Wetzstein, G. (2021). ACORN: Adaptive coordinate networks for neural scene representation. ACM Transactions on Graphics (TOG), 40(4), 1–13.
Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., & Ng, R. (2021). NeRF: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1), 99–106.
Nistér, D. (2004). An efficient solution to the five-point relative pose problem. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(6), 756–770.
Purkait, P., Chin, T. J., & Reid, I. (2020). NeuRoRA: Neural robust rotation averaging. In European conference on computer vision, pp. 137–154. Springer.
Rahaman, N., Baratin, A., Arpit, D., Draxler, F., Lin, M., Hamprecht, F., Bengio, Y., & Courville, A. (2019). On the spectral bias of neural networks. In International conference on machine learning, pp. 5301–5310. PMLR.
Ranftl, R., Bochkovskiy, A., & Koltun, V. (2021). Vision transformers for dense prediction. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 12179–12188.
Schönberger, J. L., & Frahm, J. M. (2016). Structure-from-motion revisited. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4104–4113.
Schönberger, J. L., Zheng, E., Pollefeys, M., & Frahm, J. M. (2016). Pixelwise view selection for unstructured multi-view stereo. In European conference on computer vision (ECCV).
Sucar, E., Liu, S., Ortiz, J., & Davison, A. J. (2021). iMAP: Implicit mapping and positioning in real-time. In Proceedings of the IEEE/CVF international conference on computer vision, pp. 6229–6238.
Tewari, A., et al. (2022). Advances in neural rendering. In Computer graphics forum, Vol. 41, pp. 703–735. Wiley Online Library.
Triggs, B., McLauchlan, P. F., Hartley, R. I., & Fitzgibbon, A. W. (2000). Bundle adjustment—a modern synthesis. In Proceedings of the international workshop on vision algorithms: Theory and practice (ICCV'99), pp. 298–372. Springer, London, UK.
Wang, Q., Wang, Z., Genova, K., Srinivasan, P. P., Zhou, H., Barron, J. T., Martin-Brualla, R., Snavely, N., & Funkhouser, T. (2021a). IBRNet: Learning multi-view image-based rendering. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4690–4699.
Wang, Z., Wu, S., Xie, W., Chen, M., & Prisacariu, V. A. (2021b). NeRF--: Neural radiance fields without known camera parameters. arXiv preprint arXiv:2102.07064.
Xu, Q., Xu, Z., Philip, J., Bi, S., Shu, Z., Sunkavalli, K., & Neumann, U. (2022). Point-NeRF: Point-based neural radiance fields. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 5438–5448.
Yang, L., Li, H., Rahim, J. A., Cui, Z., & Tan, P. (2021). End-to-end rotation averaging with multi-source propagation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR), pp. 11774–11783.
Yao, Y., Luo, Z., Li, S., Fang, T., & Quan, L. (2018). MVSNet: Depth inference for unstructured multi-view stereo. In Proceedings of the European conference on computer vision (ECCV), pp. 767–783.
Yen-Chen, L., Florence, P., Barron, J. T., Lin, T. Y., Rodriguez, A., & Isola, P. (2022a). NeRF-Supervision: Learning dense object descriptors from neural radiance fields. In 2022 international conference on robotics and automation (ICRA), pp. 6496–6503. IEEE.
Yen-Chen, L., Florence, P., Barron, J. T., Lin, T. Y., Rodriguez, A., & Isola, P. (2022b). NeRF-Supervision: Learning dense object descriptors from neural radiance fields. In IEEE conference on robotics and automation (ICRA).
Yen-Chen, L., Florence, P., Barron, J. T., Rodriguez, A., Isola, P., & Lin, T. Y. (2021). iNeRF: Inverting neural radiance fields for pose estimation. In 2021 IEEE/RSJ international conference on intelligent robots and systems (IROS), pp. 1323–1330. IEEE.
Yu, A., Ye, V., Tancik, M., & Kanazawa, A. (2021). pixelNeRF: Neural radiance fields from one or few images. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 4578–4587.
Zhang, X., Srinivasan, P. P., Deng, B., Debevec, P., Freeman, W. T., & Barron, J. T. (2021). NeRFactor: Neural factorization of shape and reflectance under an unknown illumination. ACM Transactions on Graphics (TOG), 40(6), 1–18.
Metadata
Title
Learning Robust Multi-scale Representation for Neural Radiance Fields from Unposed Images
Authors
Nishant Jain
Suryansh Kumar
Luc Van Gool
Publication date
11.11.2023
Publisher
Springer US
Published in
International Journal of Computer Vision / Issue 4/2024
Print ISSN: 0920-5691
Electronic ISSN: 1573-1405
DOI
https://doi.org/10.1007/s11263-023-01936-1
