An efficient method for fully automatic 3D digitization of unknown objects
Introduction
3D models of objects are used in a growing number of applications, including industry, entertainment, preservation of cultural heritage artifacts, and architecture. In industrial settings, objects are digitized for inspection, reverse engineering, and replication, tasks that demand accurate, high-quality 3D models. Manual 3D digitization is expensive because it requires a highly trained technician to decide which views are needed to acquire the object model; the quality of the final result therefore depends not only on the complexity of the object shape but also on the selected viewpoints, and thus on human expertise.
Most digitization strategies currently deployed in industry follow a teaching approach, in which a human operator manually determines a set of poses for the ranging device. The main drawback of this methodology is its dependence on the operator's expertise; moreover, it does not meet the industrial requirement for reliable, repeatable, and fast programming routines. It is therefore necessary to develop an efficient automatic digitization strategy that minimizes the impact of the human factor. In contrast to manual and teaching-based scanning, automatic scanning selects the views most beneficial to the reconstruction, aiming for the highest possible accuracy and coverage with the smallest number of views. Automatic scanning can be decomposed into two phases: determining the successive poses (position and orientation) of the sensor relative to the object, and generating the sensor trajectory from the current pose to the next optimal one. The automation problem thus becomes a view planning problem, in which one seeks the Next Best Views (NBVs) that improve the existing model reconstruction and eventually cover the entire object surface.
Non-model-based NBV planning can be seen as an incremental approach that builds object or scene models using all previously acquired 3D data.
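To make the incremental idea concrete, the following is a minimal, self-contained sketch (our illustration, not the paper's algorithm): each candidate view "sees" a set of surface patches, the next best view is greedily chosen as the one revealing the most unseen patches, and scanning stops once no candidate adds new surface.

```python
# Toy non-model-based NBV loop: greedy selection driven only by data
# acquired so far. The visibility table and patch ids are hypothetical.

def nbv_loop(visibility, start):
    """visibility: dict mapping a view name to the set of patch ids it sees."""
    seen = set(visibility[start])   # surface observed so far
    order = [start]                 # sequence of selected views
    while len(order) < len(visibility):
        # Score each remaining view by the new surface it would reveal.
        gain, view = max(
            (len(visibility[v] - seen), v)
            for v in visibility if v not in order
        )
        if gain == 0:               # coverage saturated: stop scanning
            break
        seen |= visibility[view]
        order.append(view)
    return order, seen

views = {"front": {1, 2, 3}, "left": {3, 4}, "back": {4, 5, 6}, "top": {2, 3}}
order, seen = nbv_loop(views, "front")
print(order)          # ['front', 'back']
print(sorted(seen))   # [1, 2, 3, 4, 5, 6]
```

In a real system the visibility sets are not known in advance; they are replaced by an estimate of the expected information gain of each reachable sensor pose, which is what distinguishes the various NBV criteria in the literature.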
The goal of this work is to automatically generate a complete 3D model of unknown, complex objects through an information-driven approach. We introduce a novel NBV strategy based on the evolution of the orientation of the scanned surface. Our method enables fast and complete 3D reconstruction while moving the scanner efficiently, and by generating a set of candidate views it avoids unreachable sensor configurations. This paper is organized as follows. In the next section we discuss related work and briefly position the approach adopted for our view planning strategy. Section 3 introduces our NBV planning method, followed by experimental results in Section 4. We conclude and outline work in progress in Section 5.
Related work
The challenge of automatic viewpoint selection has been widely studied in the literature. Tarabanis et al. [1], Scott et al. [2] and, more recently, Chen et al. [3] provide complete surveys of the view planning problem. Depending on the nature of the reasoning, the methods fall into two main approaches: volumetric methods and surface-based methods.
Proposed method
From this overview we conclude that no existing system is able to autonomously digitize a whole unknown object; in the best cases, the most advanced solutions are only partially automated. Our aim is therefore to define a new strategy that selects viewpoints automatically while decreasing the acquisition time and the number of sensor repositionings. Most sensor planning methods rely on a visibility approach to define the positions from which the object surface points to be
Setup
The algorithm has been implemented on a robotic cell (see Fig. 7) composed of a fringe projection scanner, a CometV manufactured by Steinbichler Optotechnik GmbH, mounted on a KR16 robotic arm from KUKA Roboter. The system is equipped with a turntable that provides one additional degree of freedom. In our experiments, the scanner field of view is set to 400 mm × 400 mm and the working distance to 850 mm.
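The quoted figures imply a simple admissibility test for candidate poses: a surface point is only measurable if it lies inside the scanner's measurement window. The sketch below illustrates this with the 400 mm × 400 mm field of view and 850 mm working distance stated above; the ±100 mm depth tolerance is an assumed value, not a vendor specification.

```python
# Back-of-the-envelope measurability check (our illustration, not vendor code).

WORKING_DIST = 850.0   # mm, scanner to object along the optical axis
FOV = 400.0            # mm, square field of view at the working distance
DEPTH_TOL = 100.0      # mm, assumed usable depth range around focus

def in_measurement_volume(x, y, z):
    """Point (x, y, z) in the scanner frame, z along the optical axis (mm)."""
    return (abs(x) <= FOV / 2 and abs(y) <= FOV / 2
            and abs(z - WORKING_DIST) <= DEPTH_TOL)

print(in_measurement_volume(150.0, -120.0, 870.0))   # True: inside the window
print(in_measurement_volume(250.0, 0.0, 850.0))      # False: outside laterally
```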
Results
Experiments were conducted with a large set of objects with different geometries to prove the
Conclusions
We have presented a new digitization strategy. First, the acquired data are analyzed and classified into Well Visible and Barely Visible areas by combining an angle-based visibility criterion with a ray tracing method. Second, the Barely Visible patches are clustered to identify candidate views. The method is characterized by its simplicity and efficiency and yields very good results in terms of surface coverage and acquisition time. Furthermore, it takes into account the physical
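The first of the two steps summarized above can be sketched as follows. This is an illustration under our own assumptions: the 60° threshold and the data layout are hypothetical, and a plain angle test stands in for the paper's combined angle/ray-tracing criterion.

```python
# Classify measured points as Well Visible or Barely Visible from the angle
# between the surface normal and the viewing direction (both unit vectors).
import math

ANGLE_LIMIT = math.radians(60)   # assumed visibility threshold, not the paper's

def classify(points):
    """points: list of (position, normal, view_dir) tuples."""
    well, barely = [], []
    for pos, n, v in points:
        cos_a = sum(a * b for a, b in zip(n, v))
        angle = math.acos(max(-1.0, min(1.0, cos_a)))  # clamp for safety
        # A large angle means the surface is seen at grazing incidence,
        # hence poorly observed from this viewpoint.
        (well if angle <= ANGLE_LIMIT else barely).append(pos)
    return well, barely

points = [
    ((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0)),  # faces the scanner
    ((1.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)),  # grazing surface
]
well, barely = classify(points)
print(well)     # [(0.0, 0.0, 0.0)]
print(barely)   # [(1.0, 0.0, 0.0)]
```

In the second step, the Barely Visible points would be grouped (the paper cites the global k-means algorithm) and each cluster would propose one candidate viewpoint oriented toward its mean surface normal.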
Acknowledgements
This work was done within the framework of the LE2I laboratory and financially supported by the Regional Council of Burgundy. The authors would like to thank Mr. Mickaël Provost, head of the company Vecteo (www.vecteo.com), for his technical support and effective collaboration.
Souhaiel Khalfaoui received his MSc degree in robotics and intelligent systems from Pierre et Marie Curie University, Paris 6, France, in 2009 and PhD degree in computer vision from the University of Burgundy, Dijon, France, in November 2012. He is currently the research manager of Vecteo SAS company. His research interests are in the fields of machine vision, 3D reconstruction and view planning.
References (31)
- Likas et al., The global k-means clustering algorithm, Pattern Recognition (2003)
- Tarabanis et al., A survey of sensor planning in computer vision, IEEE Transactions on Robotics and Automation (1995)
- Scott et al., View planning for automated three-dimensional object reconstruction and inspection, ACM Computing Surveys (2003)
- Chen et al., Active Sensor Planning for Multiview Vision Tasks (2008)
- Automatic sensor placement
- M.K. Reed, Solid model acquisition from range imagery, Ph.D. thesis, Columbia University, ...
- A best next view selection algorithm incorporating a quality criterion
- Incorporation of a-priori information in planning the next best view
- The determination of next best views
- A "best-next-view" algorithm for three-dimensional scene reconstruction using range images
- A next-best-view system for autonomous 3-D object reconstruction, IEEE Transactions on Systems, Man, and Cybernetics: Part A
- Digital Image Processing
- Occlusions as a guide for planning the next view, IEEE Transactions on Pattern Analysis and Machine Intelligence
- Planning the next view using the max–min principle
- An adaptive hierarchical next-best-view algorithm for 3D reconstruction of indoor scenes
Cited by (43)
- Autonomous view planning methods for 3D scanning (2024, Automation in Construction)
- Drone-based Volume Estimation in Indoor Environments (2023, IFAC-PapersOnLine)
- 3D scanning method for robotized inspection of industrial sealed parts (2023, Computers in Industry)
- What does it look like? An artificial neural network model to predict the physical dense 3D appearance of a large-scale object (2022, Expert Systems with Applications). Citation excerpt: "As for the application software incorporated, it depends on the hardware vendors provided. One of the early works (Khalfaoui et al., 2013) that developed the reconstruction system in the past decade suggests the application of range sensor. This enables the surface geometry generation from the point clouds collected can form the 3D shape of an object."
- A trajectory planning method for robot scanning system using mask R-CNN for scanning objects with unknown model (2020, Neurocomputing). Citation excerpt: "However, the whole process is also time-consuming and is not fully automated. When the model of the object is unknown, the trajectory planning of the sensor is more challenging. [6] presents a non-model based Next Best Views planning method, which is an incremental approach based on the scanned part 3D data."
- The surface edge explorer (SEE): A measurement-direct approach to next best view planning (2024, International Journal of Robotics Research)
Ralph Seulin received a PhD in computer vision from the University of Burgundy, France, in 2002. He is currently a research engineer at the National Center for Scientific Research (CNRS). His research interests include 3D digitization, view planning and industrial applications.
Yohan Fougerolle received the MSc degree in electrical engineering from the University of Burgundy, Dijon, France, in 2002, and the PhD from the same university in 2005. Since 2007, he has been an assistant professor in the Department of Electrical Engineering at the Technical Institute of Le Creusot, University of Burgundy, France. His research interests include 3D digitization, solid modeling, surface reconstruction, and image processing.
David Fofi is currently a professor at the University of Burgundy, head of the computer vision department of the Le2i UMR CNRS 6306, and coordinator of the Erasmus Mundus Masters in Vision and Robotics (VIBOT). He received an MSc degree in image and signal processing from the University of Cergy-Pontoise/ENSEA in 1997, and a PhD in computer vision from the University of Picardie Jules Verne in 2001. He has held a research fellowship from the SnT (University of Luxembourg) since 2012. His research interests include multiple-view geometry, catadioptric vision, projector-camera systems, and structured light. He has participated in and led several French and European projects in the field of computer vision (Erasmus Mundus, CNRS, ANR, PHC, etc.). Since 1998, he has published more than 20 papers in international peer-reviewed journals, 2 patents, and more than 50 conference papers.