Published in: Journal of Visualization 4/2023

Open Access 06.02.2023 | Regular Paper

Trench visualisation from a semiautonomous excavator with a base grid map using a TOF 2D profilometer

Authors: Ilpo Niskanen, Matti Immonen, Tomi Makkonen, Lauri Hallman, Martti Mikkonen, Pekka Keränen, Juha Kostamovaara, Rauno Heikkilä



Abstract

Real-time, three-dimensional (3D) visualisation technology can be used at construction sites to improve the quality of work. A 3D view of the landscape under work can be compared to a target 3D model of the landscape to conveniently show the needed excavation tasks to a human excavator operator or to show the progress of an autonomous excavator. The purpose of this study was to demonstrate surface visualisation from measurements taken with a pulsed time-of-flight (TOF) 2D profilometer mounted on a semiautonomous excavator. The semiautonomous excavator was implemented by recording feedback script parameters from work performed on the excavator by a human driver. 3D visualisation maps based on the triangle mesh technique were generated from the 3D point clouds measured on the trenches dug by the human driver and by the semiautonomous excavator. The accuracy of the 3D maps was evaluated by comparing them to measurements from a high-resolution commercial 3D scanner. An analysis of the results shows that the 2D profilometer attached to the excavator can achieve almost the same 3D results as a high-quality, static, commercial on-site 3D scanner, whilst more easily providing an unobstructed view of the trench during operation (a 3D scanner placed next to a deep trench might not have a full view of the trench). The main technical advantages of our 2D profilometer are its compact size, measurement speed, lack of moving parts, robustness, and low cost, which together enable visualisation from a unique viewpoint on the boom of the excavator and allow real-time control of the excavator's system. This research is expected to improve the efficiency of the digging process in the future and to provide a distinctive view of trench work that uses the excavator as a moving platform to facilitate data visualisation.

Graphical abstract

Notes

Supplementary Information

The online version contains supplementary material available at https://doi.org/10.1007/s12650-023-00908-4.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 Introduction

Excavators are widely used throughout the world at earthmoving sites and are expected to continue to play an important part in the construction sector in the future, whether in hydraulic or electric form (Zhang et al. 2017). Earthmoving is a multibillion-dollar industry that includes construction, mining, and, potentially, space exploration. Increased automation levels are needed for multiple reasons: some digging operations occur in places too dangerous for humans, such as deep-sea mines, or are extremely time consuming. Increasing automation levels will require the integration of a large number of diverse sensors, sophisticated algorithms, and powerful data-integration software. Presently, 3D technology is applied widely across industry, including robotic guidance, product profiling (Jamroz et al. 2006), and component wear evaluation (Valigi et al. 2021). This has been made possible by low-cost, reliable optoelectronic technology and mass-production techniques, including semiconductor lasers, miniature optics, precision electronics, and signal-processing technologies (Gühring 2000). However, despite the huge investments made in sensors and data collection, the amount of information and knowledge actually generated from the resulting data repositories remains minimal (France-Mensah et al. 2017). The quality of that information also depends on how it is displayed at a construction site. Information visualisation is one of the most important steps towards improving understanding and informed decision-making.
A current research trend in automatic excavators is to add 3D measurement techniques that facilitate intelligent and independent operation. A further challenge is that the construction workforce in Europe is expected to decline in the coming years, so it can be difficult to find skilled drivers for all machines. This is particularly problematic because the quality of the smooth surface produced with an excavator relies on a high level of skill and precision. To increase ground-levelling work efficiency at construction sites, new visualisation technologies and interactive machine-learning processes are being developed (Jiang et al. 2019). Using a visualisation tool, an excavator driver may more easily create smooth, flat finished surfaces and reduce ex-post verification, although the final finishing accuracy remains largely dependent on the skill of the driver.
The relevant literature provides many visualisation techniques for assessing the surface map of soil. These methods are based on the analysis of light reflection or scattering. Recent developments in the field include RGB (red, green, and blue) cameras (Kim et al. 2018), stereo cameras (Pan and Kim 2016; Matsuura and Fujisawa 2008), light detection and ranging (LIDAR; Heide, Emter and Petereit 2018), and laser scanners (Stentz et al. 1999; Oh et al. 2018). A stereo camera estimates depth information from an image pair using disparity estimation, whilst a LIDAR sensor measures depth directly from the environment (John et al. 2017). Camera and stereo-camera techniques provide a lot of information about objects at high speed and low cost but can be sensitive to excessively low or high illumination; they are unreliable in bad lighting conditions (Fremont et al. 2016). The LIDAR 3D scanner has become popular for long-range object scanning and depth measurement (Fang et al. 2018). Most current devices, however, are based on moving scanning mirrors, which makes them difficult to integrate directly into work machines. Moreover, real-time visualisation from the work machines in use on construction sites at different project phases is not yet typical.
Our previous work studied the measurement of a soil surface profile using an excavator (Niskanen et al. 2020). The experimental results were encouraging and demonstrated the profilometer's good ground-surface visualisation performance. We then investigated a combination of commercial LIDAR and profilometer scanning to generate a 4D model of a large-scale construction site using an excavator. Our fusion approach combined the advantages of the profilometer, which provides precise position estimates of near objects, and LIDAR, which offers broad field-of-view position estimates at far distances (Immonen et al. 2021). The profilometer was further used from an excavator to estimate the volume of soil stockpiles and the thickness of road layers based on the acquired 4D point cloud information (Niskanen et al. 2022). This method facilitated the documentation and visualisation of soil-layer quality and soil stockpile volumes. In addition, intensity information can be used to roughly identify sand and gravel materials and to recognise stockpiles by reflective surface markings. Furthermore, the 2D profilometer has been used to capture 4D point cloud information of vehicles moving at urban speeds (Niskanen et al. 2021). That model facilitates real-time road traffic-flow monitoring and, combined with traffic forecasting models, helps prevent congestion by optimising vehicle speeds. The goal of the present investigation is to develop 3D visualisation techniques that minimise the use of excessive and unnecessary road materials, which can reduce transportation and material costs, as well as fuel consumption and emissions.
The purpose of this research is to study the difference in work quality between a trench dug by a semiautonomous excavator and a trench dug by a human driver, using a commercial 3D scanner and a 2D profilometer integrated into the excavator. This method produces a visualisation map of a construction site at lower cost and with greater speed than traditional techniques, such as Unmanned Aerial Vehicle (UAV)-based 3D scanning or photography. The implementation of an integrated system is also simpler than that of traditional solutions. The quality of the 3D map was improved using triangle meshes based on the Mesh3D algorithm (Sitnik and Karaszewski 2008). The procedure also utilises an algorithm that reduces the effect of noise in the triangle meshes by removing inconsistent points (Ng and Wong 2007). The 2D profilometer integrated into the excavator can be used for real-time monitoring of digging progress against the designed machine-control model, which also helps to reduce fuel consumption and emissions from material transport. This work creates a foundation for commercial applications of excavator-integrated measuring devices and 3D visualisation maps.

2 Theory

Typically, a static measurement system is located to the side of a work machine. Attaching the measurement system to the boom instead provides data and viewpoints that can be used in real time, for example by offering unique viewpoints for visualisation at construction sites. Our system utilises the natural movement of the excavator boom above the object to produce a 3D image using a solid-state 2D profilometer. The profilometer's timestamped local coordinates and the orientation data of the excavator are requested by a personal computer (PC) on the excavator. The profilometer data are saved in a plain text (.txt) file, and the collected distance, intensity, and timestamp data, together with the excavator pose data, are processed through a transformation matrix in MATLAB. The transformation matrix was implemented using quaternion algebra.
Figure 1 shows the coordinate transformation frames of our profilometer system. The position of vector P with respect to the world coordinate system can be expressed in a standard form as follows:
$$P = \left[ {\begin{array}{*{20}c} x \\ y \\ z \\ \end{array} } \right],$$
(1)
where x, y, and z are the individual components of the vector (Koivu 1989). The transformation matrix consists of a series of rotations (aligning the x, y, and z axes) and translations (moving the origin). The position of the vector can be defined by the following transformation matrix:
$$P = T_{1} \times T_{2} \times P_{m} = \left[ {\begin{array}{*{20}c} {R_{{{\text{ORI}}}} } & {{}^{A}P} \\ {0\;0\;0} & 1 \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {R_{{{\text{Profilometer}}}} } & 0 \\ {0\;0\;0} & 1 \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} 0 \\ 0 \\ {{\text{me}}} \\ 0 \\ \end{array} } \right],$$
(2)
where T1 and T2 are the transformation matrices. Vector Pm is the measurement result of the scanner, and vector AP represents the scanner position in the workspace coordinate system. RORI represents the orientation of the scanner, which is obtained over a network socket from the excavator in quaternion form (q0, q1, q2, q3), where q0 is the quaternion w component, q1 the x component, q2 the y component, and q3 the z component. The transformation matrix (T1) can be expressed as follows:
$$T_{{1}} = \left[ {\begin{array}{*{20}c} {q_{0}^{2} + q_{1}^{2} - q_{2}^{2} - q_{3}^{2} } & {2\left( {q_{1} q_{2} - q_{0} q_{3} } \right)} & {2\left( {q_{1} q_{3} + q_{0} q_{2} } \right)} & {p_{x} } \\ {2\left( {q_{1} q_{2} + q_{0} q_{3} } \right)} & {q_{0}^{2} - q_{1}^{2} + q_{2}^{2} - q_{3}^{2} } & {2\left( {q_{2} q_{3} - q_{0} q_{1} } \right)} & {p_{y} } \\ {2\left( {q_{1} q_{3} - q_{0} q_{2} } \right)} & {2\left( {q_{2} q_{3} + q_{0} q_{1} } \right)} & {q_{0}^{2} - q_{1}^{2} - q_{2}^{2} + q_{3}^{2} } & {p_{z} } \\ 0 & 0 & 0 & 1 \\ \end{array} } \right],$$
(3)
where px, py, and pz are the components of the scanner position AP in the workspace coordinate system of the excavator (Andrew 2006). The transformation matrix (T2) can be expressed as follows:
$$T_{2} \left( {\varphi_{{{\text{offset}}}} ,\rho } \right) = \left[ {\begin{array}{*{20}c} { - \cos \left( \rho \right)\sin \left( {\varphi_{{{\text{offset}}}} } \right)} & {\sin \left( \rho \right)} & {\cos \left( \rho \right)\cos \left( {\varphi_{{{\text{offset}}}} } \right)} & 0 \\ {\sin \left( \rho \right)\sin \left( {\varphi_{{{\text{offset}}}} } \right)} & {\cos \left( \rho \right)} & { - \sin \left( \rho \right)\cos \left( {\varphi_{{{\text{offset}}}} } \right)} & 0 \\ { - \cos \left( {\varphi_{{{\text{offset}}}} } \right)} & 0 & { - \sin \left( {\varphi_{{{\text{offset}}}} } \right)} & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} } \right],$$
(4)
where \(\varphi_{\text{offset}}\) is the angle between the scanner and the excavator's boom. More information on the calculation routine can be found in Niskanen et al. (2020).
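For illustration, the following MATLAB sketch builds T1 and T2 exactly as in Eqs. (3) and (4) and applies them to a measurement vector as in Eq. (2); the quaternion, position, angles, and measured distance are placeholder values rather than values from the experiment.

```matlab
% Minimal sketch of Eqs. (2)-(4) with placeholder inputs.
q   = [1 0 0 0];          % orientation quaternion [q0 q1 q2 q3] from the excavator
p   = [1.5; 0.8; 2.0];    % scanner position (px, py, pz) in the workspace frame (m)
phi = deg2rad(30);        % phi_offset: angle between the scanner and the boom
rho = deg2rad(5);         % scan angle of the current measurement direction
d   = 3.2;                % measured distance ("me") along the beam (m)

q0 = q(1); q1 = q(2); q2 = q(3); q3 = q(4);
T1 = [q0^2+q1^2-q2^2-q3^2, 2*(q1*q2-q0*q3),     2*(q1*q3+q0*q2),     p(1);
      2*(q1*q2+q0*q3),     q0^2-q1^2+q2^2-q3^2, 2*(q2*q3-q0*q1),     p(2);
      2*(q1*q3-q0*q2),     2*(q2*q3+q0*q1),     q0^2-q1^2-q2^2+q3^2, p(3);
      0,                   0,                   0,                   1];

T2 = [-cos(rho)*sin(phi),  sin(rho),  cos(rho)*cos(phi), 0;
       sin(rho)*sin(phi),  cos(rho), -sin(rho)*cos(phi), 0;
      -cos(phi),           0,        -sin(phi),          0;
       0,                  0,         0,                 1];

Pm = [0; 0; d; 0];        % measurement vector as written in Eq. (2);
                          % a homogeneous point would use 1 as the last element
P  = T1 * T2 * Pm;        % measurement expressed in the workspace coordinate system
```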

3 Materials and methods

3.1 2D pulsed time-of-flight (TOF) profilometer

The solid-state, line-profiling 2D laser radar (Keränen and Kostamovaara 2019a, b) used in this work performed pulsed laser TOF distance measurements at a rate of 20–30 frames/s in 256 directions simultaneously within a field of view of ~ 38 degrees. The device illuminated a strip-like area in front of the excavator with a pulsed laser diode beam spread with simple spherical optics. The back-and-forth flight times of the laser pulse in 256 individual directions were measured with an array of Complementary Metal–Oxide–Semiconductor [CMOS] single-photon avalanche diodes and time-to-digital converters. The distance in each direction was then calculated based on the known speed of light.
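As a simple numerical illustration of this principle (with placeholder values, not recorded data), each one-way distance is half the measured round-trip time multiplied by the speed of light:

```matlab
% Minimal sketch: round-trip flight times to distances for one 256-point frame.
c   = 299792458;               % speed of light (m/s)
tof = 40e-9 * rand(256, 1);    % hypothetical round-trip times (s), roughly 0-6 m of range
d   = c .* tof / 2;            % one-way distance in each of the 256 directions (m)
```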
The maximum measurement range was 5–10 m with a frame rate of 30 frames/s in outdoor conditions (Keränen and Kostamovaara 2019b), and the distance measurement accuracy and precision were better than 1 cm, which are comparatively good values for a device with no moving parts operating in sunlight or darkness. Figure 2 shows a photograph of the 2D pulsed TOF profilometer. The device is explained in more detail in Keränen and Kostamovaara (2019b).

3.2 Commercial 3D laser scanner

A Z + F IMAGER 5016 laser scanner was used as the reference device for 3D object capture, as shown in Fig. 3. The scanner uses the phase-shift technique for distance measurement. The measuring range was 0.3–365 m, with a 360° × 320° field of view. At distances of less than 10 m, an accuracy of better than 1 mm can be achieved. The maximum scan speed is up to 1.1 million points per second, which results in densely recorded point clouds of an object. The laser beam diameter was 3.5 mm, and the beam divergence was 0.3 mrad. The scanner captures colour through the use of a full HDR panorama 80 MPixel camera (Zoller + Fröhlich GmbH 2020).

3.3 An excavator with a control system

This study used a Bobcat E85 commercial excavator (8.5 tonnes), which was modified for automation purposes. Figure 4 shows a view of the system used for this test. The boom, arm, and bucket were controlled via Novatron's control system. The hydraulic machine valve system was retrofitted for precise electrical control. The excavator was outfitted with electrohydraulic controls, a suite of sensors, and an onboard computer. The basis of the sensor system was Novatron's newly developed Inertial Measurement Unit (IMU) G2 sensors, which can operate at frequencies up to 200 Hz. These sensors provide accurate and precise excavator positioning that is sufficient for automatic movements. Additionally, Novatron's G2-type sensors have a Controller Area Network (CAN) bus interface. The use of the CAN bus decreases the need for wiring, and all sensors can be set on the same bus. A laser profilometer in a plastic case was attached to the boom of the excavator, and all information was collected using the on-board computer. The profilometer location and orientation information were recorded for the control system of the excavator by a total station (Leica TS12; Niskanen et al. 2020).

4 Experiments and results

Field tests were carried out on an earthmoving site at Haukipudas's vocational college unit. First, the excavator driver dug a trench into the sandy soil. The trench was about 50 cm deep, 150 cm wide, and 200 cm long, as shown in Fig. 5a. The semiautonomous excavator was implemented by recording the feedback script parameters during the work performed by the excavator's human driver. The recorded commands were then downloaded to the excavator computer, which permitted the excavator to operate independently. The resulting trench is presented in Fig. 5b.
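As a rough illustration of this record-and-replay principle (not the authors' actual control code), the MATLAB sketch below logs timestamped joint setpoints during manual operation and later feeds them back to the machine controller; readJointAngles and sendJointSetpoints are hypothetical interface functions.

```matlab
% Minimal record-and-replay sketch; readJointAngles() and
% sendJointSetpoints() are hypothetical excavator-interface functions.
fs = 200;  T = 26;  n = fs * T;           % assumed sampling rate (Hz) and duration (s)
cmdLog = zeros(n, 4);                     % [time, boom, arm, bucket]

for k = 1:n                               % recording phase (human driver operating)
    cmdLog(k, :) = [(k - 1) / fs, readJointAngles()];
    pause(1 / fs);
end

for k = 1:n                               % replay phase (semiautonomous operation)
    sendJointSetpoints(cmdLog(k, 2:4));
    pause(1 / fs);
end
```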
The trench comparison visualisation was carried out using a Z + F IMAGER 5016 commercial laser scanner; its accuracy was 1 mm for distances of less than 10 m. The measurement took approximately 280 s, and the number of points in the 3D cloud was approximately 620,000. The measured 3D point clouds were imported into Trimble RealWorks software. Unnecessary points (noise) were eliminated from the 3D point clouds before reconstructing the surface triangular-mesh representations. In the final step, a triangle mesh was generated from the remaining points by applying the Mesh3D algorithm (Sitnik and Karaszewski 2008). The Mesh3D algorithm improves the smoothness and continuity of the mesh, which allows more realistic and visually better reconstructions of the soil surface. The results of the comparison of the trenches are shown in Fig. 6. In Fig. 6a, relatively small deformation differences are detectable. The images show larger deviations at the edges of the trench, where no perfect match was achieved. Finally, the accuracy of the human driver and the semiautonomous excavator was evaluated by taking a cross section of the trenches. The cross sections of the human driver's and the semiautonomous excavator's trenches are plotted in Fig. 6b, which demonstrates the similarity of both trenches. The maximum difference between the cross sections (7 cm) was observed at the edge of the trench. Elsewhere, the differences between the cross sections were about 3 cm. Measurement differences between the trenches may be caused by many factors, including the surface material and the poor visibility of the soil. Poorly reflective surfaces, such as dark soil, are difficult to detect due to their high light absorption.
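A cross-section comparison of this kind can be sketched as follows in MATLAB, assuming two trench point clouds cloudA and cloudB (N x 3 matrices in metres) rather than the actual Trimble RealWorks workflow:

```matlab
% Minimal sketch: comparing one cross-section of two trench point clouds.
y0 = 1.0;  tol = 0.02;                         % slice position and half-width (m)
A  = cloudA(abs(cloudA(:,2) - y0) < tol, :);   % thin slab from the first cloud
B  = cloudB(abs(cloudB(:,2) - y0) < tol, :);   % thin slab from the second cloud

x = linspace(0, 1.5, 200);                     % common grid across the trench width
[xa, ia] = unique(A(:,1));                     % interp1 needs unique sample points
[xb, ib] = unique(B(:,1));
zA = interp1(xa, A(ia, 3), x, 'linear', NaN);
zB = interp1(xb, B(ib, 3), x, 'linear', NaN);

plot(x, zA, x, zB);                            % overlay the two cross-sections
maxDiff = max(abs(zA - zB), [], 'omitnan');    % largest vertical deviation (m)
```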
The profilometer was fixed to the excavator boom in a plastic case. The configuration settings of the profilometer and the calculation were implemented in C++. In this case, the profilometer was configured to send data at a frequency of 25 frames/s over an angle range of ±20°. The profilometer provided the distance value (hypotenuse) for each of the 256 laser points obtained from a scan. By combining these values with the angular data of the profilometer, the X and Y coordinates of each point were calculated using trigonometric formulas. The Z coordinates were determined using the excavator's IMU sensors and the Global Positioning System (GPS). The excavator (Z coordinate) and profilometer (X and Y coordinates) data were combined into a 3D point cloud based on timestamps using MATLAB software.
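The per-frame conversion described above can be sketched as follows; d (a 256 x 1 vector of measured hypotenuses), tProf, tPose, and zPose are assumed variable names, not the authors' code:

```matlab
% Minimal sketch of one 256-point frame; all variable names are assumptions.
theta = deg2rad(linspace(-20, 20, 256))';  % beam angle of each point in the +/-20 deg fan
x = d .* sin(theta);                       % lateral coordinate within the profile (m)
y = d .* cos(theta);                       % range coordinate within the profile (m)

[~, idx] = min(abs(tPose - tProf));        % nearest excavator pose sample by timestamp
z = zPose(idx) * ones(256, 1);             % height contribution from the IMU/GPS data
slice = [x, y, z];                         % one 256 x 3 slice of the 3D point cloud
```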
The experiment was executed by moving the arm automatically from −145° to −100° above the trench. The number of points in the 3D cloud was approximately 167,000 for a total measurement time of 26 s. MATLAB/Simulink software was used to gather all sensor data from the excavator boom system and to create the automatic control system of the excavator. The 3D point cloud was transformed into a common coordinate system using Eqs. (2)–(4) for the convenience of comparing the results. The experimental results are presented in Fig. 7.
Figure 7a presents the 3D point cloud of the trench computed by the semiautonomous excavator's measurement system. Unnecessary points (noise) were eliminated from the 3D point cloud before reconstructing the surface triangular-mesh representation. The good quality of the triangle meshes led to better soil surface visualisation. Finally, a triangle mesh was created from the 3D point cloud using the Mesh3D algorithm (see Fig. 7b).
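The Mesh3D algorithm itself is detailed in Sitnik and Karaszewski (2008); as a rough stand-in, the triangulation step can be sketched in MATLAB with a plain 2D Delaunay triangulation over the x–y plane and a crude outlier filter, where pts is an assumed N x 3 point-cloud matrix:

```matlab
% Minimal sketch (a stand-in for Mesh3D): triangulating the trench cloud
% after removing obvious outliers; pts is an assumed N x 3 matrix [x y z].
zc   = pts(:, 3);
keep = abs(zc - median(zc)) < 3 * std(zc);   % crude height-based outlier rejection
p    = pts(keep, :);

tri = delaunay(p(:, 1), p(:, 2));            % 2.5D triangulation over the x-y plane
trisurf(tri, p(:, 1), p(:, 2), p(:, 3), 'EdgeColor', 'none');   % render the mesh
axis equal; colorbar;
```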
The 3D point cloud of the profilometer was compared with the result of the IMAGER 5016 commercial laser scanner. As demonstrated in Fig. 8, the 3D scanner produced a higher-resolution image than the profilometer (Fig. 7b). Although the 3D laser scanner is technically robust, it is too large and expensive to be integrated directly into the excavator for regular use in the excavator's current application.
The resemblance of the two measurement techniques was assessed by overlaying the trench models in Trimble RealWorks. The results of the comparison are shown in Fig. 9, which reveals relatively small detectable deformation differences. The images show larger deviations at the edges of the trench, where no perfect match was achieved. The model produced by the 3D scanner was of higher quality than the model produced by the 2D profilometer. The excavator GPS and IMU sensors, along with the small vibrations of the boom, are important factors that affect the profilometer's 3D imaging quality. Furthermore, large stones or hummocks in the soil prevent the profilometer from measuring the area behind them. Although the accuracy of the profilometer method is not quite optimal, it has been shown to save time and labour on construction sites.
Additionally, these results indicate that the automation of earthmoving work is technically achievable in easily dug soil. The main characteristics of the two techniques are listed in Table 1.
Table 1  Main characteristics of the 2D profilometer and 3D scanner techniques (*Keränen and Kostamovaara 2019b)

Characteristic                            3D scanner                2D profilometer*
Device characteristics
  Cost                                    High                      Low
  Size                                    Medium                    Small
  Moving parts                            Yes                       No
  Accuracy                                1 mm (distance < 10 m)    2 mm
  Frame rate                              50 Hz                     28 Hz
  Measurement range                       0.3–365 m                 1–35 m
  Field of view                           360° × 320°               37° × 0.3°
Measurement characteristics (this work)
  Number of points                        620,000                   167,000
  Measurement time                        280 s                     26 s
  Image quality                           Excellent                 Good

5 Conclusion

Our objective was to develop a visualisation method that can monitor digging progress at a construction site in real time against the machine-control model, using a profilometer integrated into an excavator. The results of this study show that a 3D point cloud from the 2D profilometer, combined with triangle-mesh analysis, provides accurate and computationally efficient visualisation results for monitoring the finished surface quality achieved by a semiautonomous excavator. The visualisation technique enables the identification of excessive and unnecessary road materials at a construction site, which might lead to greater efficiency, easier production of smooth and flat finished surfaces, and less need for post-inspections. The obtained 3D triangular surface network results provide new guidelines and directions for future research in the field of information visualisation for construction sites and allow for greater excavator autonomy.

Acknowledgements

We gratefully acknowledge funding from Business Finland (38056/31/2020).
Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


References
Andrew H (2006) Visualizing quaternions. Elsevier, San Francisco
Heide N, Emter T, Petereit J (2018) Calibration of multiple 3D LiDAR sensors to a common vehicle frame. In: 50th international symposium on robotics, München, pp 1–8
Jamroz WR, Kruzelecky R, Haddad EI (2006) Applied microphotonics. CRC Press Taylor & Francis Group, Boca Raton
Koivu AJ (1989) Fundamentals for control of robotic manipulators. John Wiley & Sons, New York
Niskanen I, Immonen M, Hallman L, Mikkonen M, Hokkanen V, Hashimoto T, Kostamovaara JT, Heikkilä R (2022) Using a 2D-profilometer to determine volume and thickness of stockpiles and ground layers of roads. J Transp Eng Part B Pavements. https://doi.org/10.1061/JPEODX
Sitnik R, Karaszewski M (2008) Optimized point cloud triangulation for 3D scanning systems. Mach Graph Vis 17:349–371
Metadata
Title: Trench visualisation from a semiautonomous excavator with a base grid map using a TOF 2D profilometer
Authors: Ilpo Niskanen, Matti Immonen, Tomi Makkonen, Lauri Hallman, Martti Mikkonen, Pekka Keränen, Juha Kostamovaara, Rauno Heikkilä
Publication date: 06.02.2023
Publisher: Springer Berlin Heidelberg
Published in: Journal of Visualization, Issue 4/2023
Print ISSN: 1343-8875
Electronic ISSN: 1875-8975
DOI: https://doi.org/10.1007/s12650-023-00908-4
