Published in: Earth Science Informatics 2/2024

Open Access 02.02.2024 | RESEARCH

Positioning and detection of rigid pavement cracks using GNSS data and image processing

Authors: Ahmed A. Nasrallah, Mohamed A. Abdelfatah, Mohamed I. E. Attia, Gamal S. El-Fiky



Abstract

Modern pavement management systems depend mainly on pavement condition assessment to plan rehabilitation strategies. Conventionally, trained inspectors assess pavement damage manually, which can be cost-intensive, time-consuming, and a source of risk for the inspectors. An image-based inspection using a smartphone is adopted to overcome these problems. This paper proposes an automatic crack detection and mapping program for rigid pavement that can automate the visual inspection process. The program uses Global Navigation Satellite System (GNSS) data recorded by smartphones and various image processing techniques to detect crack lengths and areas in images. The performance of the program was evaluated in a field study, and a crack quantification process was performed to compare manually measured values with the crack lengths obtained from the program. The results show that the program can also detect other types of distress, such as pop-outs and punch-outs. The method achieves satisfactory performance relative to the effort and cost it requires.
Notes
Communicated by: H. Babaie

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Introduction

Rigid pavements have become increasingly important, as they are widely used for airfield runways, parking lots, and bridge decks (Allujami et al. 2021). They are exposed to severe environmental factors and aggressive usage, which can affect their functional condition and cause significant damage (Allujami et al. 2021; Ai et al. 2023). Pavement deterioration affects riding quality and can cause accidents (Steckenrider 2017). Periodic inspection of the pavement is needed to determine the optimal time for maintenance and rehabilitation treatments, whose primary purpose is to restore the pavement's ideal characteristics and extend its service time (Torres-Machi et al. 2017). Manual inspection methods involve experts who visually trace pavement distress on-site using specific tools (Munawar et al. 2021). This type of inspection can be labor-intensive, time-consuming, and prone to human error (Dorafshan et al. 2018; Munawar et al. 2021; Ai et al. 2023). Automating inspection methods can avoid these defects and enhance inspection speed, reliability, and accuracy. Visual sensing technologies such as digital cameras, laser, thermal, and radiographic tests have been used to automate distress inspection (Rabah et al. 2013; Santos et al. 2019; Ghosh 2022). Image-based inspection methods first capture images with handheld devices such as smartphones (Shi et al. 2016; Kalfarisi et al. 2020), vehicle-mounted cameras (Cubero-Fernandez et al. 2017; Santos et al. 2019), or unmanned aerial vehicles (UAVs) (Ersoz et al. 2017; Avendaño 2020). The images are then processed and reviewed by inspectors to obtain information about damage.
The development of new computer vision techniques has produced inspection methods that use detection algorithms and digital images to study and detect damage in an automated way (Avendaño 2020). These algorithms apply various image processing techniques to cracked images. Iyer and Sinha (2005) used a three-step approach that employs morphological operations to enhance crack visualization with respect to their spatial locations. Edge detection algorithms (Cubero-Fernandez et al. 2017; Dorafshan et al. 2018) and the wavelet transform (Subirats et al. 2006; Ouma and Hahn 2016) have also been used to detect crack boundaries. Abdel-Qader et al. (2003) compared different algorithms for detecting cracks on bridge surfaces, using Sobel, Canny, Fast Fourier transform, and Haar edge detection methods. Threshold-based methods assume that crack pixels are darker and have lower intensities than the surrounding pixels (Ai et al. 2023). Zhang et al. (2021) proposed a method based on a convolutional neural network that detects and extracts fracture pixels from well-logging images.
The aim of the research is to develop a software program that automates the visual inspection process of rigid pavement. The software can detect the cracks that appear on the surface of rigid pavement, determine the global coordinates of cracks, and compute the lengths of cracks using images of the pavement.

Methodology

The proposed software was coded using Python programming language. The proposed image processing algorithm is based on edge detection and morphological operations. The extracted information from image processing and the spatial information recorded by Google Maps service for each image are used to transform the pixel coordinates of cracks to global coordinates in order to compute the lengths and global coordinates of cracks.

Images acquisition

The images were captured using a hand-held camera held exactly horizontal. The vertical distance between the camera and the pavement surface was 1.5 m, so the area covered by each image is 1.91 \(\times\) 1.43 m. A holder was used to ensure the correct height and that the camera was exactly horizontal. The dimensions of each image are 4608 \(\times\) 3456 pixels. The inspector used a compass to define the north direction and faced north while capturing images, so that the axis-rotation effect could be neglected during coordinate transformation. The smartphone used was a Vivo V23e.
Aiding points are points with known global and pixel coordinates. There is one aiding point per image, and it is assumed to be the center of the image. The coordinates are obtained from the Google Maps service as latitude and longitude (φ, λ), where φ represents the latitude and λ the longitude. These coordinates are angular measurements (Maling 2013). In general, the distance between two points on the Earth's surface can only be computed directly in a plane coordinate system. Therefore, the obtained coordinates must be transformed to Universal Transverse Mercator (UTM) coordinates to facilitate the calculation of crack lengths. The proposed methodology uses the QGIS program (https://qgis.org/en/site/) to perform the conversion between the two coordinate systems.
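The study performs this conversion in QGIS. Purely as an illustration of what the conversion does, the standard ellipsoidal transverse Mercator forward series can also be scripted directly; the coordinates in the usage line are illustrative values near Alexandria, not measurements from the study.

```python
import math

def latlon_to_utm(lat_deg, lon_deg):
    """Convert WGS84 latitude/longitude (degrees) to UTM zone, easting, northing (m).

    Standard ellipsoidal transverse Mercator series; accurate to well under a
    metre, which is ample for crack mapping at this scale.
    """
    a = 6378137.0                 # WGS84 semi-major axis
    f = 1 / 298.257223563         # WGS84 flattening
    e2 = f * (2 - f)              # first eccentricity squared
    ep2 = e2 / (1 - e2)           # second eccentricity squared
    k0 = 0.9996                   # UTM scale factor on the central meridian

    zone = int((lon_deg + 180) // 6) + 1
    lon0 = math.radians((zone - 1) * 6 - 180 + 3)   # central meridian of the zone
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)

    N = a / math.sqrt(1 - e2 * math.sin(lat) ** 2)
    T = math.tan(lat) ** 2
    C = ep2 * math.cos(lat) ** 2
    A = (lon - lon0) * math.cos(lat)

    # Meridional arc length from the equator
    M = a * ((1 - e2 / 4 - 3 * e2 ** 2 / 64 - 5 * e2 ** 3 / 256) * lat
             - (3 * e2 / 8 + 3 * e2 ** 2 / 32 + 45 * e2 ** 3 / 1024) * math.sin(2 * lat)
             + (15 * e2 ** 2 / 256 + 45 * e2 ** 3 / 1024) * math.sin(4 * lat)
             - (35 * e2 ** 3 / 3072) * math.sin(6 * lat))

    easting = k0 * N * (A + (1 - T + C) * A ** 3 / 6
                        + (5 - 18 * T + T ** 2 + 72 * C - 58 * ep2) * A ** 5 / 120) + 500000.0
    northing = k0 * (M + N * math.tan(lat) * (A ** 2 / 2
                     + (5 - T + 9 * C + 4 * C ** 2) * A ** 4 / 24
                     + (61 - 58 * T + T ** 2 + 600 * C - 330 * ep2) * A ** 6 / 720))
    return zone, easting, northing

# Illustrative point near Alexandria, Egypt (UTM zone 35N)
print(latlon_to_utm(31.15, 29.80))
```

Once both aiding points and crack corners are in metres, distances follow from plane geometry, which is exactly why the paper performs this transformation before Eq. 14.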

Crack detection process

Edge detectors are algorithms that can be used to detect cracks (Dorafshan et al. 2018). Edges or cracks are defined as sharp intensity transitions, i.e., the locations in the image at which pixel intensities vary sharply (Abdel-Qader et al. 2003; Hoang and Nguyen 2018). Edge detection depends mainly on the convolution technique. The convolution of a filter \(w(s,t)\) of size \(m\) × \(n\) with an image \(f\left(x,y\right)\), such that m = 2a + 1 and n = 2b + 1, is defined by Eq. 1 (Gonzalez and Woods 2007):
$$(w\star f)\left(x,y\right)=\sum_{s=-a}^{a}\sum_{t=-b}^{b}w\left(s,t\right)f(x-s,y-t)$$
(1)
In this research, the proposed image processing algorithm has three steps: (1) edge enhancement, (2) edge detection, and (3) edge localization. Images first go through a pre-processing procedure to enhance edges and remove noise. Noise such as blebs, stains, and non-uniform light distribution during image acquisition can mislead the crack detection process (Rabah et al. 2013). Edge detection is done using a Canny edge detector. The goal of edge localization is to draw bounding boxes around cracks and determine their locations. The proposed algorithm (Fig. 1) uses the OpenCV library of image processing algorithms (https://opencv.org/).
Red/green/blue (RGB) images are represented mathematically by three matrices. The cv2.cvtColor() function is used to convert images into grayscale images. This function uses Eq. 2 to compute the grayscale image's only channel (Y), in which R, G, and B are the values of the red, green, and blue channels, respectively (Hagara et al. 2020). The values of the R, G, and B channels vary from 0 to 255.
$$Y = 0.299 R+0.587 G+0.114 B$$
(2)
Crack pixels have lower intensities, i.e., are darker, than the background (Ai et al. 2023). A minimum filter is used to clarify and distinguish cracks from the background: it replaces each pixel with the minimum value within the filter window (Gonzalez and Woods 2007). This is done by applying the cv2.erode() function to the grayscale image.
Different types of noise can make the crack detection process inaccurate. The main challenge in noise removal is keeping the main features of the image, such as the edges (Hasanzadeh and Daneshvar 2015; Muhammad et al. 2018). A Gaussian blur filter is applied to the images using the cv2.GaussianBlur() function. The Gaussian blur filter is defined by Eq. 3, where \(\sigma\) is the standard deviation.
$$w\left(s,t\right)=\frac{1}{2\pi {\sigma }^{2}}{e}^{\frac{-{(s}^{2}+{t}^{2})}{2{\sigma }^{2}}}$$
(3)
The Canny edge detector is applied to the images after the noise removal process using the function cv2.Canny(); its output is a binary image. The Canny algorithm is a multi-stage process that starts with noise removal using a Gaussian filter. It then finds the intensity gradients in the horizontal (\({{\text{G}}}_{{\text{x}}}\)) and vertical (\({{\text{G}}}_{{\text{y}}}\)) directions by convolving two filters (\({k}_{x}\) and \({k}_{y}\)) with the image along the X and Y directions (Eq. 4) (Abdel-Qader et al. 2003; Bhardwaj and Mittal 2012). Where the intensity gradient is maximal, there is a possible edge. The edge gradient and direction for each pixel can be estimated using Eq. 5 and Eq. 6 (Bhardwaj and Mittal 2012). The gradient direction is always perpendicular to the edges.
$$k_x=\begin{bmatrix}-1&0&1\\-2&0&2\\-1&0&1\end{bmatrix}\quad k_y=\begin{bmatrix}1&2&1\\0&0&0\\-1&-2&-1\end{bmatrix}$$
(4)
$$Edge\;Gradient\;=\sqrt{{G}_{x}^{2}+{G}_{y}^{2}}$$
(5)
$$\theta\;(\mathit{Edge\;Direction})={\tan}^{-1}\left(\frac{{G}_{y}}{{G}_{x}}\right)$$
(6)
Then, edges are thinned by checking whether each pixel is a local maximum in the direction of the gradient, i.e., whether its intensity is greater than those of its neighbors across the gradient direction. If so, it is passed to the next stage. The next stage of the Canny algorithm is hysteresis thresholding, which sets two values for the edge gradient: a high threshold and a low threshold (Medina-Carnicer et al. 2011). If the edge gradient of a pixel is above the high threshold, the pixel is an edge; if it is below the low threshold, the pixel is not considered an edge. If the pixel's edge gradient lies between the two limits and the pixel is connected to a pixel above the high threshold, it is considered an edge; otherwise, it is discarded.
Mathematical morphology is a framework for image processing based on lattice theory and random geometry (Gonzalez and Woods 2007). Mathematical morphology has two main operations: dilation and erosion. Closing is defined as the process of applying dilation first to the image and then applying erosion using the same structure element for both operations (Haralick et al. 1987). A closing process is employed on the detected edges in order to fill micro gaps between cracks. As a result, the main cracks show up as continuous and more detailed cracks (Haralick et al. 1987, Gonzalez and Woods 2007). Applying dilation to a grayscale image that contains the object (A) by using structure element (B) is defined by Eq. 7 (Gonzalez and Woods 2007). This equation is based on reflecting (B) about its origin and translating the reflection by (z).
$$A\oplus B=\left\{Z |(\widehat{B}{)}_{z }\cap A\ne \varnothing \right\}$$
(7)
Applying erosion to a grayscale image that contains object (A) by using structure element (B) is defined by Eq. 8 (Gonzalez and Woods 2007).
$$A\ominus B=\left\{Z |(B{)}_{Z }\subseteq A\right\}$$
(8)
The function cv2.findContours() is applied to the images after the closing process. It retrieves contours from images using the algorithm described in (Suzuki and Abe 1985). A contour can be explained simply as a curve joining all continuous points that share the same color or intensity; here, the contours are the edges detected in the previous stages. Applying this function returns a list of contours, which may still contain noise. To ignore such noise, a minimum arc length is defined for contours using the function cv2.arcLength(); contours shorter than this length are not considered edges. The function cv2.minAreaRect() sets the bounding rectangle of minimum rotated area around each detected crack: it takes a contour and returns the coordinates of the center of the bounding rectangle, its width, its height, and its rotation angle. The function cv2.boxPoints() takes the output of cv2.minAreaRect() and returns the coordinates of the four corners of the bounding rectangle, which are later used to compute the crack length. The (Test image) button on the graphical user interface (GUI) is bound to a user-defined function (Van Rossum and Drake Jr 1995) that runs all the functions forming the proposed image processing algorithm.

Reading coordinates

The proposed program uses the global coordinates of the aiding points and the pixel coordinates of the crack bounding boxes to compute crack lengths. It reads the aiding-point coordinates from an input Excel sheet using the Openpyxl library (Hunt 2019). The positioning data of the aiding points are arranged in the input sheet in the same order as the images that the program will process, which facilitates the coordinate transformation.
The function openpyxl.load_workbook() is used to load the input workbook (WB), and the default worksheet of the workbook is then activated using WB.active. The aiding-point coordinates are read from each cell of the worksheet using the function cell().value, which takes the row and column of each targeted cell. The global coordinates of the aiding points are appended to two empty lists (\({{\text{X}}}_{{\text{g}}}\) and \({{\text{Y}}}_{{\text{g}}}\)) using the function append(). The pixel coordinates of the aiding points are inserted into the lists (\({{\text{X}}}_{{\text{p}}}\) and \({{\text{Y}}}_{{\text{p}}}\)). The corner coordinates of the rectangles that define each crack are likewise inserted into two lists (\({{\text{X}}}_{{\text{pb}}}\) and \({{\text{Y}}}_{{\text{pb}}}\)) to facilitate the coordinate transformation calculation.
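The reading step can be sketched as follows (a minimal sketch assuming the openpyxl package; the in-memory workbook and the two-column layout stand in for the paper's input sheet, whose exact format is not specified):

```python
from openpyxl import Workbook

# Build a small workbook in memory (stands in for the input Excel sheet;
# the column layout here is illustrative, not the paper's exact format).
wb = Workbook()
ws = wb.active
ws.cell(row=1, column=1).value = 785000.123   # aiding point X (UTM easting, m)
ws.cell(row=1, column=2).value = 3448000.456  # aiding point Y (UTM northing, m)

# Read the aiding-point coordinates back, row by row, into two lists,
# in the same order as the images to be processed.
Xg, Yg = [], []
for row in range(1, ws.max_row + 1):
    Xg.append(ws.cell(row=row, column=1).value)
    Yg.append(ws.cell(row=row, column=2).value)

print(Xg[0], Yg[0])
```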

Processing parameters

Table 1 presents an overview of the main parameters of the proposed algorithm. These parameters were selected on a trial-and-error basis: different values were tried empirically to determine the best values for the crack detection process. It is important to note that images differ in brightness due to the quality of the concrete surface or the lighting, which affects the appropriate values of these parameters (Avendaño 2020). As a result, parameters that are effective for one set of images may not be effective for another.
Table 1
Algorithm parameters

Minimum filter: kernel 5 \(\times\) 5
Gaussian filter: kernel 11 \(\times\) 11, Sigma X = 0, Sigma Y = 0
Canny detector: threshold 1 = 30, threshold 2 = 90
Closing filter: kernel 3 \(\times\) 3, iterations = 5
cv2.drawContours(): minimum arc length of contours = 350

Transformation of cracks pixels

The image coordinate system is considered a plane coordinate system (Gonzalez and Woods 2007; Rabah et al. 2013). The origin of an image, according to OpenCV, is the center of the upper-left image pixel (Fig. 2). In order to transform pixel coordinates to global coordinates, four factors must be determined (Ruffhead 2021): the scale of the image, the rotation angle of the image axes, and the two translation components of the image origin (Petrakis et al. 2023). The first image taken during the inspection is an image of a ruler. This image is used to compute the scale (K) by dividing the ruler length in meters by the corresponding length of the ruler in pixels (Eq. 9). The two length values are entered into the program through two fields on the GUI. The scale of the images equals \(0.00166\ m/pixel\). Pressing the (Compute scale) button computes the scale of the images and stores the value in the (K) set.
$$K=\frac{{L}_{actual}}{{L}_{pixel}}$$
(9)
The angle of the rotation between the pixel axes and global axes equals zero as the north direction is applied to the Y-axis of the image. Shifting values at X and Y directions are defined as (Eq. 10 and Eq. 11):
$$C={X}_{g}-K{X}_{p}$$
(10)
$$D={Y}_{g}+K{Y}_{p}$$
(11)
where \({X}_{g}\) and \({Y}_{g}\) are the global coordinates of the aiding points, \({X}_{p}\) and \({Y}_{p}\) are the pixel coordinates of the aiding point, \(K\) is the scale of the image, \(C\) is the shifting value at x-direction, and \(D\) is the shifting value at y-direction.
The coordinates of the four corners of the bounding boxes are transformed to global coordinates using Eq. 12 and Eq. 13. The first point is the top-left corner of the box, the second is the top-right corner, the third is the bottom-right corner, and the fourth is the bottom-left corner. Each crack is assumed to be the diagonal of its bounding box (Eq. 14), and the two sides of the bounding box are computed with the same equation. The area affected by the crack (A) is assumed to be the bounding box's area (Eq. 15).
$${X}_{cg}=K{X}_{cp}+C$$
(12)
$${Y}_{cg}=-K{Y}_{cp}+D$$
(13)
$$L=\sqrt{{(X}_{cg1}-{X}_{cg2}{)}^{2}+{({Y}_{cg1}-{Y}_{cg2})}^{2}}$$
(14)
$$A={L}_{1}*{L}_{2}$$
(15)
where \({X}_{cg}\) and \({Y}_{cg}\) are the global coordinates of the bounding boxes, \({X}_{cp}\) and \({Y}_{cp}\) are the pixel coordinates of the bounding boxes, and \({L}_{1}\),\({L}_{2}\) are the lengths of the two sides of the bounding box.
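Equations 10-14 can be collected into a short script (a sketch; the aiding-point UTM coordinates below are illustrative values, and the image-centre pixel (576, 432) corresponds to a resized 1152 \(\times\) 864 image):

```python
import math

# Image scale computed from the ruler image (Eq. 9): 0.00166 m/pixel.
K = 0.00166

def pixel_to_global(xp, yp, xg_aid, yg_aid, xp_aid, yp_aid, k=K):
    """Eqs. 10-13: compute the shift values from the aiding point,
    then transform a pixel coordinate to global (UTM) coordinates."""
    C = xg_aid - k * xp_aid           # Eq. 10, x-direction shift
    D = yg_aid + k * yp_aid           # Eq. 11, y-direction shift
    return k * xp + C, -k * yp + D    # Eqs. 12-13 (pixel y grows downward)

def crack_length(p1, p2):
    """Eq. 14: Euclidean distance between two transformed box corners.
    Eq. 15 then takes A = L1 * L2 for the two sides of the box."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

# Illustrative aiding point: the image-centre pixel (576, 432) with
# assumed UTM coordinates.
xg_aid, yg_aid = 785000.0, 3448000.0
x, y = pixel_to_global(576, 432, xg_aid, yg_aid, 576, 432)
print(x, y)  # transforming the aiding point returns its own global coordinates
```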
All the outputs of the previous equations are inserted into an Excel sheet using the function cell().value for every image. The (Do Math) button is bound to a user-defined function that applies all the mentioned equations in order to transform the coordinates.

Experiment and results

The methodology was applied at the yard of the weighing scale number 5 exit, located at EL-Dekheila Port, Alexandria, Egypt. The yard has an area of 1741 \({{\text{m}}}^{2}\) (Fig. 3). The majority of the slabs have dimensions of 3.5 \(\times\) 3.5 m, and each slab was covered by three images.
The experimental tests have been carried out with 339 images. All images were resized to 25% of their original size in order to facilitate image processing. All images have a resolution of 1152 \(\times\) 864 pixels after resizing. The spatial information of the aiding points was recorded using the same mobile phone during the image acquisition process.

Cracks detection

The images went through the proposed crack detection methodology with the aforementioned parameters. Figure 4 shows an example of the crack detection process for one cracked image. In this figure, image (a) is the grayscale image; image (b) shows the result of applying the minimum filter to image (a); image (c) shows the result of applying the Gaussian blur filter to image (b); image (d) is the result of applying the edge detector to image (c); image (e) is the result of applying the morphological closing process to image (d); and image (f) is the final result after using the function cv2.drawContours() on image (e). The program can detect various distresses such as cracks, pop-outs, and punch-outs (Fig. 5).
Confusion matrices have been used as a source of data for many evaluation metrics for crack detection algorithms (Table 2) (Ai et al. 2023). The matrix represents the numbers of correctly and incorrectly classified crack detections (Avendaño 2020). It contains two columns and two rows: each column represents a predicted class, and each row represents an actual class. The upper-left cell is the number of detections that contain cracks and are successfully labeled as cracks, reported as True Positives (TP). The upper-right cell is the number of detections that contain cracks but are misclassified as uncracked, reported as False Negatives (FN). The bottom-left cell is the number of detections that were mislabeled as cracks, reported as False Positives (FP). The bottom-right cell is the number of detections correctly classified as uncracked, reported as True Negatives (TN). Figure 6 shows the confusion matrix obtained from the analysis of the detections for the images.
Table 2
Typical confusion matrix of a crack detection algorithm (Ai et al. 2023)

Total = P + N        | Predicted crack (PP) | Predicted non-crack (PN)
Actual crack (P)     | TP                   | FN
Actual non-crack (N) | FP                   | TN
The data extracted from the confusion matrix were used to assess the performance of the proposed algorithm, which was evaluated using the standard performance metrics of accuracy, precision, and recall. Equations 16–18 show how accuracy, recall, and precision are computed (Ai et al. 2023; Ghosh 2022).
$$Accuracy= \frac{TP+TN}{TP+FP+TN+FN}$$
(16)
$$Recall=\frac{TP}{TP+FN}$$
(17)
$$Precision =\frac{TP}{TP+FP}$$
(18)
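Equations 16-18 follow directly from the four confusion-matrix counts. The counts below are illustrative only, since the paper does not report its raw TP/FP/FN/TN values:

```python
def metrics(tp, fp, fn, tn):
    """Eqs. 16-18: accuracy, recall, and precision from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    return accuracy, recall, precision

# Illustrative counts (not the study's raw numbers, which are not reported).
acc, rec, prec = metrics(tp=80, fp=60, fn=1, tn=59)
print(f"accuracy={acc:.2%} recall={rec:.2%} precision={prec:.2%}")
```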
The precision, recall, and accuracy obtained from the proposed algorithm are shown in Table 3.
Table 3
Proposed algorithm results

                   | Precision (%) | Recall (%) | Accuracy (%)
Proposed algorithm | 57.00         | 98.81      | 65.22
According to Table 3, the recall metric shows that the proposed algorithm rarely misclassifies cracked regions as uncracked. On the other hand, the precision value indicates that the algorithm misclassified many uncracked regions as cracked: it detects stains and concrete joints as cracked regions because these objects have intensities similar to those of cracks.
Table 4
Length results for images without crack-like objects

Picture Number | Actual Crack Length (m) | Digital Crack Length (m) | Error (%)
2              | 1.32                    | 0.96                     | -27.27
8              | 1.14                    | 1.29                     | 13.16
9              | 2.10                    | 2.68                     | 27.62
13             | 2.21                    | 2.38                     | 7.69
16             | 1.14                    | 1.51                     | 32.46
17             | 1.20                    | 1.51                     | 25.83
25             | 0.80                    | 0.96                     | 20.00
34             | 1.00                    | 1.05                     | 5.00
35             | 1.20                    | 1.29                     | 7.50

Coordinate transformation

Figure 7 shows the output of the program using the proposed methodology. For every image, the lengths and areas of cracks were measured manually on-site in order to perform a quantitative comparison with the lengths and areas obtained from the proposed program. The error percentage was determined using Eq. 19. Tables 4 and 5 show the crack length and area results for images without crack-like objects, respectively, and Tables 6 and 7 show the corresponding results for images with crack-like objects.
$$\text{Error} = \frac{\text{Digital Crack Length or area}-\text{Actual Crack Length or area}}{\text{Actual Crack Length or area}}*100$$
(19)
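Eq. 19 applied to rows of Table 4 reproduces the tabulated errors, for example picture 8 (actual 1.14 m, digital 1.29 m):

```python
def error_percent(digital, actual):
    """Eq. 19: signed percentage error of the program's measurement."""
    return (digital - actual) / actual * 100

# Picture 8 in Table 4: actual 1.14 m, digital 1.29 m.
print(round(error_percent(1.29, 1.14), 2))  # 13.16
```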
Table 5
Area results for images without crack-like objects

Picture Number | Actual Crack Area (\({{\text{m}}}^{2}\)) | Digital Crack Area (\({{\text{m}}}^{2}\)) | Error (%)
4              | 0.30                                     | 0.26                                      | -13.33
12             | 0.33                                     | 0.38                                      | 15.15
26             | 0.60                                     | 0.62                                      | 3.33
105            | 0.40                                     | 0.48                                      | 20.00
Table 6
Length results for images with crack-like objects

Picture Number | Actual Crack Length (m) | Digital Crack Length (m) | Error (%)
40             | 1.05                    | 1.43                     | 36.19
43             | 0.65                    | 1.29                     | 98.46
92             | 0.80                    | 1.07                     | 33.75
121            | 3.36                    | 4.48                     | 33.33
Table 7
Area results for images with crack-like objects

Picture Number | Actual Crack Area (\({{\text{m}}}^{2}\)) | Digital Crack Area (\({{\text{m}}}^{2}\)) | Error (%)
1              | 0.55                                     | 0.89                                      | 61.82
107            | 1.33                                     | 2.18                                      | 63.91

Conclusions

In this paper, an automated computer-vision-based program for crack inspection has been developed. The program uses smartphone images of the pavement together with GNSS data. The images were processed by the proposed image processing algorithm, which detects cracks and their locations; the program then used the GNSS data and crack locations to compute crack lengths and global coordinates. The program was evaluated in a field study performed on rigid pavement. The results show that the proposed crack detection algorithm can also detect other types of distress, such as pop-outs and punch-outs. The crack quantification process shows that the program can compute crack lengths and areas with small errors, especially when the input images contain no stains or noise. The image processing algorithm's precision, recall, and accuracy values are 57.00%, 98.81%, and 65.22%, respectively. The recall value shows that the algorithm rarely misses cracks; however, the precision and accuracy values expose the program's main weakness, which is detecting crack-like objects such as stains and joints. Compared to traditional inspection, the program offers savings in effort and cost while maintaining inspection quality.
To improve the practicality and accuracy of the program, deep learning will be combined with the proposed image processing algorithm and deployed as a mobile application.

Declarations

Conflicts of interest

The authors have no conflicts of interest to declare that are relevant to the content of this article and they have no relevant financial or non-financial interests to disclose.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References
Gonzalez RC, Woods RE (2007) Digital image processing, 3rd edn. Prentice-Hall Inc, USA
Maling DH (2013) Coordinate systems and map projections. Elsevier
Pukanska K (2013) 3D visualisation of cultural heritage by using laser scanning and digital photogrammetry. VSB-Technical University of Ostrava, Czech Republic
Van Rossum G, Drake Jr FL (1995) Python tutorial. Centrum Wiskunde & Informatica, The Netherlands
Metadata
Title: Positioning and detection of rigid pavement cracks using GNSS data and image processing
Authors: Ahmed A. Nasrallah, Mohamed A. Abdelfatah, Mohamed I. E. Attia, Gamal S. El-Fiky
Publication date: 02.02.2024
Publisher: Springer Berlin Heidelberg
Published in: Earth Science Informatics / Issue 2/2024
Print ISSN: 1865-0473
Electronic ISSN: 1865-0481
DOI: https://doi.org/10.1007/s12145-024-01228-3
