
Open Access 21-01-2020

Detection of Damaged Stop Lines on Public Roads by Focusing on Piece Distribution of Paired Edges

Authors: Takuma Ito, Kyoichi Tohriyama, Minoru Kamata

Published in: International Journal of Intelligent Transportation Systems Research | Issue 1/2021

Abstract

In this study, a system for detecting stop lines with damaged paint on roads is developed to enhance a digital map localization system. Existing methods for detecting stop lines focus on features such as straight edges and adequate size; however, these methods are not well suited to rural areas, where the paint of stop lines is sometimes damaged. In addition, lane markings, on which other existing methods rely, are often absent on actual rural roads. Thus, to enable the detection of stop lines in the absence of the conditions required by the abovementioned features, we focus on pieces of faint features of damaged stop lines. First, we extract the positive and negative edges from an inverse perspective mapped image of the camera input by using a Sobel filter. Next, we verify the pairs of positive and negative edges in the trinarized edge image by confirming the width between both edges. Subsequently, we detect stop-line candidates by analyzing the distribution of the line segments extracted by the Hough transformation. In addition, we combine the estimated driving distance and the detection results of preceding vehicles with the proposed system to prevent false detections caused by bicycle crossing lanes and preceding vehicles. Damaged stop lines are eventually detected using these processes. To evaluate the performance of the proposed method, we collected driving data on actual public roads. The results of offline evaluations confirm that the proposed system can detect all target stop lines without any false detections, at a practical computational speed. The findings of this study are expected to contribute to the realization of intelligent vehicles on community roads.

1 Introduction

Recent advanced driver assistance systems (ADAS) such as warning and vehicle control systems and automated driving technologies require precise location information on a digital map. Stop lines are a beneficial clue for intelligent vehicles to localize on the digital map [1, 2], and thus, their detection is important. To this end, several methods of stop-line detection have been developed. For example, Marita et al. [3] developed a method of detecting stop lines by using a Hough transformation; to discriminate stop lines from other road marks, model-based reasoning was employed, which focused on the size and proportions of the targets. Li et al. [4] proposed a stop-line detection method based on the detection of other road markings, in which the detection results of crosswalks limited the region of interest (ROI) to detect the stop lines. Subsequently, the stop lines were extracted via horizontal mathematical morphological dilation and erosion. Seo et al. [5] proposed a stop-line detection method based on the results of lateral and longitudinal lane-marking detection. Similarly, Suhr et al. [6] developed a stop-line detection method based on the detection of longitudinal lane markings and their connectivity. Lee et al. [7] proposed a method of detection of stop lines that employed Canny edge detection. Lin et al. [8] focused on the corner shape of a stop line connected with lane markings and developed a stop-line detection method based on the detection of corner regions by using a deep-learning neural network. Wang et al. [9] proposed a deep convolutional-neural-network-based model to detect stop lines as well as traffic lights and crosswalks.
In general, determining characteristic features of the targets is important for detecting road marks. For example, when detecting crosswalks [10–14], the rectangular shape of a white band with clear side edges and the regularly repeating distribution of the white bands are typical features. Because these features are relatively complex, they are key for detecting crosswalks. However, because the shape of a stop line is simple in comparison to that of other road markings, false detection of stop lines can easily occur. For instance, Wang et al. [9] reported that the detection accuracy of crosswalks and traffic lights when using their proposed method was relatively high (90%); however, the detection accuracy of stop lines was relatively low (60%). Thus, countermeasures for preventing false detections are necessary.
The existing methods for preventing false detections can be classified into those involving the assumption of ideal shapes and those involving the assumption of positional relation with other road markings. In terms of the former assumption, for example, in the approach of Marita et al. [3], a model of the ideal shape of stop lines was assumed. Similarly, in the method of Lee et al. [7], a clear horizontal edge of stop lines was assumed. For well-maintained stop lines, as shown in Fig. 1, features pertaining to the ideal shape can be effectively employed for detection. However, road markings, including stop lines, on actual public roads are sometimes not well maintained, as shown in Fig. 2. In such situations, because the stop lines are often divided into pieces, their shape differs from that of ideal stop lines. Thus, addressing this problem is necessary to enable the detection of stop lines on actual public roads.
In terms of the latter assumption, for example, in the method of Li et al. [4], a relative position to the crosswalk was assumed. The methods of Seo et al. [5], Suhr et al. [6], and Lin et al. [8] assumed that stop lines are connected with lane markings. Although these assumptions are effective on roads in urban areas, community roads often have neither crosswalks nor lane markings. Thus, detection methods that do not rely on the positional relation with other road markings are required.
Considering this background, in this study, we aimed to develop a new system for the detection of damaged stop lines on actual public roads that does not involve the assumptions of ideal shapes or positional relations. The detection of damaged stop lines without such assumptions is an essential aspect to enable localization on community roads. Thus, the findings of this study are expected to contribute to the realization of intelligent vehicles on community roads.
The remainder of this paper is organized as follows. The system design is described in Section 2. The offline evaluations of the proposed system using actual driving data on public roads are described in Section 3. Finally, the conclusions and scope of future work are presented in Section 4.

2 System Design

2.1 Conceptual Design

In our previous study [1], we developed a system for localization on a digital map, in which an intelligent vehicle detected stop lines and guardrails as landmarks for localization. However, the experiments in that study were conducted on a private test course that had only well-maintained stop lines. The implemented function for detecting stop lines was elementary, and it could not detect damaged stop lines on actual public roads. Thus, in this study, we aimed to develop an improved stop-line detection system as an enhanced component of our localization system. Figure 3 shows a conceptual schematic of a localization system based on stop-line detection that is supported by a digital map. In this study, we aimed to develop the stop-line detection system as the first part, on the assumption that support from a digital map is provided as the later part.
When designing stop-line detection systems, it is necessary to reduce the numbers of missed and false detections and increase the number of positive detections. In this study, a positive detection means that the system outputs a final detection result through the detection process when a stop line actually exists, and a missed detection means that the system does not output a final detection result even though a stop line actually exists. In contrast, a false detection means that the system outputs a final detection result even though no stop line exists. Mistaking other road markings, such as crosswalks, for stop lines is an example of false detection. In general, a tradeoff between missed and false detections exists. For example, a long horizontal straight line, which can be extracted by edge detection and Hough transformation, is one of the characteristics of stop lines. Thus, the assumed line length parameter used in the Hough transformation plays a key role in this tradeoff. If we assign a large value to this parameter, which indicates a long line, the number of missed detections increases because damaged stop lines may be divided into pieces. However, if we assign a small value, which indicates a short line, the number of false detections increases because the system also accepts similar edges from other road markings and patchy surfaces of repaired roads. Figures 4 and 5 show examples of patchy surfaces of repaired roads. Thus, determining a suitable balance between false and missed detections according to the purpose and usage is important. In this regard, our system is intended to be used as a basic component for localization, and a digital map can exclude false detections wherever stop-line data are not registered. Therefore, in this study, reducing the number of missed detections is more important than reducing the number of false detections, and as many stop lines as possible must be detected to increase the chance of localization. Specifically, we focus on the pieces of faint features of stop lines to reduce the number of missed detections, although focusing on these features may cause false detections.
However, although the digital map can exclude a false detection that occurs far from a true stop line, it cannot exclude one that occurs near a true stop line. Because other road markings are usually present around stop lines, mistaking them for the true stop line must be avoided. For example, in Japan, some crossings have bicycle crossing lanes beside a crosswalk, although the Japanese National Police Agency recently began removing them. Figure 6 shows an example of a bicycle crossing lane near a stop line. Although the bicycle crossing lane is represented as a dashed line in some countries, it is a continuous line in Japan. Because the shape characteristics of bicycle crossing lanes are almost the same as those of stop lines, countermeasures are necessary to prevent false detections. One approach to this problem is to exclude candidates within a certain area after the first detection of a stop line, because bicycle crossing lanes, if they exist, are located behind the stop lines.
In addition, near a true stop line, the bumper of a preceding vehicle is another object that the detection system may mistake for a stop line, because the system aims to detect stop lines by focusing on pieces of faint features. Our approach to this problem is to exclude such false detections by using the camera-based detection results of the preceding vehicle. Because recent ADAS can often detect a preceding vehicle, we believe that this information is a practical supplement for detecting stop lines.

2.2 System Outline

A stop-line detection system was developed using the basic concept discussed in Section 2.1. Figure 7 shows a flowchart of the stop-line detection algorithm. The process flow is roughly divided into four steps, from the preprocess to vehicle removal, as shown on the left side of Fig. 7. In the main part of the detection, the algorithm focuses on the piece distribution of the paired horizontal edges, which consist of positive and negative edges. Figure 8 shows a conceptual schematic of positive and negative edges. In this study, positive edges represent the boundary from dark pixels on the upper side to bright pixels on the lower side. In contrast, negative edges represent the boundary from bright pixels on the upper side to dark pixels on the lower side. Thus, the upper sides of road marks exhibit positive edges, whereas the lower sides exhibit negative edges. The algorithm detects the stop lines through several steps that extract these features. Although this system contains some components used in existing approaches, such as inverse perspective mapping, horizontal edge extraction, and Hough transformation, some new components were introduced for excluding false detections, namely, edge trinarization, edge pair verification, positional confirmation, and preceding vehicle removal. The details of each step are described in the following sections. In the implementation, we used the Robot Operating System (ROS) and OpenCV libraries.

2.3 Preprocess

The first step is the preprocess, which involves obtaining the camera input and performing inverse perspective mapping, as shown in Fig. 7. Figure 2 shows an example of a camera input image. The proposed system uses a monocular camera (GS3-U3-15S5C-C, produced by FLIR) installed inside the windshield of an experimental vehicle. Because the system aims to detect various traffic elements such as road markings, other traffic participants, and traffic lights using only a single camera, a color camera with a resolution of 1280 × 960 is used. However, because the subsequent processes need only grayscale information, we convert the color input image to a grayscale image. Then, we perform inverse perspective mapping and resize the image to 640 × 480. Figure 9 shows an example of the inverse perspective mapped image.
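For concreteness, the following is a minimal sketch of this preprocessing step in Python with OpenCV, assuming a homography obtained from camera calibration; the source and destination points below are illustrative placeholders, not the calibration used in this study.

```python
import cv2
import numpy as np

# Placeholder road-plane quadrilateral in the 1280x960 camera image and its
# destination rectangle in the 640x480 bird's-eye view (assumed values; the
# real mapping comes from the camera calibration).
SRC_PTS = np.float32([[420, 600], [860, 600], [1180, 940], [100, 940]])
DST_PTS = np.float32([[160, 0], [480, 0], [480, 480], [160, 480]])
H = cv2.getPerspectiveTransform(SRC_PTS, DST_PTS)

def preprocess(frame_bgr):
    """Convert the color input to grayscale and apply inverse perspective mapping."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    ipm = cv2.warpPerspective(gray, H, (640, 480))  # 640x480 bird's-eye view
    return ipm
```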

2.4 Stop-Line Candidate Detection

The second step starts with horizontal edge extraction. A Sobel filter is used to extract the horizontal edges from the inverse perspective mapped images. Figure 10 shows an example of the horizontal edge images. The grey areas do not contain any edges. In contrast, the black and white lines represent horizontal edges. Specifically, the black lines represent negative edges whereas the white lines represent positive edges.
To determine the positive and negative edges more clearly, we trinarize the horizontal edge images. Figure 11 shows a conceptual schematic of the trinarization of an edge image. The leftmost image in Fig. 11 shows a part of the horizontal edge image, and the red line indicates the cross-section line. The left graph shows the scaled edge value along the cross-section line; negative and positive values correspond to negative and positive edges, respectively. We then trinarize the local extrema whose absolute values are larger than a certain threshold. The right graph shows the result of the trinarization process, and the rightmost image represents the corresponding part of the trinarized image. The trinarization process is repeated for all vertical cross-section lines over the image. Figure 12 shows an example of a trinarized horizontal edge image, in which the black and white lines indicate the trinarized negative and positive edges, respectively.
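A rough sketch of the horizontal edge extraction and trinarization is shown below. The paper keeps only the local extrema along each vertical cross-section; the sketch simplifies this to a plain magnitude threshold, and the threshold value is an assumption.

```python
import cv2
import numpy as np

EDGE_THRESHOLD = 40.0  # assumed minimum edge magnitude (not the paper's value)

def trinarize_horizontal_edges(ipm_gray):
    """Return an int8 image: +1 for positive edges (dark above, bright below),
    -1 for negative edges (bright above, dark below), 0 elsewhere."""
    # dx=0, dy=1 responds to vertical intensity changes, i.e., to horizontal edges.
    sobel_y = cv2.Sobel(ipm_gray, cv2.CV_32F, 0, 1, ksize=3)
    tri = np.zeros(sobel_y.shape, dtype=np.int8)
    tri[sobel_y > EDGE_THRESHOLD] = 1    # positive edge: upper side of a bright mark
    tri[sobel_y < -EDGE_THRESHOLD] = -1  # negative edge: lower side of a bright mark
    return tri
```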
Although the trinarized horizontal edge image contains the edges of actual stop lines, it also contains the edges of other road marks such as parts of crosswalks. To exclude the edges of non-targets, the positional relation between the positive and negative edges is observed. For all vertical cross-section lines, the pixels are scanned from the bottom side. If a pixel of negative edges is located under a pixel of positive edges within a certain distance, the negative pixel is considered to be a part of the verified pairs.
Because we assume that the paint of stop lines on the road is damaged, the width between both edges may not be ideal. Thus, we use a relatively relaxed threshold value for the width confirmation, although this threshold cannot exclude all non-target edges. This verification process is repeated for all vertical cross-section lines over the image. Figure 13 shows an example of the verified paired edges, in which the white line indicates the verified paired edges. As shown in Fig. 13, the edges of the stop lines are adequately extracted. However, because damaged parts of a crosswalk can happen to be nearly the same size as actual stop lines, the image of verified paired edges in Fig. 13 still includes the edges of non-targets.
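The edge pair verification can be sketched as follows; the width bounds are illustrative assumptions, and the column scan is written top-down for brevity rather than from the bottom side as in the actual system.

```python
import numpy as np

MIN_WIDTH_PX = 4    # assumed lower bound on the stop-line width in the IPM image
MAX_WIDTH_PX = 30   # assumed relaxed upper bound, allowing for damaged paint

def verify_edge_pairs(tri):
    """Mark negative-edge pixels that lie below a positive edge in the same
    column within a plausible line width; returns a binary (0/255) image."""
    h, w = tri.shape
    paired = np.zeros((h, w), dtype=np.uint8)
    for x in range(w):
        col = tri[:, x]
        pos_rows = np.flatnonzero(col == 1)
        neg_rows = np.flatnonzero(col == -1)
        for y in pos_rows:
            deeper = neg_rows[neg_rows > y]  # negative edges below this positive edge
            if deeper.size and MIN_WIDTH_PX <= deeper[0] - y <= MAX_WIDTH_PX:
                paired[deeper[0], x] = 255   # keep the verified pair
    return paired
```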
To exclude the remaining edges of non-targets, line segments with lengths larger than a certain threshold are selected by using a probabilistic Hough transformation. Figure 14 shows an example of the extracted line segments, which indicates that the edges of non-targets are adequately removed. Figure 15 shows an example of the sequentially processed images. Next, we count the number of pixels of the extracted line segments for each height and determine the peak height, that is, the height containing the maximum number of pixels. Subsequently, the line segments distributed around the peak height are grouped, and both ends of the stop line are determined from the leftmost and rightmost pixels in the group. If the number of extracted pixels around the peak height is larger than a certain value, and if the length of the candidate is larger than a certain value, the candidate is considered valid. Using the results obtained from these processes, we confirm the shape of a stop-line candidate for a certain frame. In addition, to reduce the number of unexpected false detections, we track the detected results over a few consecutive frames. A stop-line candidate is determined using these processes. Figure 16 shows an example of the final detection result; the green line indicates the detection result that the proposed system automatically draws in the camera input image.
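The candidate extraction could look roughly like the sketch below: short segments from a probabilistic Hough transform are histogrammed by image row, and the group around the peak row yields the candidate ends. Only the 5-pixel minimum line length is taken from the text (Section 3.3); the other thresholds are assumptions, and the per-row pixel count is approximated here by the segment endpoints.

```python
import cv2
import numpy as np

MIN_LINE_LENGTH   = 5    # short segments, as used by the proposed method
PEAK_BAND_PX      = 5    # assumed tolerance around the peak row
MIN_PEAK_PIXELS   = 60   # assumed minimum pixel count around the peak
MIN_CANDIDATE_LEN = 80   # assumed minimum end-to-end candidate length

def extract_candidate(paired):
    """Return ((x_left, y), (x_right, y)) for the stop-line candidate, or None."""
    segments = cv2.HoughLinesP(paired, rho=1, theta=np.pi / 180, threshold=20,
                               minLineLength=MIN_LINE_LENGTH, maxLineGap=3)
    if segments is None:
        return None
    ends = segments[:, 0]                      # each row: x1, y1, x2, y2
    ys = np.concatenate([ends[:, 1], ends[:, 3]])
    xs = np.concatenate([ends[:, 0], ends[:, 2]])
    peak_y = np.bincount(ys).argmax()          # most populated image row
    near_peak = np.abs(ys - peak_y) <= PEAK_BAND_PX
    if near_peak.sum() < MIN_PEAK_PIXELS:
        return None
    left, right = xs[near_peak].min(), xs[near_peak].max()
    if right - left < MIN_CANDIDATE_LEN:
        return None
    return (int(left), int(peak_y)), (int(right), int(peak_y))
```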

2.5 Removal of Bicycle Crossing Lane

The third process involves the removal of the bicycle crossing lane. As discussed in Section 2.1, some large crossings have bicycle crossing lanes. Because the shape characteristics of these lanes are similar to those of stop lines, they cannot be discriminated in the main detection process. To overcome this limitation, when the vehicle approaches a stop line, we log the estimated driving distance calculated from the time-series velocity obtained from the Controller Area Network (CAN). Then, to prevent false detections, the output of the main detection process is ignored for a certain distance from the detected stop line. Figure 17 shows the conceptual schematic of the ignored area for preventing false detections pertaining to the bicycle crossing lane. The distance over which candidates are ignored was set to 35 m, considering the size of the intersections. Although we used a constant value in this study, in future studies, it is desirable for this parameter to be variable by referring to the digital map.
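A minimal sketch of this suppression is given below, assuming the estimated driving distance is integrated from the CAN velocity elsewhere; only the 35 m value is taken from the text.

```python
IGNORE_DISTANCE_M = 35.0  # distance over which candidates are ignored (from the text)

class StopLineGate:
    """Suppress candidates that appear within a fixed distance of the last
    detection, e.g., a bicycle crossing lane located behind the stop line."""

    def __init__(self):
        self.last_detection_distance = None

    def accept(self, driven_distance_m):
        """Return True if a candidate at the current driving distance should be kept."""
        last = self.last_detection_distance
        if last is not None and driven_distance_m - last < IGNORE_DISTANCE_M:
            return False  # likely the bicycle crossing lane behind the detected stop line
        self.last_detection_distance = driven_distance_m
        return True
```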

2.6 Vehicle Removal

The fourth process involves the removal of vehicles. To detect a preceding vehicle, we use YOLO [15], which is open-source object detection software. Using the camera input image, YOLO outputs a bounding box that contains the area of the preceding vehicle. If the detected stop-line candidate passes through the bounding box, we consider the candidate to be a false detection, and the final detection result is not output.
Figure 18 shows an example of vehicle removal. The left image is a typical example in which the main detection process easily makes a mistake because the color of the bumper is relatively bright, and the edge is relatively straight. The blue line in the right image shows the temporary false detection result, and the red rectangle shows the bounding box detected by the vehicle detection module. In this case, because the line candidate is inside the bounding box, the result is considered a false detection.
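A simplified sketch of this check is given below; the bounding-box format and the assumption that the candidate endpoints are expressed in camera-image coordinates (after projecting back from the bird's-eye view) are ours, not the paper's.

```python
def passes_through_vehicle(candidate_endpoints, vehicle_boxes):
    """Return True if the horizontal stop-line candidate crosses any detected
    vehicle bounding box; such candidates are treated as false detections.

    candidate_endpoints: ((x_left, y), (x_right, y)) in camera-image coordinates.
    vehicle_boxes: iterable of (x_min, y_min, x_max, y_max) boxes from the detector.
    """
    (x_left, y), (x_right, _) = candidate_endpoints
    for x_min, y_min, x_max, y_max in vehicle_boxes:
        if y_min <= y <= y_max and x_left < x_max and x_right > x_min:
            return True
    return False
```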

3 Evaluation

3.1 Evaluation Data

In some existing studies on computer vision, open benchmark data sets were used for evaluating the proposed methods. However, such data sets contain driving data from European countries, where some road markings are different from those in Japan. Because the proposed method was developed considering the traffic situation in Japan, such benchmark data sets could not be used. Thus, to evaluate the performance of the proposed system, we collected driving data on actual public roads, which contained front monocular camera images at 15 Hz and the estimated driving distance at approximately 100 Hz. As a first step, the roads near the university were set as the evaluation course. Specifically, two evaluation courses were prepared: a clockwise course and a counterclockwise course. Figure 19 shows an aerial photographic map of the evaluation course. This figure is based on a map image published by the Geospatial Information Authority of Japan [16]. Figures 1, 2, 4, 5, and 6 correspond to these courses. Because these courses consist of municipal roads, the road surfaces were not maintained to the standard of high-grade roads such as highways. The total length of the course was approximately 3 km.
Although this course contained many stop lines, the evaluation targets were 12 stop lines that the experimental vehicle drove straight across. Therefore, for example, the stop lines on both sides of the crossings were not evaluation targets. We collected the image data for this course using a monocular camera along with the estimated driving distance data. The sensor data for both courses were collected three times to confirm the reproducibility of detecting stop lines to a certain degree, although the number of trials was limited. The weather was slightly cloudy; thus, only a few shadows of surrounding road structures were present. After the collection, we evaluated the offline performance of the proposed system.

3.2 Evaluation of Computational Speed

We evaluated the computational speed of the stop-line detection by measuring the average calculation time over 50 frames. The detection process of stop lines, which does not include the detection process of a preceding vehicle, took 0.021 s (47.6 Hz) on average when using a desktop PC with a 3.0-GHz CPU with hyperthreading enabled. Because the frame rate of the camera in the proposed system was 15 fps, real-time detection of a stop line could be realized. In this manner, the processing speed of the proposed system was confirmed to be practical.

3.3 Comparison between Detection Results of Proposed Method and Conventional Methods

To evaluate the detection performance of the proposed method, we compared the detection results obtained using the proposed method with those obtained using conventional methods. The characteristic aspect of the proposed system was the reduction of missed and false detections by employing the processes of edge trinarization, edge pair verification, positional confirmation, and preceding vehicle removal. Thus, for comparison, conventional methods that did not include the abovementioned processes were considered. Specifically, the conventional methods included only the processes of inverse perspective mapping, horizontal edge extraction of negative edges, and Hough transformation. In addition, to discuss the effect of assuming an ideal shape of stop lines, two settings were considered for the comparison methods, namely, conventional methods 1 and 2. In the Hough transformation, the minimum line length parameter was considered. In our inverse perspective mapped images, the width of a single lane was larger than 110 pixels. Because our proposed method aimed to select as many small line segments as possible, its minimum line length was set to 5 pixels, which represents a substantially short length. For conventional method 1, the same value as that of the proposed system was assigned. For conventional method 2, a value of 66 pixels was assigned, which corresponds to 0.6 times the minimum lane width. Thus, conventional method 2 assumed a relatively ideal shape of stop lines, whereas conventional method 1 did not involve such an assumption.
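As a hypothetical illustration of the three settings, the configuration could be summarized as follows; only the minimum line lengths (5 and 66 pixels) come from the text, and the remaining Hough parameters are placeholders.

```python
# Hypothetical parameter sets for the comparison; only the minLineLength values
# are taken from the text, the other values are illustrative placeholders.
HOUGH_SETTINGS = {
    "proposed":              {"minLineLength": 5,  "maxLineGap": 3, "threshold": 20},
    "conventional_method_1": {"minLineLength": 5,  "maxLineGap": 3, "threshold": 20},
    "conventional_method_2": {"minLineLength": 66, "maxLineGap": 3, "threshold": 20},
}
```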
Table 1 presents a summary of the detection results. Three types of detection results were considered: positive, missed, and false. To analyze the false detections, we checked the objects that the systems mistook for stop lines and classified them into five categories: crosswalks, other road markings, patchy road surfaces, bicycle crossing lanes, and preceding vehicles. In terms of positive detection, the proposed method and conventional method 1 could detect all the stop lines, whereas conventional method 2 could detect only approximately half of them. Because the proposed method and conventional method 1 did not assume the ideal shape of stop lines, they could detect stop lines even in cases involving damaged paint. However, conventional method 1 generated many false detections; because it detected small edges without paired edge verification, it mistook crosswalks, other road markings, and patchy surfaces of repaired roads for stop lines. In this regard, the proposed method could prevent such false detections owing to the paired edge verification, whereas conventional method 2 could prevent them owing to its assumption of an ideal shape. Regarding the bicycle crossing lane, the proposed method could prevent false detections owing to the confirmation of the estimated driving distance, whereas both conventional methods led to false detections. In terms of the preceding vehicle, the proposed method could prevent false detections owing to the supplemental information obtained from the detection module of the preceding vehicle, whereas both conventional methods led to false detections. In summary, conventional method 1, which did not assume the ideal shape of stop lines and did not include additional modules for preventing false detections, caused many false detections, whereas conventional method 2, which assumed an ideal shape without additional modules, caused many missed detections. In contrast, the proposed method, which did not assume an ideal shape and included additional modules for preventing false detections, could prevent both missed and false detections. The following subsections present the details of some characteristic cases.
Table 1  Summary of detection results

                            Proposed method    Conventional method 1    Conventional method 2
  Course                    CW       CCW       CW        CCW            CW        CCW
  Positive detection        36/36    36/36     36/36     36/36          20/36     18/36
  Missed detection          0/36     0/36      0/36      0/36           16/36     18/36
  False detection
    Crosswalk               0        0         34        35             0         0
    Other road markings     0        0         45        12             0         0
    Patchy road surface     0        0         50        26             0         0
    Bicycle crossing lane   0        0         9         9              6         4
    Preceding vehicle       0        0         7         6              0         2

3.4 Example of Positive Detection

Figure 20 shows an example of a damaged stop line that the proposed system could detect correctly. Although the paint on the road was damaged, as shown in Fig. 20, the proposed system detected only the target stop line. Figure 21 shows the intermediate images of each process. Figures 21-A, 21-B, 21-C, and 21-D show the inverse perspective mapped image, trinarized horizontal edge image, verified paired edge image, and line segments extracted by the Hough transformation, respectively. As shown in Fig. 21-A, several road markings are present. In addition, because the paint of the crosswalk is relatively damaged, many positive and negative edges are extracted, as shown in Fig. 21-B. However, owing to the edge pair verification, a large portion of the extracted edges is removed, as shown in Fig. 21-C. Furthermore, the remaining small edges are also removed by the probabilistic Hough transformation, as shown in Fig. 21-D. In addition, the proposed system removed the remaining edges of non-targets by confirming the pixel distribution. As a result, the proposed system could output a suitable final detection result without false detections, as shown in Fig. 20.

3.5 Prevention of False Detection Pertaining to Bicycle Crossing Lane

Figures 22 and 23 show examples of a bicycle crossing lane near a stop line. Although the paint of both the stop line and the bicycle crossing lane exhibits certain damage, the main detection process detected both as stop-line candidates. Thus, our proposed system referred to the estimated driving distance data. First, when the vehicle approached the stop line, the proposed system logged the value of the estimated driving distance. For the situations shown in Figs. 22 and 23, the estimated driving distances were 3733.6 m and 3741.8 m, respectively. Because the distance from the first candidate of the stop line shown in Fig. 22 to the second candidate shown in Fig. 23 was smaller than the assigned threshold parameter, the proposed system rejected the second candidate and thus prevented a false detection.

3.6 Prevention of False Detection Pertaining to Preceding Vehicle

Figure 24 shows an example of the prevention of a false detection pertaining to a preceding vehicle. As in Fig. 18, the blue line represents the temporary false detection result, and the red rectangle shows the bounding box detected by the vehicle detection module. Figure 25 shows the intermediate images. As shown in Fig. 25-A, the rear part of the preceding vehicle in this case is similar to an actual stop line in terms of color and shape. Thus, the preceding processes could not eliminate the candidate, as shown in Figs. 25-B, 25-C, and 25-D, and our proposed system considered the line segments as a stop-line candidate. However, because the candidate is inside the bounding box of the detected preceding vehicle, our proposed system finally rejected this candidate and thus prevented a false detection.

4 Conclusions

In this study, we developed a detection system for damaged stop lines on actual public roads that does not assume an ideal shape or connectivity with lane markings. First, we focused on the positional relation between the positive and negative edges as a clue for obtaining the verified edge pairs. Next, we detected stop-line candidates by considering the distribution of the pixels of the line segments extracted by a Hough transformation. Furthermore, to prevent false detections pertaining to bicycle crossing lanes and preceding vehicles, we verified the detected candidates using the estimated driving distance and the detection results of the preceding vehicles. In addition, as an initial test, we evaluated the offline performance of the proposed system by using actual driving data collected on public roads. The results demonstrated that the proposed system could detect all the target stop lines without any false detections at a practical computational speed.
However, because the evaluation situations in this study were limited, further evaluation on a wider variety of roads is necessary. In addition, because the evaluation data were collected on a cloudy day, only a few shadows of surrounding road structures, which may adversely affect the detection results, were present. Thus, evaluation and further improvement regarding shadows on sunny days constitute desirable future work. Moreover, combining the proposed system with actual applications of intelligent vehicles, such as localization systems and ADAS, is also necessary.

Acknowledgments

This study was supported as a part of the research project “Autonomous Driving System to Enhance Safe and Secured Traffic Society for Elderly Drivers” granted by the Japan Science and Technology Agency (JST), S-Innovation (Strategic Promotion of Innovative Research and Development). The authors would like to thank the agency for providing financial support. The authors would also like to thank KPIT Technologies Ltd. for their contributions.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Literature
1. Ito, T., Mio, M., Tohriyama, K., Kamata, M.: Novel Map Platform Based on Primitive Elements of Traffic Environments for Automated Driving Technologies. International Journal of Automotive Engineering 7(4), 143–151 (2016)
2. Kim, D., Kim, B., Chung, T., Yi, K.: Lane-Level Localization Using an AVM Camera for an Automated Driving Vehicle in Urban Environments. IEEE/ASME Transactions on Mechatronics 22(1), 280–290 (2017)
3. Marita, T., Negru, M., Danescu, R., Nedevschi, S.: Stop-line detection and localization method for intersection scenarios. In: 2011 Intelligent Computer Communication and Processing, pp. 293–298 (2011)
4. Li, H., Feng, M., Wang, X.: Inverse perspective mapping based urban road markings detection. In: 2012 Cloud Computing and Intelligent Systems, pp. 1178–1182 (2012)
5. Seo, Y. W., Rajkumar, R.: A vision system for detecting and tracking of stop-lines. In: 2014 Intelligent Transportation Systems, pp. 1970–1975 (2014)
6. Suhr, J. K., Jung, H. G.: Fast symbolic road marking and stop-line detection for vehicle localization. In: 2015 Intelligent Vehicles Symposium, pp. 186–191 (2015)
7. Lee, B. H., Im, S. H., Heo, M. B., Jee, G. I.: Curve modeled lane and stop line detection based GPS error estimation filter. In: 2015 Intelligent Vehicles Symposium, pp. 406–411 (2015)
8. Lin, G. T., Santoso, P. S., Lin, C. T., Tsai, C. C., Guo, J. I.: Stop line detection and distance measurement for road intersection based on deep learning neural network. In: 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, pp. 692–695 (2017)
9. Wang, Q., Liu, Y., Liu, J., Gu, Y., Kamijo, S.: Critical Areas Detection and Vehicle Speed Estimation System Towards Intersection-Related Driving Behavior Analysis. In: 2018 IEEE International Conference on Consumer Electronics, pp. 1–6 (2018)
10. Suzuki, S., Raksincharoensak, P., Shimizu, I., Nagai, M., Adomat, R.: Sensor Fusion-Based Pedestrian Collision Warning System with Crosswalk Detection. In: 2010 Intelligent Vehicles Symposium, pp. 355–360 (2010)
11. Sichelschmidt, S., Haselhoff, A., Kummert, A., Roehder, M., Elias, B., Berns, K.: Pedestrian Crossing Detecting as a Part of an Urban Pedestrian Safety System. In: 2010 Intelligent Vehicles Symposium, pp. 840–844 (2010)
12. Haselhoff, A., Kummert, A.: On Visual Crosswalk Detection for Driver Assistance Systems. In: 2010 Intelligent Vehicles Symposium, pp. 883–888 (2010)
13. Foucher, P., Sebsadji, Y., Tarel, J. P., Charbonnier, P., Nicolle, P.: Detection and Recognition of Urban Road Markings Using Images. In: 2011 14th International IEEE Conference on Intelligent Transportation Systems, pp. 1747–1752 (2011)
14. Zhai, Y., Cui, G., Gu, Q., Kong, L.: Crosswalk Detection Based on MSER and ERANSAC. In: 2015 18th International IEEE Conference on Intelligent Transportation Systems, pp. 2770–2775 (2015)
15. Redmon, J., Farhadi, A.: YOLO9000: Better, Faster, Stronger. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition, pp. 6517–6525 (2017)
Metadata
Title: Detection of Damaged Stop Lines on Public Roads by Focusing on Piece Distribution of Paired Edges
Authors: Takuma Ito, Kyoichi Tohriyama, Minoru Kamata
Publication date: 21-01-2020
Publisher: Springer US
Published in: International Journal of Intelligent Transportation Systems Research, Issue 1/2021
Print ISSN: 1348-8503
Electronic ISSN: 1868-8659
DOI: https://doi.org/10.1007/s13177-020-00220-7
