Published in: Chinese Journal of Mechanical Engineering 1/2020

Open Access 01-12-2020 | Original Article

Discerning Weld Seam Profiles from Strong Arc Background for the Robotic Automated Welding Process via Visual Attention Features

Authors: Yinshui He, Zhuohua Yu, Jian Li, Lesheng Yu, Guohong Ma



Abstract

In the robotic welding process with thick steel plates, laser vision sensors are widely used to profile the weld seam for automatic seam tracking. The weld seam profile extraction (WSPE) result is a crucial input for identifying the feature points of the extracted profile that guide the welding torch in real time. The visual information processing system may collapse when interference data points survive in the image during the phase of feature point identification, resulting in low tracking accuracy and poor welding quality. This paper presents a visual attention feature-based method to extract the weld seam profile (WSP) from the strong arc background using clustering results. First, a binary image is obtained through a preprocessing stage. Second, all data points with a gray value of 255 are clustered with the nearest neighborhood clustering algorithm. Third, a strategy is developed to discern the one cluster belonging to the WSP from the appointed candidate clusters in each loop, and a scheme is proposed to extract the entire WSP using visual continuity. Compared with previous methods, the proposed method extracts more useful details of the WSP and is more stable in removing interference data. Extensive WSPE tests with butt joints and T-joints demonstrate the anti-interference ability of the proposed method, which contributes to a smooth welding process and shows practical value in robotic automated welding with thick steel plates.

1 Introduction

Robotic automated arc welding processes need different types of sensors to acquire useful information for welding state monitoring, control of the welding torch, etc. [1, 2]. Of these sensors, vision sensors are the most widely used [3], and laser vision sensors are commonly employed to detect the weld seam profile (WSP) in robotic thick-steel-plate welding. To implement multipass welding, real-time weld seam profile extraction (WSPE) is an indispensable step [4], which makes it possible to guide the welding torch using the identified feature points of the extracted WSP. There are considerable adverse factors in WSPE, such as arc flash, fume, and spatter, which contaminate the captured images with different kinds of interference. It is thus critical that effective methods be developed to extract the WSP from the interference background and overcome these adverse factors. Otherwise, the visual information processing system may provide a false tracking position for the welding torch during the information extraction process.
Different joints result in different appearances of the WSP in the captured image. A review of how to eliminate the interference data for WSPE with typical butt, fillet, and lap joints is given here. To recognize the image coordinates of the seam center, fast template matching and a fast Hough transform were presented in Ref. [5]. To extract feature points of V-shaped welding seams, an improved Otsu algorithm and a line detection algorithm were employed by Jawad et al. [6]. Fan et al. [7] extracted the butt welding center and the laser stripe by row scanning and column scanning, respectively. To stay robust against heavy noise, a multilayer hierarchical vision processing architecture integrated with an effective bottom-up and top-down combined inference algorithm was developed in Ref. [8]. A Faster R-CNN algorithm has also been proposed to separate the WSP from the background and eliminate noise interference [9]. Liu et al. [10] presented a series of preprocessing operations for WSPE in robotic underwater welding, such as power transformation, limited contrast histogram equalization, the top-hat operation, and unidirectional structuring element cascade filtering, and the preprocessed image was further segmented by the mean shift algorithm. Du et al. [11] proposed a three-stage algorithm to extract the WSP, in which the potential WSP regions are first selected, the regions are then ranked by their corrected scene irradiance densities, and column-wise peak detection is finally performed using the ranks.
It is noted that a common scheme is used to extract the WSP: the region of interest (ROI) is first determined to reduce the computational load; various filters are then used to clear up noise, such as median filters [12, 13], Gaussian filters [14, 15], and Wiener filters [16]; and thresholding and denoising are finally applied to remove the interference data points. However, no literature introduces how to further eliminate the interference data points for WSPE with V-grooves when they survive denoising.
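As a rough illustration of the common scheme above (ROI selection, filtering, thresholding), the following NumPy-only sketch crops a region of interest, applies a 3×3 median filter, and binarizes the result. The window size and the threshold of 200 are illustrative assumptions, not values taken from the cited works:

```python
import numpy as np

def preprocess(image, roi, threshold=200):
    """Crop to the ROI, apply a 3x3 median filter, then binarize.
    roi is (row0, row1, col0, col1); the threshold value is assumed."""
    r0, r1, c0, c1 = roi
    patch = image[r0:r1, c0:c1].astype(float)
    # 3x3 median filter written out directly to keep the sketch dependency-free
    padded = np.pad(patch, 1, mode="edge")
    windows = np.stack([padded[i:i + patch.shape[0], j:j + patch.shape[1]]
                        for i in range(3) for j in range(3)])
    filtered = np.median(windows, axis=0)
    # surviving pixels get gray value 255, the rest 0
    return np.where(filtered >= threshold, 255, 0).astype(np.uint8)

# toy image: a 3-pixel-wide horizontal stripe plus one isolated bright pixel
img = np.zeros((20, 20), dtype=np.uint8)
img[9:12, :] = 255      # laser stripe (wide enough to survive the median)
img[3, 5] = 255         # isolated spatter-like pixel (suppressed by the median)
binary = preprocess(img, (0, 20, 0, 20))
```

An isolated bright pixel cannot dominate its 3×3 neighborhood and is removed, while the wider stripe survives; interference regions larger than the filter window still pass through, which is exactly the residual problem this paper addresses.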
The above research on WSPE is confined to V-grooves of butt joints; the following review concentrates on WSPE of lap joints. To determine the region of interest, the Radon transform is applied to the captured image, and median filtering and thresholding are then used for denoising [17, 18]. Zhang et al. [19] used a similar method to extract the WSP in laser beam welding. Gu et al. [20] proposed an image preprocessing method, including adaptive threshold segmentation and smoothing, to eliminate the impact of arc light, light reflection, and splashes on the captured image.
In our previous research, we presented a number of methods based on the visual attention mechanism to extract WSPs from the strong arc background for butt and fillet joints, such as visual saliency [4, 21] and visual attention models [22]. With these methods, WSPs can be highlighted from the uneven background with strong arc regions. However, these methods can only strengthen the WSP with regard to intensity. During the subsequent feature point identification of the extracted WSP to guide the welding torch for multipass welding, thresholding is commonly used to further remove interference data points as a crucial step to simplify data processing. An issue arises in that interference data points more or less survive, typically when the entire arc region is also in the image, which often leads to wrong feature points and thus an inappropriate welding position. It is a real challenge to effectively separate the data points of the WSP from random interference data points [20]. Clustering algorithms using a designated Euclidean distance threshold are employed in Refs. [4, 21–23] to discern the data points belonging to the WSP in the binary image; the clustering produces many clusters, and the length of each cluster is used to differentiate the segments belonging to the WSP from interference clusters.
In clustering-based methods, various clustering tests show that it is very hard to discern the clusters belonging to the WSP from the others merely by their lengths in space, because spatter occurs randomly and can also be imaged with large spans in the captured images. Thresholding is used so often in the literature to lessen the data-processing difficulty, yet interference data points usually survive it to some extent. Currently, few studies intentionally discuss how to solve this issue effectively. This paper aims at discerning the clusters belonging to the WSP using the visual attention features with which our eyes easily accomplish the task, and strives to keep more useful details of the WSP for more effective feature identification of the extracted WSP. Two typical kinds of WSPs, with butt joints and T-joints, are used to show the anti-interference ability of the proposed method.

2 Issue Derivation

There are cases in which interference data points survive after the preprocessing algorithms have been applied to images at some sampling times, particularly when thresholding is carried out as the final step (the welding system is described in Refs. [21, 22]). Figure 1 illustrates the acquisition process of the image with laser stripes, namely, WSPs. Figure 2 gives the preprocessed results of three methods presented previously in the literature, which shows that interference data points survive in most cases. These data points lead to fake feature points during the seam tracking stage (see Figure 3) and terminate the automated welding process.
The problem is then how to automatically and effectively differentiate the data points belonging to the WSP from the surviving interference data points. As we previously presented, a clustering-based method is used to solve this problem (the clustering algorithm is described in Ref. [4], and the Euclidean distance threshold used in this algorithm is set to 2 pixels). In this paper, this method is improved for better anti-interference ability, and visual attention features derived from the visual attention process of our eyes are used together with the corresponding empirical knowledge. Note that the search sequence of data points used in the clustering process is from left to right and from top to bottom, as shown in Figure 4.
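The clustering step can be sketched as follows. The 2-pixel Euclidean distance threshold and the left-to-right, top-to-bottom scan follow the text, but the greedy single-pass grouping is only an assumed stand-in for the algorithm of Ref. [4]:

```python
import numpy as np

def cluster_points(points, dist_threshold=2.0):
    """Greedily group (row, col) points: a point joins the first cluster
    whose minimum Euclidean distance to it is within the threshold."""
    # scan left to right, then top to bottom, as described in the text
    pts = sorted(points, key=lambda p: (p[1], p[0]))
    clusters = []
    for p in pts:
        for c in clusters:
            if min(np.hypot(p[0] - q[0], p[1] - q[1]) for q in c) <= dist_threshold:
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

# toy binary result: four adjoining stripe points and one far-away spatter point
white = [(10, 0), (10, 1), (10, 2), (11, 3), (3, 15)]
cl = cluster_points(white)   # two clusters: the stripe segment and the lone point
```

A greedy pass like this can leave two clusters unmerged when a bridging point arrives late in the scan; a production implementation would need a merge step, which is omitted here for brevity.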

3 Visual Feature Analysis and Selection for Discerning WSPs

In this section, the visual features that come from the visual attention process of our eyes and are used to build a vision-based method for extracting the WSP are analyzed. Three visual features are defined as the main factors used to discern the segments of the WSP from the interference data background. The analysis is organized as weld seam characteristic analysis, visual attention processes, and visual feature selection.
The light projected onto the workpiece by the laser commonly appears stripe-like whatever the joint is. The width of the stripe is about 5 pixels in the vertical direction of the image, and the stripe extends continuously in the horizontal direction. The intensity of the stripe may not be uniform because the projection distances differ, which produces a diverse situation: some parts of the stripe are weak with regard to intensity, and some pieces of the stripe are lost when thresholding is carried out.
Although this diverse case occurs, we can still easily recognize the regions belonging to the WSP with our eyes against the interference background. What gives us this anti-interference ability? The following aspects should be considered. The first is that the stripe extends naturally, which means that the pieces of the WSP (all pieces actually exist as clusters in the clustering results) adjoin each other in space. The minimum Euclidean distance between clusters is therefore the first factor influencing the visual decision of discerning the WSP, and it is denoted \( D_{j} \) (subscript \( j \) means the jth cluster). In addition, pieces of the WSP adjoining each other implies smaller slope differences than those between a piece of the WSP and an interference cluster. Figure 5 illustrates the process of calculating the slope difference \( \overline{{k_{j} }} \).
\( \overline{{k_{j} }} \) is obtained through the following steps. First, linear forms of all clusters are produced from their average positions. Second, the reference slope \( k_{mean}^{l} \) (\( l \) is the lth identification process), illustrated as the green line in Figure 5c, is calculated using the final part (ten points are automatically selected) of the green cluster in Figure 5b, which is supposed to be the last discerned cluster belonging to the WSP. Third, \( k_{ave}^{j} \) is determined using two groups of data points: the ten points mentioned above and the initial part (from two to ten points, also automatically selected in the related algorithm) of every cluster to be discerned; \( k_{ave}^{j} \) is acquired by averaging the slopes. Finally, \( \overline{{k_{j} }} \) is formulated as Eq. (1):
$$ \overline{{k_{j} }} = \left| {k_{mean}^{l} - k_{ave}^{j} } \right|. \quad (1) $$
Figure 5c intuitively shows \( \overline{{k_{pink} }} \approx \overline{{k_{purple} }} \to 0 \), which means that the clusters marked in pink and purple should be considered segments of the WSP with regard to the slope difference.
The third factor that influences the discrimination of the WSP with our eyes is the length \( L_{j} \) of every cluster, defined as the maximum Euclidean distance within the cluster. This factor is useful as a visual feature because the pieces pertaining to the WSP usually span larger sizes in space than the clusters surrounding them, particularly when the threshold adopted in the binarization process is set small. Therefore, the cluster with the larger length should more naturally be regarded as part of the WSP when the other factors cannot decide. Figure 6 shows a case in which cluster B should be visually discerned as part of the WSP more naturally than cluster A.
This section presents three factors \( D_{j} \), \( \overline{{k_{j} }} \), and \( L_{j} \) as the visual features for imitating the observation process with our eyes to discern the clusters belonging to the WSP.
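The three features can be computed from the clustering results roughly as below. This is a NumPy sketch: \( D_{j} \) and \( L_{j} \) follow their definitions in the text, while the least-squares fitting inside the slope-difference calculation is an assumed interpretation, since the paper does not spell out how the slopes are averaged:

```python
import numpy as np

def min_distance(cluster_a, cluster_b):
    """D_j: minimum Euclidean distance between two clusters of (x, y) points."""
    a, b = np.asarray(cluster_a, float), np.asarray(cluster_b, float)
    return np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)).min()

def cluster_length(cluster):
    """L_j: maximum Euclidean distance between any two points of one cluster."""
    a = np.asarray(cluster, float)
    return np.sqrt(((a[:, None, :] - a[None, :, :]) ** 2).sum(-1)).max()

def slope_difference(last_cluster, candidate, n_ref=10, n_init=10):
    """k̄_j = |k_mean - k_ave|: k_mean is fitted on the final part of the last
    discerned cluster, k_ave on that part plus the candidate's initial part
    (fitting both groups jointly is an assumption of this sketch)."""
    tail = np.asarray(last_cluster[-n_ref:], float)
    head = np.asarray(candidate[:n_init], float)
    k_mean = np.polyfit(tail[:, 0], tail[:, 1], 1)[0]
    both = np.vstack([tail, head])
    k_ave = np.polyfit(both[:, 0], both[:, 1], 1)[0]
    return abs(k_mean - k_ave)

# two collinear segments: the candidate continues the stripe, so k̄_j ≈ 0
last = [(v, 2 * v) for v in range(12)]
cand = [(v, 2 * v) for v in range(14, 24)]
```

For the two collinear toy segments, `slope_difference(last, cand)` is close to zero, matching the intuition in Figure 5c that continuing segments have vanishing slope difference.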

4 Scheme of Discerning WSPs

The scheme of discerning the WSP from interference clusters comprises three stages using clustering results (see Figure 7). The first stage is thinning all the clusters, i.e., representing each cluster in the image by its average heights. The second stage is separately determining the first piece of the WSP based on the imaging characteristic of the joint profile; because butt and fillet joints have different profile features, this paper presents two methods to recognize the initial part of the WSP for the two kinds of joints. Figure 8 shows the requirements that must be satisfied in these methods using the visual features of different WSPs. The last stage is the cyclic identification process, in which exactly one cluster is discerned as a piece of the WSP in each loop.
In Figure 8a, \( Cor_{Initial} \) is the minimum horizontal coordinate of the cluster that satisfies two requirements, and the definitions of \( H_{\text{max} } \) and \( H_{\text{min} } \) are illustrated in Figure 8b. In Figure 8c, \( H_{t}^{j} \le H_{t + 1}^{j} \) represents that the heights of the data points gradually increase in the jth cluster (subscript \( t \) indexes the data points of every cluster). \( length(H_{t}^{j} \le H_{t + 1}^{j} ) \) is the number of data points that satisfy \( H_{t}^{j} \le H_{t + 1}^{j} \); \( length(slope_{t}^{j} \le 0) \) is the number of data points from which lines with non-positive slopes are created; and \( length(Cluster_{j} ) \) denotes the number of all data points in the jth cluster. Figure 8d illustrates \( slope^{j} \le 0 \).
Note that the rule "obtain the clusters on the right of TempCN, from TempCN+1 to CN" in the proposed scheme (see Figure 7) requires that the percentage of data points lying on the right of the identified cluster TempCN reach 90% each time.
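Under the assumption that "on the right of TempCN" means a larger horizontal coordinate than TempCN's rightmost point, the 90% rule can be sketched as:

```python
def is_right_of(cluster, identified_cluster, ratio=0.9):
    """True if at least `ratio` of the cluster's (x, y) points lie to the
    right of the identified cluster's rightmost point (interpretation assumed)."""
    right_edge = max(x for x, _ in identified_cluster)
    n_right = sum(1 for x, _ in cluster if x > right_edge)
    return n_right / len(cluster) >= ratio

temp_cn = [(5, 0), (6, 1)]                                # rightmost x = 6
candidate = [(7, 1), (8, 1), (9, 2), (10, 2), (11, 2),
             (12, 3), (13, 3), (14, 3), (15, 4), (5, 0)]  # 9 of 10 on the right
```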

5 Visual Features-Based Strategy for Discerning WSPs from the Interference Background

The proposed strategy using visual features contains two aspects: automatically determining the number of candidate clusters to be discerned in each loop (see Figure 9), and rules for identifying the cluster that belongs to the WSP among the candidate clusters in terms of visual continuity using the visual features (see Figure 10). The strategy follows the rule that a candidate cluster farther from the last identified cluster must satisfy stricter requirements to be selected as the WSP's segment. For example, \( C_{i3} \) (subscript \( i \) in these requirements means the ith identification process, similarly hereinafter) is selected as the WSP's segment only when four requirements hold and Selectnum is 3. This rule lessens the probability that fake clusters are taken as part of the WSP. In Figure 10, \( \bar{k}_{i1} \approx \bar{k}_{i2} \) is defined as \( 0.8 \le \frac{{\text{min} (\bar{k}_{i1} ,\bar{k}_{i2} )}}{{\text{max} (\bar{k}_{i1} ,\bar{k}_{i2} )}} \le 1 \), and \( D_{i1} \approx D_{i2} \) means \( 0.8 \le \frac{{\text{min} (D_{i1} ,D_{i2} )}}{{\text{max} (D_{i1} ,D_{i2} )}} \le 1.5 \); \( \frac{{\bar{k}_{i2} }}{{\bar{k}_{i1} }} \le 0.5 \) denotes that \( \bar{k}_{i2} \) is better than \( \bar{k}_{i1} \) with regard to continuity; and \( \bar{k}_{i1} \le 0.5 \) is the criterion by which the last cluster is judged as the end part of the WSP in the final identification process. This criterion results from numerous tests on the slope fluctuations of the linear segments in the image. The slope calculation is formulated as Eq. (2):
$$ k_{j - 8} = \frac{1}{4}\sum\limits_{k = 2,4,6,8} {\frac{y(j - k) - y(j + k)}{x(j + k) - x(j - k)}} ,\quad j \ge 9, \quad (2) $$
where \( x( \cdot ) \) and \( y( \cdot ) \) are the coordinates of data points in the horizontal and vertical directions of the image coordinate system, respectively, and \( j \ge 9 \) means that the slope calculation covers 9 data points in space. To judge whether the only remaining candidate cluster in the final identification process belongs to the WSP, its slope and the slope of the end part of the last discerned piece of the WSP (for the V-groove or the T-joint, the end part of the WSP is always linear) are calculated and compared. Figure 11 gives an example of determining whether the final cluster (if it exists) is part of the WSP using the slope difference: the only candidate cluster (see the red circle in Figure 11a) should be judged as part of the WSP because the slopes of the data points in the red and pink circles are close to each other (see the green circle in Figure 11b).
Figure 11 shows that the slope fluctuation of the linear segments in the image is less than 0.5, which is why the single requirement \( \bar{k}_{i1} \le 0.5 \) is used during the final identification process.
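A direct transcription of Eq. (2) on an ordered list of stripe points may look like this (a sketch; the numerator \( y(j-k) - y(j+k) \) is kept exactly as printed, so for image coordinates with the vertical axis pointing down the slope sign is flipped relative to the usual convention):

```python
def local_slope(x, y, j):
    """Average of the four symmetric difference quotients of Eq. (2);
    valid when j - 8 >= 0 and j + 8 < len(x)."""
    return sum((y[j - k] - y[j + k]) / (x[j + k] - x[j - k])
               for k in (2, 4, 6, 8)) / 4

# for points on the line y = 3x the formula yields -3.0 (reversed numerator)
x = list(range(20))
y = [3 * v for v in x]
```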

6 Experimental Results

Different images with considerable interference data were tested to show the effectiveness of the proposed method. The extraction results (Figure 12) show the anti-interference ability of the proposed method, and images captured in different welding processes (see Figure 13) further exhibit this performance.
Two simplified methods that are derived respectively from Refs. [21, 22] are presented to further verify the robust anti-interference ability of the proposed method. Figure 14 shows the first simplified WSPE procedure, and Figure 15 gives the extraction results, in which the raw images with entire arc regions in Figure 2 are selected as the experimental images. Figure 16 shows the second procedure, and Figure 17 gives the corresponding WSPE results.
Figures 15 and 17 show that the proposed visual feature-based method can effectively discern the data points belonging to the WSP from the complicated interference background and that it has robust anti-interference ability. Meanwhile, the running time of the proposed method on an ordinary PC is only 40 ms, which meets the real-time requirement.

7 Discussion

This paper investigated a method to gradually extract WSPs in binary images from a strong interference background. The method uses a strategy refined from the visual attention process of our eyes to discern, based on clustering results, the pieces belonging to two types of WSPs with typical joints from interference data points. The strategy uses three visual attention features, i.e., Euclidean distance, slope difference, and the span of clusters in the image, to implement the automatic identification process. The method can be used in automated thick-plate welding with butt joints and T-joints for high welding efficiency.
To date, various methods have been proposed to extract WSPs for the traditional automated arc welding process, such as Refs. [7, 12]. However, the workpiece thickness in these studies is generally less than 30 mm, which reduces the difficulty of effectively extracting the WSP. The method proposed in this paper expands the applicable range of thickness.
In addition, we previously set a length threshold of 100 pixels to differentiate the clusters belonging to the WSP from other clusters and remove interference data points [22]: the clusters whose lengths (the largest Euclidean distance between two points in a cluster is defined as the cluster's length) were less than 100 pixels were regarded as interference clusters and removed. In Ref. [21], a length threshold of 80 pixels determined from empirical knowledge was used to eliminate interference data points. In Ref. [4], the same operation was implemented through a two-stage strategy: in the first stage, clusters shorter than the average length of all clusters were deemed interference clusters, and in the second stage another distance threshold produced by a genetic algorithm was used to discern the clusters belonging to the WSP. These methods can directly remove interference data, but their anti-interference capacity depends heavily on the thresholds, and interference data points may survive because spatter can also produce clusters with large lengths. The method in this paper, however, eliminates interference data indirectly, and small clusters belonging to the WSP can still be kept during the identification process, which helps maintain more useful information about the WSP and leads to better accuracy for subsequent feature identification of the WSP, as in Ref. [22].
The method in this paper also helps simplify the WSP extraction process. For example, the operation of Gabor filtering alone, combined with the proposed method, can extract the WSP, unlike Refs. [21, 22], which use complicated visual attention models. The proposed method also reduces time cost and material waste by smoothing the welding process. Certainly, the proposed method works better when the preprocessing algorithms are more effective.
In terms of removing interference data, the difference between the method in this paper and the previous ones is stability: the method in this paper rarely keeps interference clusters with large sizes, whereas the previous methods often do.
There is still a defect in the proposed method, namely the empirical parameters used in the WSPE process. Future research will focus on overcoming this deficiency.
In addition, an appropriate programming implementation of the proposed method is necessary because it can save considerable running time.

8 Conclusions

In conclusion, this paper proposes a visual attention feature-based method to extract the WSP from the interference data background using clustering results. The conclusions are as follows.
(1) As the proposed method gradually discerns the segments of the WSP, it can extract more information about the WSP, which contributes to seam tracking with higher accuracy.
(2) The proposed visual attention feature-based method provides a reference for other WSPE schemes once the image has been binarized.

Acknowledgements

The authors sincerely thank Professor Zhiwei Mao of Nanchang University for his critical discussion and reading during manuscript preparation.

Competing Interests

The authors declare no competing financial interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Literature
[1]
go back to reference Y M Zhang, R Kovacevic, L Li. Characterization and real-time measurement of geometrical appearance of the weld pool. International Journal of Machine Tools & Manufacture, 1996, 36(7): 799-816.CrossRef Y M Zhang, R Kovacevic, L Li. Characterization and real-time measurement of geometrical appearance of the weld pool. International Journal of Machine Tools & Manufacture, 1996, 36(7): 799-816.CrossRef
[2]
go back to reference N Lv, Y Xu, Z Zhang, et al. Audio sensing and modeling of arc dynamic characteristic during pulsed Al alloy GTAW process. Sensor Review, 2013, 33(2): 141-156.CrossRef N Lv, Y Xu, Z Zhang, et al. Audio sensing and modeling of arc dynamic characteristic during pulsed Al alloy GTAW process. Sensor Review, 2013, 33(2): 141-156.CrossRef
[3]
go back to reference Y Xu, G Fang, S Chen, et al. Real-time image processing for vision-based weld seam tracking in robotic GMAW. International Journal of Advanced Manufacturing Technology, 2014, 73(9-12): 1413-1425.CrossRef Y Xu, G Fang, S Chen, et al. Real-time image processing for vision-based weld seam tracking in robotic GMAW. International Journal of Advanced Manufacturing Technology, 2014, 73(9-12): 1413-1425.CrossRef
[4]
go back to reference Y He, H Chen, Y Huang, et al. Parameter self-optimizing clustering for autonomous extraction of the weld seam based on orientation saliency in robotic MAG welding. Journal of Intelligent & Robotic Systems, 2016, 83(2): 219-237.CrossRef Y He, H Chen, Y Huang, et al. Parameter self-optimizing clustering for autonomous extraction of the weld seam based on orientation saliency in robotic MAG welding. Journal of Intelligent & Robotic Systems, 2016, 83(2): 219-237.CrossRef
[5]
go back to reference S Y Liu, G R Wang, Z Hua, et al. Design of robot welding seam tracking system with structured light vision. Chinese Journal of Mechanical Engineering, 2010, 23(4): 436-442.CrossRef S Y Liu, G R Wang, Z Hua, et al. Design of robot welding seam tracking system with structured light vision. Chinese Journal of Mechanical Engineering, 2010, 23(4): 436-442.CrossRef
[6]
go back to reference J Muhammad, H Altun, E. Abo-Serie. A robust butt welding seam finding technique for intelligent robotic welding system using active laser vision. The International Journal of Advanced Manufacturing Technology, 2018, 94(1): 13-29.CrossRef J Muhammad, H Altun, E. Abo-Serie. A robust butt welding seam finding technique for intelligent robotic welding system using active laser vision. The International Journal of Advanced Manufacturing Technology, 2018, 94(1): 13-29.CrossRef
[7]
go back to reference J Fan, F Jing, Y Lei, et al. A precise seam tracking method for narrow butt seams based on structured light vision sensor. Optics & Laser Technology, 2019, 109(1): 616-626.CrossRef J Fan, F Jing, Y Lei, et al. A precise seam tracking method for narrow butt seams based on structured light vision sensor. Optics & Laser Technology, 2019, 109(1): 616-626.CrossRef
[8]
go back to reference Y F Gong, X Z Dai, X D Li. Structured-light based joint recognition using bottom-up and top-down combined visual processing. 2010 International Conference on Image Analysis and Signal Processing (IASP), Zhejiang, China, April 9-11, 2010: 507-512. Y F Gong, X Z Dai, X D Li. Structured-light based joint recognition using bottom-up and top-down combined visual processing. 2010 International Conference on Image Analysis and Signal Processing (IASP), Zhejiang, China, April 9-11, 2010: 507-512.
[9]
go back to reference R Xiao, Y Xu, Z Hou, et al. An adaptive feature extraction algorithm for multiple typical seam tracking based on vision sensor in robotic arc welding. Sensors and Actuators A: Physical, 2019, 297: 111533.CrossRef R Xiao, Y Xu, Z Hou, et al. An adaptive feature extraction algorithm for multiple typical seam tracking based on vision sensor in robotic arc welding. Sensors and Actuators A: Physical, 2019, 297: 111533.CrossRef
[10]
go back to reference S Liu, H Zhang, J Jia, et al. Feature recognition for underwater weld images. 29th Chinese Control Conference (CCC), 2010, Beijing, China, July 29-31, 2010: 2729-2734. (in Chinese) S Liu, H Zhang, J Jia, et al. Feature recognition for underwater weld images. 29th Chinese Control Conference (CCC), 2010, Beijing, China, July 29-31, 2010: 2729-2734. (in Chinese)
[11]
go back to reference J Du, W Xiong, W Chen, et al. Robust laser stripe extraction using ridge segmentation and region ranking for 3D reconstruction of reflective and uneven surface. Proceedings of the IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, September 27-30, 2015: 4912-4916. J Du, W Xiong, W Chen, et al. Robust laser stripe extraction using ridge segmentation and region ranking for 3D reconstruction of reflective and uneven surface. Proceedings of the IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, September 27-30, 2015: 4912-4916.
[12]
go back to reference W P Gu, Z Y Xiong, W Wan. Autonomous seam acquisition and tracking system for multi-pass welding based on vision sensor. The International Journal of Advanced Manufacturing Technology, 2013, 69(1-4): 451-460.CrossRef W P Gu, Z Y Xiong, W Wan. Autonomous seam acquisition and tracking system for multi-pass welding based on vision sensor. The International Journal of Advanced Manufacturing Technology, 2013, 69(1-4): 451-460.CrossRef
[13] X Wang, R Bai, Z Liu. Weld seam detection and feature extraction based on laser vision. 33rd Chinese Control Conference (CCC), Nanjing, China, July 28-30, 2014: 8249-8252. (in Chinese)
[14] H-C Nguyen, B-R Lee. Laser-vision-based quality inspection system for small-bead laser welding. International Journal of Precision Engineering and Manufacturing, 2014, 15(3): 415-423.
[15] Y He, Z Yu, J Li, et al. Weld seam profile extraction using top-down visual attention and fault detection and diagnosis via EWMA for the stable robotic welding process. The International Journal of Advanced Manufacturing Technology, 2019, 104(9): 3883-3897.
[16] Q-Q Wu, J-P Lee, M-H Park, et al. A study on development of optimal noise filter algorithm for laser vision system in GMA welding. Procedia Engineering, 2014, 97: 819-827.
[17] Z Lei, Z Mingyang, Z Lihua. Vision-based profile generation method of TWB for a new automatic laser welding line. 2007 IEEE International Conference on Automation and Logistics, Jinan, China, August 18-21, 2007: 1658-1663.
[18] Y L Xie, L Zhang, C Y Wu, et al. A method of robotic visual tracking for a new automatic laser welding line. 2008 International Conference on Computer Science and Software Engineering, Hubei, China, December 12-14, 2008: 891-894.
[19] L Zhang, C Wu, Y Zou. An on-line visual seam tracking sensor system during laser beam welding. International Conference on Information Technology and Computer Science, Kiev, Ukraine, July 25-26, 2009: 361-364.
[20] C Gu, Y Li, Q Wang, D Xu, et al. Robust features extraction for lap welding seam tracking system. IEEE Youth Conference on Information, Computing and Telecommunication (YC-ICT '09), Beijing, China, September 20-21, 2009: 319-322.
[21] Y He, Y Chen, Y Xu, et al. Autonomous detection of weld seam profiles via a model of saliency-based visual attention for robotic arc welding. Journal of Intelligent & Robotic Systems, 2016, 81(3-4): 395-406.
[22] Y He, Y Xu, Y Chen, et al. Weld seam profile detection and feature point extraction for multi-pass route planning based on visual attention model. Robotics and Computer-Integrated Manufacturing, 2016, 37: 251-261.
[23] Y He, H Zhou, J Wang, et al. Weld seam profile extraction of T-joints based on orientation saliency for path planning and seam tracking. 2016 IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO), Shanghai, China, July 8-10, 2016: 110-115.
[24] Y S He, Y X Chen, D Wu, et al. A detection framework for weld seam profiles based on visual saliency. In: T J Tarn, S B Chen, X Q Chen (eds). Robotic Welding, Intelligence and Automation (RWIA 2014). Advances in Intelligent Systems and Computing, 2015, 363: 311-319.
Metadata
Title: Discerning Weld Seam Profiles from Strong Arc Background for the Robotic Automated Welding Process via Visual Attention Features
Authors: Yinshui He, Zhuohua Yu, Jian Li, Lesheng Yu, Guohong Ma
Publication date: 01-12-2020
Publisher: Springer Singapore
Published in: Chinese Journal of Mechanical Engineering, Issue 1/2020
Print ISSN: 1000-9345
Electronic ISSN: 2192-8258
DOI: https://doi.org/10.1186/s10033-020-00438-2