Vision-based Assembly and Inspection System for Golf Club Heads

https://doi.org/10.1016/j.rcim.2014.10.004

Highlights

  • This study presents a Vision-based Assembly and Inspection System for Golf Club Heads (VAIS-GCH).

  • This system adopts automated mechanical arms matched with cameras for 3D space alignment.

  • Through the coordination of two cameras, the accuracy of the golf club head assembly is increased significantly.

Abstract

Golf club head production starts with creating metallic films and forming blank casts, which then undergo complex and sophisticated assembly procedures, morphological inspection, and strength testing. Not only do the chemicals used in these processes often threaten the health of operators, but most of this back-end processing also involves manual assembly and welding, demanding substantial manpower. In light of these pitfalls, this paper proposes a Vision-based Assembly and Inspection System for Golf Club Heads (VAIS-GCH) to mitigate the time-consuming and labor-intensive golf club head production process. Two cameras are coordinated to capture a top-view image of the striking plate, and top- and side-view images of the casting body, for visual processing. After the barycenter shift and shifting angle of the striking plate are determined, a robot arm is directed to suction up the striking plate from the conveyor belt, adjust it to the canonical orientation, and move the yet-to-be-assembled striking plate to the top of the casting body. The 3D spatial position of the casting body, namely the XYZ shifting angles of the coupling opening, is detected by comparing the captured top- and side-view images with a pre-stored 3D golf club head template to facilitate the coupling of the striking plate and casting body. The side-view camera monitors the insertion depth of the striking plate during the coupling process to ensure that the surface of the striking plate is level with the casting body. The loft angle, formed by the line perpendicular to the tangent of the bottom of the casting body and the tangent of the striking surface, is then measured to confirm whether specifications are satisfied. Through the coordination of the two cameras, the accuracy and efficiency of golf club head assembly are increased significantly: the alignment error is smaller than 0.8 mm and the assembly and inspection process takes less than 2 s.

Introduction

Products in traditional 3D (Dangerous, Dirty and Difficult) industries typically consist of multiple metal parts assembled through screws, welding, and adhesives. A substantial amount of manual work and human-operated machinery is often indispensable in most production processes. Demands on the technical level of manpower are high, yet yields are only 60–80%, leading to high costs and unstable quality. This is especially true for the manufacturing of golf clubs. Targeting various player age groups, golf club models are diverse in terms of flying direction and distance after ball striking, yet the quantity manufactured for each specification tends to be small, with relatively high unit profit and a short product life cycle. Traditionally, the manufacture of golf club heads is completed manually through fixture production, welding, and testing steps to ensure production quality. Following preliminary assembly, the loft angles of golf club heads are examined with rulers, and human visual inspection is then conducted to verify whether products conform to customer specifications. This is a laborious process with low efficiency, and human errors are common. Golf club manufacturers routinely adjust assembly procedures and inspection criteria in response to the different specifications demanded by various models within a rather short product cycle, a practice that inevitably leads to increasing human operating errors. Take the fabrication of a high-end titanium golf club head for example: the head is composed of a casting body, manufactured by die-casting, and a striking plate, finished by forging and EDM wire cutting. The error in the outline of the fabricated striking plate is negligibly small due to the high precision of the wire cut machine employed. However, the casting body may be warped by anisotropic shrinkage during cooling. This distortion makes the pre-taught welding path no longer fit the real shape of the weld seam, resulting in an 8–10% drop in yield rate. In light of the high cost of titanium, integrating a robotic arm with machine vision becomes imperative to circumvent this problem.

Robotic arms are often employed for automated processes in industrial applications. For example, spot welding was first performed by robots in the automobile industry because of the simple joint-space interpolation involved. For seam welding, a smooth curve in Cartesian space must be generated from prescribed positions, which requires laborious teaching and complex coordinate transformations. An adequate robotic moving path should include the velocity and acceleration information important to tasks such as seam welding, spray painting, and assembly; this requirement cannot be fulfilled by manual teaching alone. Offline programming through a CAD system is often used to alleviate these path planning problems [1], [2], [3], [4]. When the shape of a coupling workpiece is distorted, the robot should be able to adapt dynamically to the new outline of the weld seam. Recent machine vision technology has been employed to inspect the welding surface and to adjust the robot path during welding [5], [6], [7]. Features are identified after image capture and used to calculate the corresponding 3D space coordinates and rotation angles for fast, dynamic planning of smooth paths, allowing mechanical arms to reach the correct position accurately.
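
Velocity-aware path generation of this kind can be illustrated with a short sketch. The Python snippet below samples a straight Cartesian segment under a trapezoidal velocity profile; it is a generic illustration under assumed units and limits, not a reconstruction of the planners cited above.

```python
import numpy as np

def linear_move(p0, p1, v_max, a_max, dt=0.01):
    """Sample a straight Cartesian segment p0 -> p1 under a trapezoidal
    velocity profile (ramp up at a_max, cruise at v_max, ramp down).
    A generic sketch of velocity-aware path generation; units are
    assumed consistent (e.g. mm, mm/s, mm/s^2)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    L = np.linalg.norm(p1 - p0)
    u = (p1 - p0) / L                        # unit travel direction
    t_acc = v_max / a_max                    # time to reach cruise speed
    d_acc = 0.5 * a_max * t_acc ** 2         # distance covered while ramping
    if 2.0 * d_acc > L:                      # segment too short to cruise:
        t_acc = np.sqrt(L / a_max)           # profile degenerates to a triangle
        v_max = a_max * t_acc
        d_acc = 0.5 * L
    t_cruise = (L - 2.0 * d_acc) / v_max
    T = 2.0 * t_acc + t_cruise               # total travel time
    ts = np.arange(0.0, T + dt, dt)
    s = np.where(
        ts < t_acc, 0.5 * a_max * ts ** 2,                  # accelerate
        np.where(ts < t_acc + t_cruise,
                 d_acc + v_max * (ts - t_acc),              # cruise
                 L - 0.5 * a_max * (T - ts) ** 2))          # decelerate
    return p0 + np.outer(np.clip(s, 0.0, L), u)             # waypoints
```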

Robotic arms combined with machine vision have been employed to construct an automatic golf club head welding system [8]. The image of a golf club head that had undergone preliminary welding was processed by Sobel and Laplacian filters for boundary separation; mobile paths and shifting angles were then computed to guide the robotic arms through the automatic welding operation. In this setup, the critical stages before and after preliminary welding, e.g., coupling of the striking plate and casting body, and loft angle inspection, still demand human operation, precluding a fully automatic golf club head manufacturing process. Although robotic arms have been employed to expedite the grasping and movement of 3D objects, those applications did not demand high-precision spatial coupling of components. The technological transition is further hindered by the curved 3D surfaces of most metal parts, which are far more complex than simple planar objects and make high-precision automated coupling, welding, and assembly difficult. Vision-based 3D correspondence and coupling for spatial objects has therefore become a key issue in developing intelligent automatic equipment [9], [10], [11], [12], [13]. In addition, a nondestructive inspection approach is required for the mass production of components in industrial applications such as the automobile, shipbuilding, and aerospace industries, where high inspection rates are required [14], [15], [16], [17], [18], [19].
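
As a rough illustration of the Sobel/Laplacian boundary-separation step used in [8], the following OpenCV sketch combines first- and second-derivative responses into a binary boundary map. The file name and the equal weighting of the two responses are illustrative assumptions, not the cited system's implementation.

```python
import cv2
import numpy as np

# "head.png" is a hypothetical top-view image of a pre-welded club head.
img = cv2.imread("head.png", cv2.IMREAD_GRAYSCALE)
img = cv2.GaussianBlur(img, (5, 5), 0)          # suppress casting-surface noise

# First-derivative (Sobel) gradient magnitude.
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
sobel_mag = cv2.magnitude(gx, gy)

# Second-derivative (Laplacian) response sharpens the thick Sobel edges.
laplace = cv2.Laplacian(img, cv2.CV_64F, ksize=3)

# Combine and threshold to a binary boundary map for path computation.
edges = cv2.convertScaleAbs(0.5 * sobel_mag + 0.5 * np.abs(laplace))
_, boundary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```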

This paper presents a Vision-based Assembly and Inspection System for Golf Club Heads (VAIS-GCH) to resolve these deficiencies. The system is equipped with two cameras: one mounted on a robot arm to capture top-view images of the striking plate and casting body, and a stationary camera positioned laterally to the casting body to provide the corresponding side view. Images captured at different stages of assembly are processed for 3D alignment between the striking plate and casting body. The camera on the moving robotic arm captures a top-view image of the striking plate conveyed along the assembly line to derive the plate's barycenter position and shifting angle. The robotic arm then suctions up the striking plate at the barycenter and compensates for the shifting angle to reach a preset canonical orientation. After the robotic arm moves the striking plate to the top of the casting body, the arm-mounted camera and the stationary camera take top-view and side-view images of the casting body, respectively. The top-view image is used to calculate the barycenter of the casting body, and comparison with the pre-stored 3D golf club head template yields the XYZ-plane shifting angles of the casting body within the fixture. The robotic arm then inserts the striking plate into the opening of the casting body based on the derived 3D spatial position. During the coupling process, the stationary camera monitors the insertion depth to ensure that the top surface of the striking plate is level with that of the casting body. The side view after coupling is then analyzed to determine whether the loft angle, formed by the line perpendicular to the tangent of the bottom of the casting body and the tangent of the striking surface, meets the specification. This greatly improves the coupling precision between the striking plate and casting body, increasing production and inspection efficiency and expediting fast transitions between different models and assembly lines.
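
For the barycenter and shifting-angle step, a minimal sketch using image moments is shown below. It assumes a binary silhouette of the striking plate has already been segmented from the top-view image; it is an illustration of the moments-based approach, not the authors' exact code.

```python
import cv2
import numpy as np

def plate_pose(binary):
    """Barycenter (pixels) and in-plane shifting angle (degrees) of the
    striking plate from a binary top-view silhouette."""
    m = cv2.moments(binary, binaryImage=True)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]        # barycenter
    # Principal-axis orientation from the second central moments.
    theta = 0.5 * np.arctan2(2.0 * m["mu11"], m["mu20"] - m["mu02"])
    return (cx, cy), np.degrees(theta)
```

The arm would then suction the plate at the returned barycenter and rotate it by the negative of the returned angle to reach the canonical orientation.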

Section snippets

Environmental settings

The VAIS-GCH system is composed of five main parts: a circular LED light source, two CCD cameras, a robotic arm, a casting body, and a fixture, as shown in Fig. 1. The striking plate is placed at an arbitrary location and orientation on the assembly line, while the casting body is positioned within a fixture with possible 3D spatial variations. The circular LED and a CCD camera, with a resolution of 2592×1944, are mounted on the robotic arm. This translates to an image resolution of approximately
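
As a back-of-the-envelope illustration of how such an object-plane resolution follows from the sensor resolution and field of view, consider the sketch below. Only the 2592×1944 resolution comes from the text; the field-of-view values are hypothetical.

```python
# Hypothetical scale calculation: mm-per-pixel on the conveyor plane.
SENSOR_PX = (2592, 1944)          # camera resolution stated in the text
FOV_MM = (200.0, 150.0)           # assumed field of view (illustrative only)

mm_per_px_x = FOV_MM[0] / SENSOR_PX[0]   # ~0.077 mm/pixel horizontally
mm_per_px_y = FOV_MM[1] / SENSOR_PX[1]   # ~0.077 mm/pixel vertically
```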

Experimental study

In order to test the reliability of the developed VAIS-GCH system, the calibration data, insertion depth, and loft angle are tested for the striking plate suction, striking plate and casting body coupling, and loft angle detection stages, respectively.
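
The loft angle test can be illustrated with a short sketch following the definition given above: fit tangent lines to the sole (bottom) and the striking face in the side view, then measure the angle between the sole normal and the face tangent. The edge-point inputs are assumed to be already extracted from the side-view image; this is an illustrative computation, not the authors' implementation.

```python
import numpy as np

def loft_angle(sole_pts, face_pts):
    """Loft angle (degrees): the angle between the line perpendicular to
    the sole tangent and the tangent of the striking surface."""
    def direction(pts):
        # Principal direction of a least-squares line fit, via SVD.
        pts = np.asarray(pts, float)
        _, _, vt = np.linalg.svd(pts - pts.mean(axis=0))
        return vt[0]                              # unit tangent of fitted line
    sole_t = direction(sole_pts)
    face_t = direction(face_pts)
    sole_n = np.array([-sole_t[1], sole_t[0]])    # normal to the sole tangent
    cosang = abs(np.dot(sole_n, face_t))
    return np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))
```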

Conclusions and future work

VAIS-GCH provides spatial coupling and detection for 3D objects. The barycenter position and shifting angle are used to suction up the striking plate and correct it to the canonical orientation. Rotation in 3D space is performed according to the 3D shifting angles of the casting body, and the barycenter of the striking plate is moved to align with that of the casting body. After identifying the top surfaces of both the striking plate and casting body, the appropriate coupling depth can be derived. The

Acknowledgments

The authors are grateful to the anonymous reviewers for their valuable suggestions.

References (27)

  • A.M. Andrew, Another efficient algorithm for convex hulls in two dimensions, Inf Proc Lett (1979)
  • Ouyang F, Zhang T, Ying C. Offline kinematics analysis and path planning of two-robot coordination in exhaust manifold...
  • Wang Z, Wang Y, Jia T. Applied research of laser robot welding system about Hastelloy pipe. In: Proceedings of the 9th...
  • Liu S, Wang G. Fast calibration for robot welding system with laser vision. In: Proceedings of the IEEE international...
  • J.N. Pires et al., CAD interface for automatic robot welding programming, Ind Robot (2004)
  • Matsui S, Goktug G. Slit laser sensor guided real-time seam tracking arc welding robot system for non-uniform joint...
  • Z. Guo et al., A method of initial welding position guiding for arc welding robot based on visual servo control, China Weld (2003)
  • Chen CH, Huang HP, Lo SY. Stereo-based 3D localization for grasping known objects with a robotic arm system. In:...
  • X. Maldague et al., Dual-imager and its applications for active vision robot welding surface inspection, and two-color pyrometry, Opt Test Metrol II SPIE Proc (1988)
  • Eitner C, Mori Y, Okada K, Inaba M. Task and vision based online manipulator trajectory generation for a humanoid...
  • Nakhaei A, Lamiraux F. Motion planning for humanoid robots in environments modeled by vision. In: Proceedings of the...
  • Gienger M, Toussaint M, Goerick C. Task maps in humanoid robot manipulation. In: Proceedings of the IEEE international...
  • Paulin M. Feature planning for robust execution of general robot tasks using visual servoing. In: Proceedings of the...