Introduction
Problem Definition
Related Works
Contributions
Type | Category | Sub-category I | Sub-category II | Sub-category III | Description/analysis | Comments/drawbacks
---|---|---|---|---|---|---
Active (on-board devices) | With landing platform | Single color/black-and-white, based on a visible-light camera | Multiple markers | | •Specific figures for recognition •Geometric figure relations •Single figure split into sub-figures | •Complex figures (circles/ellipses) are very sensitive to image perspective distortion •When principal figures are missing, no starting point can be detected for identification
 | | | | | •Squares or circles with different sizes or radii •Main figure built with sub-figures (squares/circles) •Recognition based on the different sizes of each sub-figure | •Simple squares may be confused with natural items in outdoor environments (e.g., straw cubes cut by a mower) •Eccentricity depends on the perspective; circles are affected by distortions, being perceived as ellipses
 | | | | | •Synthetic square markers (AprilTags, ArUco; see the detection sketch after this table) •Computer vision-based techniques identify the black/white parts inside a wide black border and an inner binary matrix | •Sensitivity of squares to distortions •Although some methods replicate the codes at different sizes, distance remains a problem
 | | | Main figure/single marker | | •Recognition based on specific characteristics of regions in the marker (T-shaped, H-shaped, or X-shaped, or a single geometric figure such as a circle or square) •Binarization, thresholding, segmentation, SURF, etc. are used for feature extraction with descriptors | •A unique figure requires a starting point or special spot as a reference for correlations •Certain regions of interest cannot remain occluded or missing (usual in outdoor environments) •Distances between the machine-vision system and the figure must be considered for correct framing
 | | Multiple colors, based on a visible-light camera | Multiple figures | No figure restrictions (proposed cognitive method) | •Recognition with only half of the figures •Relationships between sub-figures allow for missing parts •Robust method that works under adverse conditions | •Sub-figures with light colors (Fig. 2) could disappear because of reflections •Training is required to adjust the colors (Table 2)
 | | Thermal camera | Main figure/single marker | | •Recognition based on infrared radiation (temperature differences) between the target (an alphabetic character) and the background | •Differences in temperature are required (perhaps with active heating) •A thermal camera is required
 | Without landing platform (no specific platform is used) | Visible-light camera | Runway | Camera calibration is not required [8] | •The runway must be imaged and binarized •The Sobel edge detector and the Hough transform are applied to identify the border line | •The width of the runway stripe and the lateral offsets of the UAV with respect to the runway center line must be known in advance •At night or under poor lighting conditions, a lamp is required for guidance
 | | | Anywhere/no specific target [18] | | •Landing areas are defined as smooth textures •Texture descriptors allow the identification (by learning) of textured areas by similarity with reference patterns | •Appropriately textured areas of sufficient size are required for landing •A texture database is required
 | | | | | •3D terrain reconstruction is applied, based on a gray-scale matrix-vision camera (mvBlueFOX) | •Flat terrain is required •High computational cost •Tolerable 3D reconstruction errors are required
Passive (ground-fixed devices) | Fixed-wing/quadrotor UAVs | Based on a pan-tilt unit (PTU) | Stereo vision | | •The relative position between the landing target and the UAV is obtained from the information provided by a PTU vision system at different positions | •The stereo-based system, with infrared cameras, is limited in distance determination, with low accuracy •Fixed-wing UAVs require sufficient accuracy at the touchdown point
 | | Visible-light cameras | Trinocular system | FireWire connection [15] | •Colored (red, yellow, green, and blue) landmarks are used for orientation and position determination •The CamShift algorithm is applied for detecting and tracking the landmarks in the three images | •High-precision calibration is required •Blue can be confused with the blue of the sky
 | Fixed-wing UAV | NIR cameras | Based on an array system | Laser lamp [11] | •A fixed-wing UAV (high flight speed) is equipped with a near-infrared laser lamp •A camera array detects the near-infrared laser | •Two NIR cameras (with a complex and precise setup process for accuracy) arranged on the ground are required •The cameras are placed on each side of the runway, requiring a wide baseline
 | | Based on a pan-tilt unit (PTU) | Stereo vision | | •Two separate PTUs with two cameras (to avoid limitations on the camera baseline) are placed on both sides of the runway •AdaBoost is used in [16] for tracking and 3D determination by triangulation •Three-dimensional localization is applied in [14] for target detection | •Designed for fixed-wing UAVs with a specific setup and hardware calibration •High computational complexity for 3D localization •Exhaustive AdaBoost training is required
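For the synthetic square markers mentioned in the table (AprilTags, ArUco), detection is available off the shelf. The following is a minimal sketch using OpenCV's `cv2.aruco` module (OpenCV ≥ 4.7 API); the dictionary choice `DICT_4X4_50` and the image file name are illustrative assumptions, not taken from the works surveyed above:

```python
import cv2

# Minimal ArUco detection sketch (OpenCV >= 4.7 API).
# Dictionary and image path are illustrative assumptions.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
parameters = cv2.aruco.DetectorParameters()
detector = cv2.aruco.ArucoDetector(dictionary, parameters)

image = cv2.imread("landing_pad.png")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Each detected marker returns its four corners and an id decoded
# from the inner binary matrix inside the wide black border.
corners, ids, rejected = detector.detectMarkers(gray)
if ids is not None:
    print("Detected marker ids:", ids.ravel())
```

As the table notes, such square codes remain sensitive to perspective distortion and to the camera-to-marker distance, which motivates the multi-figure cognitive approach proposed here.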
Marker (r) | a* (TIa) | b* (TIb) | Threshold (Tl)
---|---|---|---
LEB | 45.0 | −35.0 | 23.0
LE | −16.0 | 33.5 | 23.6
REB | −3.0 | 30.68 | 23.6
RE | 48.0 | 6.0 | 28.0
N | 15.0 | −42.0 | 26.0
M | 22.6 | −6.0 | 19.8
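One plausible reading of this table (an assumption, since the exact decision rule is defined elsewhere in the paper) is that a pixel is assigned to marker r when its chromaticity (a*, b*) lies within Euclidean distance Tl of the trained center (TIa, TIb). A minimal sketch under that assumption, using OpenCV's CIELAB conversion:

```python
import cv2
import numpy as np

# Trained chromaticity centers (TIa, TIb) and thresholds Tl from Table 2.
MARKERS = {
    "LEB": (45.0, -35.0, 23.0),
    "LE":  (-16.0, 33.5, 23.6),
    "REB": (-3.0, 30.68, 23.6),
    "RE":  (48.0, 6.0, 28.0),
    "N":   (15.0, -42.0, 26.0),
    "M":   (22.6, -6.0, 19.8),
}

def marker_masks(bgr):
    """One binary mask per marker: pixels whose (a*, b*) lie within
    Euclidean distance Tl of the marker's trained center.
    NOTE: this distance-in-the-a*b*-plane rule is an assumption."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    # OpenCV stores 8-bit a and b channels shifted by 128; undo the offset.
    a = lab[:, :, 1] - 128.0
    b = lab[:, :, 2] - 128.0
    masks = {}
    for name, (tia, tib, tl) in MARKERS.items():
        dist = np.sqrt((a - tia) ** 2 + (b - tib) ** 2)
        masks[name] = (dist <= tl).astype(np.uint8) * 255
    return masks
```

Working in the a*b* plane makes the rule largely independent of lightness L*, which is consistent with the stated need to retrain the color centers rather than the illumination model.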
Proposed Cognitive Method
General
Landing Platform Design and Characteristics
Proposed Solution
Preprocessing
Feature Extraction
Marker (r) | Φ1 | Φ2 | Φ3 | Φ4 | Φ5 | Φ6 | Φ7
---|---|---|---|---|---|---|---
LEB | −1.2837 | −3.7150 | 4.3979 | −6.8821 | −12.8082 | −8.9511 | −13.4887
LE | −1.8081 | −7.1000 | −14.7351 | −18.2484 | −35.1690 | −22.4804 | −35.5544
REB | −1.2838 | −3.0417 | −12.3622 | −14.6235 | −28.6950 | −16.8482 | −28.3050
RE | −1.5137 | −4.0205 | −7.0933 | −10.0107 | −18.5817 | −12.0440 | −20.6703
N | −1.4001 | −3.3519 | −15.0324 | −16.3327 | −32.0414 | −18.1191 | −33.5053
M | −0.8704 | −2.1921 | −5.3414 | −7.5967 | −14.8185 | −8.7103 | −15.2624
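The Φ1–Φ7 values above are consistent with log-scaled Hu invariant moments of the segmented marker regions. The exact scaling is not reproduced in this excerpt, so the log10-of-magnitude transform below is an assumption (chosen because it yields negative values of the observed order); the nearest-reference matching rule is likewise a hypothetical illustration:

```python
import cv2
import numpy as np

def log_hu_moments(mask):
    """Seven Hu invariant moments of a binary marker region, on a
    log10 magnitude scale (Phi_i = log10|h_i|).
    NOTE: this scaling is an assumption made to match the negative
    values reported in the table above."""
    m = cv2.moments(mask, binaryImage=True)
    hu = cv2.HuMoments(m).ravel()
    # Small epsilon avoids log(0) for degenerate regions.
    return np.log10(np.abs(hu) + 1e-300)

def nearest_marker(phi, references):
    """Label a region with the reference marker whose Phi vector is
    closest in Euclidean distance (a simple, assumed matching rule);
    `references` maps marker names to their 7-element Phi vectors."""
    return min(references, key=lambda r: np.linalg.norm(phi - references[r]))
```

Because Hu moments are invariant to translation, scale, and rotation, each marker can be identified individually before any geometric reasoning over the whole platform is applied.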
Recognition
Single Marker Identification
Euclidean Distance Smart Geometric Analysis (EDSGA)
G1 = {LEB: 1, LE: 10, REB: 22, RE: 38, M: 45}
G2 = {LEB: 1, LE: 10, REB: 22, RE: 39, M: 45}
G3 = {LEB: 2, LE: 11, REB: 22, RE: 37, M: 45}
G4 = {LEB: 2, LE: 10, REB: 23, RE: 37, M: 45}

Normal-font characters indicate candidate regions matching GB, while bold characters refer to candidate regions that are permuted as not belonging to GB but are labeled as specific markers.
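Each set G1–G4 pairs a marker label with a candidate region index, to be checked against the base group GB. Since the full EDSGA procedure is not reproduced in this excerpt, the sketch below only illustrates the underlying idea it names: scoring a candidate assignment by how well the Euclidean distances between its region centroids match the reference geometry of the platform, with the pairwise distances scale-normalized so that the comparison tolerates missing markers and unknown altitude. The function names and the minimum-of-three-points requirement are illustrative assumptions:

```python
from itertools import combinations
import numpy as np

def pairwise_ratios(points):
    """Pairwise Euclidean distances normalized by their maximum,
    giving a scale-invariant geometric signature of the point set."""
    d = np.array([np.linalg.norm(points[i] - points[j])
                  for i, j in combinations(range(len(points)), 2)])
    return d / d.max()

def geometric_score(candidate, reference):
    """Mean absolute deviation between the distance-ratio signatures
    of a candidate assignment (marker name -> centroid) and the
    reference platform layout, compared over the markers both share.
    Lower is better. A hypothetical stand-in for the EDSGA scoring
    step, not the paper's exact formulation."""
    common = sorted(set(candidate) & set(reference))
    if len(common) < 3:  # at least 3 points are needed for geometry
        return float("inf")
    c = pairwise_ratios(np.array([candidate[m] for m in common]))
    r = pairwise_ratios(np.array([reference[m] for m in common]))
    return float(np.mean(np.abs(c - r)))
```

Under this reading, a candidate set such as G1 would be accepted as the platform when its score falls below a trained tolerance, which is consistent with the method's stated ability to recognize the platform from only half of the figures.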