
About this Book

This book constitutes the refereed proceedings of the 19th Annual Conference on Towards Autonomous Robotics, TAROS 2018, held in Bristol, UK, in July 2018.
The 38 full papers presented together with 14 short papers were carefully reviewed and selected from 68 submissions. The papers focus on presentation and discussion of the latest results and methods in autonomous robotics research and applications. The conference offers a friendly environment for robotics researchers and industry to take stock and plan future progress.

Table of Contents

Frontmatter

Object Manipulation and Locomotion

Frontmatter

Trajectory Optimization for High-Power Robots with Motor Temperature Constraints

Modeling heat transfer is an important problem for high-power electrical robots, as rising motor temperature leads both to lower energy efficiency and to the risk of motor damage. Power consumption itself is a strong restriction in these robots, especially battery-powered ones such as those used in disaster response. In this paper, we propose to reduce power consumption and temperature in robots with high-power DC actuators and no cooling systems solely through motion planning. We first propose a parametric thermal model for brushless DC motors which accounts for the relationship between internal and external temperature and the motors' thermal resistances. Then, we introduce temperature variables and a thermal-model constraint into a trajectory optimization problem, which allows power consumption to be minimized or temperature bounds to be enforced during motion planning. We show that the approach leads to qualitatively different motion compared to typical cost-function choices, as well as energy consumption gains of up to 40%.

Wei Xin Tan, Martim Brandão, Kenji Hashimoto, Atsuo Takanishi
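As a rough illustration of the kind of parametric thermal model this abstract describes — not the authors' model — the following is a two-node (winding/case) first-order sketch, a common structure for DC motor temperature estimation. All parameter values are invented for illustration.

```python
# Illustrative two-node thermal model for a DC motor (hypothetical parameters,
# not those of the paper): Joule losses heat the winding, heat flows from
# winding to case and from case to ambient through thermal resistances.

def simulate_motor_temperature(currents, dt=0.01,
                               R_winding=1.2,        # electrical resistance [ohm]
                               Rth_wc=1.5,           # winding->case thermal resistance [K/W]
                               Rth_ca=4.0,           # case->ambient thermal resistance [K/W]
                               C_w=20.0, C_c=150.0,  # thermal capacitances [J/K]
                               T_amb=25.0):
    """Return (winding, case) temperature trajectory for a motor current profile."""
    T_w, T_c = T_amb, T_amb
    history = []
    for i in currents:
        P_loss = R_winding * i ** 2                              # Joule heating
        dT_w = (P_loss - (T_w - T_c) / Rth_wc) / C_w             # winding node
        dT_c = ((T_w - T_c) / Rth_wc - (T_c - T_amb) / Rth_ca) / C_c  # case node
        T_w += dT_w * dt
        T_c += dT_c * dt
        history.append((T_w, T_c))
    return history
```

In a trajectory optimizer of the kind the paper describes, such a model would enter as an equality constraint coupling motor currents to temperature states, with an inequality bound on the winding temperature.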

SPGS: A New Method for Autonomous 3D Reconstruction of Unknown Objects by an Industrial Robot

This paper presents the first findings of a new method called surface profile guided scan (SPGS) for 3D surface reconstruction of unknown small-scale objects. The method employs a laser profile sensor mounted on an industrial manipulator, a rotary stage, and a camera. The system requires no prior knowledge of the geometry of the object. The only information available is that the object is located on the rotary table and is within the field of view of the camera and the working space of the industrial robot. First, a number of surface profiles in the vertical direction around the object are generated from captured images. Then, a motion planning step is performed to position the laser sensor along the profile normal. Finally, the 3D surface model is completed by a hole detection and scanning process. The quality of the surface models obtained from real objects with our system proves the effectiveness and versatility of our 3D reconstruction method.

Cihan Uyanik, Sezgin Secil, Metin Ozkan, Helin Dutagaci, Kaya Turgut, Osman Parlaktuna

A Modified Computed Torque Control Approach for a Master-Slave Robot Manipulator System

A modified computed torque controller, adapted from the standard computed torque control law, is presented in this paper. The proposed approach is demonstrated on a 4-degree-of-freedom (DOF) master-slave robot manipulator, and the gain parameters of the modified computed torque controller are optimized using both particle swarm optimization (PSO) and grey wolf optimization algorithms. The feasibility of the proposed controller is tested experimentally and compared with its standard computed torque control counterpart. Controller tuning/optimization is carried out offline in the MATLAB/Simulink environment, and results show that the proposed controller is feasible and performs impressively.

Ololade O. Obadina, Mohamed Thaha, Kaspar Althoefer, M. Hasan Shaheed
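For readers unfamiliar with the baseline this paper modifies, the standard computed torque law is tau = M(q)(q̈_d + Kd·ė + Kp·e) + C(q, q̇)q̇ + g(q), which cancels the nonlinear dynamics so the tracking error obeys a linear second-order equation. Below is a minimal sketch for a 1-DOF pendulum; the plant parameters and gains are hypothetical (the paper tunes such gains with PSO and grey wolf optimization), and this is not the authors' modified controller.

```python
import math

# Standard computed torque control for a 1-DOF pendulum (illustrative only).
m, l, g0 = 1.0, 0.5, 9.81        # mass [kg], link length [m], gravity [m/s^2]
Kp, Kd = 100.0, 20.0             # hypothetical, critically damped PD gains

def computed_torque(q, qd, q_des, qd_des, qdd_des):
    M = m * l ** 2                       # joint-space inertia
    gravity = m * g0 * l * math.sin(q)   # gravity torque
    e, e_dot = q_des - q, qd_des - qd
    # Feedback linearisation: cancel gravity, impose linear error dynamics.
    return M * (qdd_des + Kd * e_dot + Kp * e) + gravity

def simulate_setpoint(q_des, steps=5000, dt=0.001):
    """Simulate the closed loop from rest; returns the final joint angle."""
    q = qd = 0.0
    for _ in range(steps):
        tau = computed_torque(q, qd, q_des, 0.0, 0.0)
        qdd = (tau - m * g0 * l * math.sin(q)) / (m * l ** 2)  # plant dynamics
        qd += qdd * dt
        q += qd * dt
    return q
```

With an exact model, the error dynamics reduce to ë + Kd·ė + Kp·e = 0, so the joint converges to the set-point; model mismatch is what motivates modifications and gain optimization such as those studied in the paper.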

Data Synthesization for Classification in Autonomous Robotic Grasping System Using ‘Catalogue’-Style Images

The classification and grasping of randomly placed objects, where only a limited number of training images are available, remain a challenging problem. Approaches such as data synthesis have been used to synthetically create larger training data sets from a small set of training data and can improve performance. This paper examines how limited product images for ‘off the shelf’ items can be used to generate a synthetic data set for training a network that allows classification of the item, segmentation and grasping. Experiments investigating the effects of data synthesis are presented, and the trained network is then implemented in a robotic system to perform grasping of objects.

Michael Cheah, Josie Hughes, Fumiya Iida

BounceBot: A One-Legged Jumping Robot

This paper describes the design and development of a jumping robot made from readily available components and 3D printed parts. The robot is designed to traverse obstacles that are too large for conventional locomotion methods, utilising elastic potential energy to store and release kinetic energy at differing rates. Rapidly releasing the built-up energy in this manner enables a small, lightweight actuator to exceed its continuous torque output. This is used to accelerate the robot vertically and jump over an obstacle up to ten times its own height. The use of soft 3D printed materials allows the robot to resist the impact caused by landing on or jumping into obstacles. Given its performance and the availability and cost of its parts, this prototype provides a good platform for further research into this viable yet under-developed locomotion method. As the design is open source, researchers are free to use the details contained in this report along with the documentation available online. The concept can be used in a range of situations involving locomotion over uneven terrain. Potential applications include hazardous disaster site evaluation, planet exploration, and search and rescue.

James Rogers, Katherine Page-Bailey, Ryan Smith

Estimating Grasping Patterns from Images Using Finetuned Convolutional Neural Networks

Identification of suitable grasping patterns for numerous objects is a challenging computer vision task. It plays a vital role in robotics, where a robotic hand is used to grasp different objects. Most of the work in the area is based on 3D robotic grippers. An ample amount of work can also be found on humanoid robotic hands. However, there is negligible work on estimating grasping patterns from 2D images of various objects. In this paper, we propose a novel method to learn grasping patterns from images and data recorded from a dataglove, provided by the TUB Dataset. Our method retrains a pre-trained deep Convolutional Neural Network (CNN), AlexNet, to learn deep features from images that correspond to human grasps. The results show that some interesting grasping patterns are learned. In addition, we use two methods, Support Vector Machines (SVM) and Hotelling's T² test, to demonstrate that the dataset does include distinctive grasps for different objects. The results show promising grasping patterns that resemble actual human grasps.

Ashraf Zia, Bernard Tiddeman, Patricia Shaw

Soft and Bioinspired Robotics

Frontmatter

Easy Undressing with Soft Robotics

Dexterity impairments affect many people worldwide, limiting their ability to easily perform daily tasks and to be independent. Difficulty getting dressed and undressed is commonly reported. Some research has been performed on robot-assisted dressing, where an external device helps the user put on and take off clothes. However, no wearable robotic technology or robotic assistive clothing has yet been proposed that actively helps the user dress. In this article, we introduce the concept of Smart Adaptive Clothing, which uses Soft Robotic technology to assist the user in dressing and undressing. We discuss how Soft Robotic technologies can be applied to Smart Adaptive Clothing and present a proof of concept study of a Pneumatic Smart Adaptive Belt. The belt weighs only 68 g, can expand by up to 14% in less than 6 s, and is demonstrated aiding undressing on a mannequin, achieving an extremely low undressing time of 1.7 s.

Tim Helps, Majid Taghavi, Sarah Manns, Ailie J. Turton, Jonathan Rossiter

Biomimetic Knee Design to Improve Joint Torque and Life for Bipedal Robotics

This paper details the design, construction, and performance analysis of a biologically inspired knee joint for use in bipedal robotics. The design copies the condylar surfaces of the distal end of the femur and utilizes the same crossed four-bar linkage design that the human knee uses. The joint includes a changing center of rotation, a screw-home mechanism, and a patella; these characteristics of the knee are desirable to copy for bipedal robotics. The design was calculated to have an average sliding-to-rolling ratio of 0.079, a maximum moment arm of 2.7 in, and a range of motion of 151°. This should reduce wear and give performance similar to the human knee. Prototypes of the joint have been created to test these predicted properties.

Alexander G. Steele, Alexander Hunt, Appolinaire C. Etoundi

Evaluating the Radiation Tolerance of a Robotic Finger

In 2024, the Large Hadron Collider (LHC) at CERN will be upgraded to increase its luminosity by a factor of 10 (HL-LHC). The ATLAS inner detector (ITk) will be upgraded at the same time. It has suffered the most radiation damage, as it is the section closest to the beamline and the particle collisions. Due to the risk of excessive radiation doses, human intervention to decommission the inner detector will be restricted. Robotic systems are being developed to carry out the decommissioning and limit the radiation exposure of personnel. In this paper, we present a study of the radiation tolerance of a robotic finger, assessed at the Birmingham cyclotron facility. The finger was part of the Shadow Grasper from the Shadow Robot Company, which uses a set of Maxon DC motors.

Richard French, Alice Cryer, Gabriel Kapellmann-Zafra, Hector Marin-Reyes

Soft Pneumatic Prosthetic Hand

Conventional prosthetic devices are heavy, expensive and rigid. They are complex and fragile, and require sophisticated control strategies in order to deal with grasping and manipulation tasks. In this paper we propose a new pneumatic soft prosthetic hand that is very simple to control, due to its compliant structure, and cheap to produce. It is designed to be easily reshaped and resized to adapt to each individual user's preferences, and to be frequently replaced whenever a child patient requires a bigger size or whenever the old one is worn out or broken. Since it is soft and compliant, it can be safely used even by small children without a risk of harmful mechanical interaction.

Jan Fras, Kaspar Althoefer

Path Planning and Autonomous Vehicles

Frontmatter

Tabu Temporal Difference Learning for Robot Path Planning in Uncertain Environments

This paper addresses the robot path planning problem in uncertain environments, where the robot has to avoid potential collisions with other agents or obstacles, as well as rectify actuation errors caused by environmental disturbances. This problem is motivated by many practical applications, such as ocean exploration by underwater vehicles and package transportation in a warehouse by mobile robots. The novel feature of this paper is that we propose a Tabu methodology, consisting of an Adaptive Action Selection Rule and a Tabu Action Elimination Strategy, to improve the classic Temporal Difference (TD) learning approach. Furthermore, two classic TD learning algorithms (i.e., Q-learning and SARSA) are revised using the proposed Tabu methodology to optimize learning performance. We use a simulated environment to evaluate the proposed algorithms. The results show that the proposed approach can provide an effective solution for generating collision-free and safe paths for robots in uncertain environments.

Changyun Wei, Fusheng Ni
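To illustrate the general idea of combining TD learning with tabu action elimination — this is a minimal sketch under invented parameters, not the authors' exact Adaptive Action Selection Rule — a Q-learning agent can temporarily ban actions that recently caused collisions in a given state:

```python
import random
from collections import defaultdict, deque

ALPHA, GAMMA, EPSILON, TABU_LEN = 0.1, 0.9, 0.1, 3   # hypothetical parameters

Q = defaultdict(float)                                # (state, action) -> value
tabu = defaultdict(lambda: deque(maxlen=TABU_LEN))    # state -> banned actions

def select_action(state, actions):
    """Epsilon-greedy selection restricted to non-tabu actions."""
    allowed = [a for a in actions if a not in tabu[state]] or list(actions)
    if random.random() < EPSILON:
        return random.choice(allowed)                 # tabu-filtered exploration
    return max(allowed, key=lambda a: Q[(state, a)])  # greedy over allowed actions

def update(state, action, reward, next_state, actions, collided):
    """Standard Q-learning update plus tabu elimination of colliding actions."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    if collided:
        tabu[state].append(action)  # ban this action in this state for a while
```

The tabu list prevents exploration from repeatedly re-sampling known-bad actions, which is the intuition behind the paper's action-elimination strategy.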

Modelling and Predicting Rhythmic Flow Patterns in Dynamic Environments

We present a time-dependent probabilistic map able to model and predict flow patterns of people in indoor environments. The proposed representation models the likelihood of motion direction on a grid-based map by a set of harmonic functions, which efficiently capture long-term (minutes to weeks) variations of crowd movements over time. The evaluation, performed on data from two real environments, shows that the proposed model enables prediction of human movement patterns in the future. Potential applications include human-aware motion planning, improving the efficiency and safety of robot navigation.

Sergi Molina, Grzegorz Cielniak, Tomáš Krajník, Tom Duckett
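The harmonic representation described above can be illustrated with a least-squares fit of periodic basis functions to a time series — a minimal sketch in the spirit of spectral models of this kind, with invented periods and data, not the authors' implementation:

```python
import numpy as np

def fit_harmonics(times, values, periods):
    """Least-squares fit of p(t) ~ a0 + sum_k (a_k cos(w_k t) + b_k sin(w_k t))."""
    cols = [np.ones_like(times)]
    for T in periods:
        w = 2 * np.pi / T
        cols += [np.cos(w * times), np.sin(w * times)]
    A = np.stack(cols, axis=1)                     # design matrix, one column per basis
    coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)
    return coeffs

def predict(times, coeffs, periods):
    """Evaluate the fitted harmonic model at (possibly future) times."""
    cols = [np.ones_like(times)]
    for T in periods:
        w = 2 * np.pi / T
        cols += [np.cos(w * times), np.sin(w * times)]
    return np.stack(cols, axis=1) @ coeffs
```

Because the model is a fixed set of harmonics, it extrapolates beyond the training window, which is what enables prediction of future movement patterns.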

Extending Deep Neural Network Trail Navigation for Unmanned Aerial Vehicle Operation Within the Forest Canopy

Autonomous flight within a forest canopy represents a key challenge for generalised scene understanding on-board future Unmanned Aerial Vehicle (UAV) platforms. Here we present an approach for automatic trail navigation within such an unstructured environment that successfully generalises across differing image resolutions, allowing UAVs with varying sensor payload capabilities to operate equally well in such challenging environmental conditions. Specifically, this work presents an optimised deep neural network architecture, capable of state-of-the-art performance across varying-resolution aerial UAV imagery, that improves forest trail detection for UAV guidance even when using significantly lower resolution images, representative of low-cost, search-and-rescue-capable UAV platforms.

Bruna G. Maciel-Pearson, Patrice Carbonneau, Toby P. Breckon

Virtual Environment for Training Autonomous Vehicles

Driver assistance and semi-autonomous features are regularly added to commercial vehicles, with two key stakes: collecting data for training self-driving algorithms, and using these vehicles as testbeds for those algorithms. Due to the nature of the algorithms used in autonomous vehicles, their behavior in unknown situations is not fully predictable. This calls for extensive testing. In this paper, we propose to use a virtual environment both for testing algorithms for autonomous vehicles and for acquiring simulated data for their training. The benefit of this environment is the ability to train algorithms with realistic simulated sensor data before their deployment in real life. To this end, the proposed virtual environment has the capacity to generate data similar to that of real sensors (e.g. cameras, LiDAR, ...). After reviewing state-of-the-art techniques and datasets available to the automotive industry, we identify that dynamic data generated on demand is needed to improve current results in training autonomous vehicles. Our proposition describes the benefits a virtual environment brings in improving the development, quality and confidence in the algorithms.

Jerome Leudet, Tommi Mikkonen, François Christophe, Tomi Männistö

Comparing Model-Based and Data-Driven Controllers for an Autonomous Vehicle Task

The advent of autonomous vehicles comes with many questions from an ethical and technological point of view. The need for high-performing controllers that show transparency and predictability is crucial to generate trust in such systems. Popular data-driven, black-box-like approaches such as deep learning and reinforcement learning are used more and more in robotics due to their ability to process large amounts of information with outstanding performance, but they raise concerns about their transparency and predictability. Model-based control approaches are still a reliable and predictable alternative, used extensively in industry but with restrictions of their own. Which of these approaches is preferable is difficult to assess, as they are rarely compared directly with each other on the same task, especially for autonomous vehicles. Here we compare two popular approaches for control synthesis, model-based control, i.e. a Model Predictive Controller (MPC), and data-driven control, i.e. Reinforcement Learning (RL), on a lane-keeping task with a speed limit for an autonomous vehicle; the controllers were to take control after a human driver had departed the lane or exceeded the speed limit. We report the differences between the two control approaches in terms of analysis, architecture, synthesis, tuning and deployment, and compare their performance, taking the overall benefits and difficulties of each control approach into account.

Erwin Jose Lopez Pulgarin, Tugrul Irmak, Joel Variath Paul, Arisara Meekul, Guido Herrmann, Ute Leonards

An Improved Robot Path Planning Model Using Cellular Automata

Bio-inspired techniques have been successfully applied to the path-planning problem. Amongst these techniques, Cellular Automata (CA) have been seen as a potential alternative due to their decentralized structure and low computational cost. In this work, an improved CA model is implemented and evaluated in both simulated and real environments using the e-puck robot. The objective was to construct a collision-free path from the robot's initial position to the target position by applying the refined CA model to pre-processed images of the environment captured during navigation. The simulations and real experiments show promising results on the model's performance for a single robot.

Luiz G. A. Martins, Rafael da P. Cândido, Mauricio C. Escarpinati, Patricia A. Vargas, Gina M. B. de Oliveira

Robotics Vision and Teleoperation

Frontmatter

Colias IV: The Affordable Micro Robot Platform with Bio-inspired Vision

Vision is one of the most important sensing modalities for robots, but has mostly been realized on large platforms. For the micro robots commonly utilized in swarm robotics studies, visual ability is seldom applied, or only with reduced functionality and resolution, due to the high demands on computation power. This research proposes the low-cost micro ground robot Colias IV, which is specifically designed to allow embedded vision-based tasks to run on-board, such as bio-inspired collision-detection neural networks. Numerous successful approaches have demonstrated the proposed micro robot Colias IV to be a feasible platform for introducing vision-based algorithms into swarm robotics.

Cheng Hu, Qinbing Fu, Shigang Yue

ResQbot: A Mobile Rescue Robot with Immersive Teleperception for Casualty Extraction

In this work, we propose a novel mobile rescue robot equipped with immersive stereoscopic teleperception and teleoperation control. The robot is designed with the capability to safely perform a casualty-extraction procedure. We have built a proof-of-concept mobile rescue robot called ResQbot as the experimental platform. An approach called “loco-manipulation” is used to perform the casualty-extraction procedure with the platform. The performance of the robot is evaluated in terms of task accomplishment and safety by conducting a mock rescue experiment, using a custom-made, human-sized dummy that has been sensorised to act as the casualty. In terms of safety, we observe several parameters during the experiment, including impact force, acceleration, speed and displacement of the dummy's head. We also compare the proposed immersive stereoscopic teleperception to conventional monocular teleperception. The results of the experiments show that the observed safety parameters remain below key thresholds beyond which head or neck injuries could occur. Moreover, the teleperception comparison demonstrates an improvement in task-accomplishment performance when the operator uses the immersive teleperception.

Roni Permana Saputra, Petar Kormushev

Seeing the Unseen: Locating Objects from Reflections

Inspired by the ubiquitous use of reflections in the human vision system, in this paper we present a first exploration of using reflections to extend the field of view (FOV) of cameras in computer vision applications. We make use of a stereo camera and establish mathematical models for locating objects from their mirror reflections. We also propose a pipeline to track and locate moving objects from their reflections captured in videos. Experimental results demonstrate the efficiency and effectiveness of the proposed method, and verify the potential use of reflections in locating non-line-of-sight (NLOS) objects.

Jing Wu, Ze Ji

The Effect of Pose on the Distribution of Edge Gradients in Omnidirectional Images

Images from omnidirectional cameras are used frequently in applications involving artificial intelligence and robotics as a source of rich information about the surroundings. A useful feature that can be extracted from these images is the distribution of the gradients of edges in the scene. This distribution is affected by the pose of the camera on-board a robot at any given location in the environment. This paper investigates the effect of pose on this distribution. The gradients in the images are extracted and arranged into a histogram, which is then compared to the histograms of other images using a chi-squared test. It is found that differences in the distribution are not specific to either position or orientation, and that there is a significant difference between the distributions at two separate locations. This can aid in the localisation of robots during navigation.

Dean Jarvis, Theocharis Kyriacou
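The pipeline described above — extract edge gradients, bin them into a histogram, compare histograms with a chi-squared measure — can be sketched as follows. This is an illustrative implementation under generic assumptions (36 orientation bins, magnitude-weighted histograms), not the paper's code:

```python
import numpy as np

def gradient_orientation_histogram(image, bins=36):
    """Magnitude-weighted histogram of gradient orientations in a grey image."""
    gy, gx = np.gradient(image.astype(float))     # derivatives along rows, cols
    magnitude = np.hypot(gx, gy)
    angle = np.arctan2(gy, gx)                    # orientation in [-pi, pi]
    hist, _ = np.histogram(angle, bins=bins, range=(-np.pi, np.pi),
                           weights=magnitude)
    total = hist.sum()
    return hist / total if total > 0 else hist    # normalise to a distribution

def chi_squared_distance(h1, h2, eps=1e-10):
    """Symmetric chi-squared distance between two normalised histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
```

A small distance indicates that two images were likely taken at nearby poses, which is the property the paper exploits for localisation.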

HRI, Assistive and Medical Robotics

Frontmatter

Learning to Listen to Your Ego-(motion): Metric Motion Estimation from Auditory Signals

This paper is about robot ego-motion estimation relying solely on acoustic sensing. By equipping a robot with microphones, we investigate the possibility of employing the noise generated by the motors and actuators of the vehicle to estimate its motion. Audio-based odometry is not affected by the scene's appearance, lighting conditions, or structure. This makes sound a compelling auxiliary source of information for ego-motion modelling in environments where more traditional methods, such as those based on visual or laser odometry, are particularly challenged. By leveraging multi-task learning and deep architectures, we provide a regression framework able to estimate the linear and angular velocity at which the robot has been travelling. Our experimental evaluation, conducted on approximately two hours of data collected with an unmanned outdoor field robot, demonstrated an absolute error lower than 0.07 m/s and 0.02 rad/s for the linear and angular velocity, respectively. When compared to a baseline approach making use of a single-task learning scheme, our system shows an improvement of up to 26% in ego-motion estimation.

Letizia Marchegiani, Paul Newman

Piloting Scenarios for Children with Autism to Learn About Visual Perspective Taking

Visual Perspective Taking (VPT) is the ability to see the world from another person's perspective, taking into account what they see and how they see it, drawing upon both spatial and social information. Children with autism often find it difficult to understand that other people might have perspectives, viewpoints, beliefs and knowledge that are different from their own, which is a fundamental aspect of VPT. In this paper, we present the piloting of scenarios for our first large-scale pilot study using a humanoid robot to assist children with autism in developing their VPT skills. The games were implemented with the Kaspar robot, and to our knowledge this is the first attempt to improve the VPT skills of children with autism through playing and interacting with a humanoid robot.

Luke Jai Wood, Ben Robins, Gabriella Lakatos, Dag Sverre Syrdal, Abolfazl Zaraki, Kerstin Dautenhahn

User Detection, Tracking and Recognition in Robot Assistive Care Scenarios

The field of assistive robotics is gaining traction in both research and industry communities. However, the capabilities of existing robotic platforms still require improvements in order to implement meaningful human-robot interactions. We report on the design and implementation of an external system that significantly augments the person detection, tracking and identification capabilities of the Pepper robot. We perform a qualitative analysis of the improvements achieved by each system module under different interaction conditions and evaluate the whole system on a scenario for elderly care assistance.

Ştefania Alexandra Ghiţă, Miruna-Ştafania Barbu, Alexandru Gavril, Mihai Trăscău, Alexandru Sorici, Adina Magda Florea

Hypertonic Saline Solution for Signal Transmission and Steering in MRI-Guided Intravascular Catheterisation

The use of traditional low-impedance sensor leads is highly undesirable in intravascular catheters used with MRI guidance; thermal safety and imaging quality are particularly impacted by these components. In this paper, we show that hypertonic saline solution, a high-impedance body-like fluid, can be a compatible and effective signal transmission medium when used in MRI-compatible catheters. We also propose a simple catheter design that can be steered hydraulically using the same saline solution. Integration of hydraulic steering is not required for MRI compatibility; however, an efficient design can bring advantages in terms of structural simplicity and miniaturisation. Manufacturing of proof-of-concept prototypes using 3D printing is underway.

Alberto Caenazzo, Kaspar Althoefer

A Conceptual Exoskeleton Shoulder Design for the Assistance of Upper Limb Movement

There is increased interest in wearable technologies for rehabilitation and human augmentation. Systems focusing on the upper limbs attempt to replicate the musculoskeletal structures found in humans, reproducing existing behaviors and capabilities. The current work expands on existing systems with a novel design that ensures the maximum range of motion while allowing for lockable features that enable higher manipulation payloads at minimum energy and fatigue cost. An analysis of the biomechanics of the shoulder is presented, and a detailed system design for the structural as well as the actuation elements of a parallel mechanism is given. The benefits of reduced weight and maximum range of motion at minimum energy cost are discussed.

Carlos Navarro Perez, Ioannis Georgilas, Appolinaire C. Etoundi, Jj Chong, Aghil Jafari

Swarm Robotics

Frontmatter

A Hormone Arbitration System for Energy Efficient Foraging in Robot Swarms

Keeping robots optimized for an environment can be computationally expensive, time consuming, and sometimes requires information unavailable to a robot swarm before it is assigned to a task. This paper proposes a hormone-inspired system to arbitrate the states of a foraging robot swarm. The goal of this system is to increase the energy efficiency of food collection by adapting the swarm to environmental factors during the task. These adaptations modify the amount of time the robots rest in a nest site and how likely they are to return to the nest site when avoiding an obstacle, both factors that previous studies have identified as having a significant effect on energy efficiency. This paper shows that, when compared to an offline optimized system, there are a variety of environments in which the hormone system achieves increased performance. This work shows that a hormone arbitration system can infer environmental features from stimuli and use them to adapt.

James Wilson, Jon Timmis, Andy Tyrrell
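The general mechanism of hormone-style arbitration can be sketched with a single decaying variable whose level modulates a behaviour parameter. This is a minimal illustration under invented assumptions — a hypothetical stimulus (e.g. repeated obstacle encounters signalling a crowded arena) releases hormone, and higher levels lengthen nest rest time; none of the constants come from the paper:

```python
DECAY, GAIN = 0.95, 0.2                 # per-step hormone decay and stimulus gain
BASE_REST, MAX_EXTRA_REST = 5.0, 20.0   # rest-time range [s]

def update_hormone(hormone, stimulus):
    """One control step: exponential decay plus stimulus-driven release."""
    return DECAY * hormone + GAIN * stimulus

def rest_time(hormone):
    """Higher hormone level -> rest longer in the nest to save energy."""
    level = min(max(hormone, 0.0), 1.0)  # clamp to [0, 1]
    return BASE_REST + MAX_EXTRA_REST * level
```

The decay term makes the adaptation reversible: when the stimulus disappears, the hormone level falls and the behaviour parameter returns to its baseline.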

A Bio-inspired Aggregation with Robot Swarm Using Real and Simulated Mobile Robots

This paper presents an implementation of a bio-inspired aggregation scenario using swarm robots. The aggregation scenario takes inspiration from honeybees' thermotactic behaviour in finding an optimal zone in their comb. To realise the aggregation scenario, real and simulated robots with different population sizes were used. Mona, an open-source and open-hardware platform, was deployed to play the honeybee's role in this scenario. A model of Mona was also created in Stage for simulation of the aggregation scenario with a large number of robots. The results of aggregation with real and simulated robots showed reliable aggregation and a population-dependent swarm performance. Moreover, the results demonstrated a direct correlation between the observations from the real-robot and simulation experiments.

Sarika Ramroop, Farshad Arvin, Simon Watson, Joaquin Carrasco-Gomez, Barry Lennox

SO-MRS: A Multi-robot System Architecture Based on the SOA Paradigm and Ontology

A generic architecture for a class of distributed robotic systems is presented. The architecture supports openness and heterogeneity, i.e. heterogeneous components may be added to and removed from the system without affecting its basic functionality. The architecture is based on the paradigm of Service Oriented Architecture (SOA) and a generic representation (ontology) of the environment. A device (e.g. a robot) is seen as a collection of its capabilities, exposed as services. Generic protocols for publishing, discovering and arranging services are proposed for creating composite services that can accomplish complex tasks in an automatic way. Generic protocols for the execution of composite services are also proposed, along with simple protocols for monitoring executions and for recovery from failures. The proposed architecture and generic protocols were implemented as a software platform and tested on several multi-robot systems.

Kamil Skarzynski, Marcin Stepniak, Waldemar Bartyna, Stanislaw Ambroszkiewicz

Robotics Applications

Frontmatter

ROS Integration for Miniature Mobile Robots

In this paper, the feasibility of using the Robot Operating System (ROS) for controlling miniature mobile robots was investigated. Open-source and low-cost robots employ limited processors, hence running ROS on such systems is very challenging. We therefore provide a compact, low-cost, and open-source module enabling miniature multi- and swarm-robotic systems of different sizes and types to be integrated with ROS. To investigate the feasibility of the proposed system, several experiments using a single robot and multiple robots were carried out, and the results demonstrated the amenability of the system to integration into low-cost and open-source miniature mobile robots.

Andrew West, Farshad Arvin, Horatio Martin, Simon Watson, Barry Lennox

Feature and Performance Comparison of the V-REP, Gazebo and ARGoS Robot Simulators

In this paper, the characteristics and performance of three open-source simulators for robotics, V-REP, Gazebo and ARGoS, are thoroughly analysed and compared. While they all allow for programming in C++, they also represent clear alternatives when it comes to the trade-off between complexity and performance. Attention is given to their built-in features, robot libraries, programming methods and the usability of their user interfaces. Benchmark test results are reported in order to identify how well the simulators can cope with environments of varying complexity. The richness of features of V-REP and the strong performance of Gazebo and ARGoS in complex scenes are highlighted. Various usability issues of Gazebo are also noted.

Lenka Pitonakova, Manuel Giuliani, Anthony Pipe, Alan Winfield

A Hybrid Underwater Acoustic and RF Localisation System for Enclosed Environments Using Sensor Fusion

Underwater localisation systems are traditionally based on acoustic range estimation, which lacks the accuracy to localise small underwater vehicles in enclosed, structured environments for mapping and surveying purposes. The high attenuation of electromagnetic waves underwater can be exploited to obtain a more precise distance estimation over short distances. This work proposes a cooperative localisation system that combines an acoustic absolute localisation system with peer-to-peer distance estimation based on electromagnetic radio frequency (RF) attenuation between multiple robots. The proposed system is able to improve the position estimation of a group of Autonomous Underwater Vehicles (AUVs) or Remotely Operated Vehicles (ROVs) exploring enclosed environments.

Jose Espinosa, Mihalis Tsiakkas, Dehao Wu, Simon Watson, Joaquin Carrasco, Peter R. Green, Barry Lennox
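Peer-to-peer range estimation from RF attenuation, as described in the abstract above, is commonly derived from a log-distance path-loss model. A minimal sketch of that idea follows; the function name and constants are illustrative assumptions, not taken from the paper (underwater EM attenuation is high, so the path-loss exponent is large and the usable range short):

```python
def distance_from_rssi(rssi_dbm, rssi_at_d0=-40.0, d0=1.0, path_loss_exp=4.0):
    """Estimate distance (m) from received signal strength using the
    log-distance path-loss model:
        RSSI(d) = RSSI(d0) - 10 * n * log10(d / d0)
    Solving for d gives the expression below."""
    return d0 * 10 ** ((rssi_at_d0 - rssi_dbm) / (10.0 * path_loss_exp))
```

In a cooperative scheme like the one proposed, each robot would feed such short-range RF distance estimates, together with the acoustic absolute fix, into a shared state estimator.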

Towards a Comprehensive Taxonomy for Characterizing Robots

Every day a new robot is developed with advanced characteristics and technical qualities. The increasingly rapid growth of robots and their characteristics demands bridging between the application requirements and the robot specifications. This process requires a supporting conceptual structure that can capture as many robot qualities as possible. Presenting robot characteristics through the proposed conceptual structure would enable designers to optimize robot capabilities against application requirements. It would also help application developers to select the most appropriate robot. Without a formal structure, an accurate linking between the robot domain and the application domain is not possible. This paper presents a novel theoretical representation that can capture robot features and capabilities and express them as descriptive dimensions to be used to develop a capability profile. The profile is intended to unify robot description and presentation. The proposed structure is reinforced with several layers, sections, categorizations and levels to allow a detailed explanation of robot characteristics. It is hoped that the proposed structure will influence the design, development, and testing of robots for specific applications. At the same time, it would help in highlighting the corresponding outlines in robot application requirements.

Manal Linjawi, Roger K. Moore

Implementation and Validation of Kalman Filter Based Sensor Fusion on the Zorro Mini-robot Platform

This paper focuses on the implementation of a Kalman filter for a sensor fusion task, and on the testing and validation of the implementation using a test platform. The sensors and the fusion algorithm are implemented on the Zorro mini-robot platform, which is equipped with multiple sensors. To internally develop a consistent model of the robot's world, sensor data has to be fused. The fused data is used to control the behavior of the robot, which should be able to act autonomously. To test the sensor fusion and the resulting behavior, a Teleworkbench test system has been developed that supports video recording and analysis of the robot's behavior, complemented by wireless transmission of the robot's internal sensor and state data. Both the video data and the sensor data are matched and displayed on the operator's computer of the Teleworkbench system for detailed analysis.

Philipp Bolte, Joyce Martin, Reza Zandian, Ulf Witkowski
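The fusion step at the heart of such a system can be illustrated with a one-dimensional Kalman filter. The sketch below is a generic textbook formulation, not the Zorro implementation; all names and noise values are illustrative:

```python
def kalman_predict(x, p, u=0.0, q=0.01):
    """Propagate the state estimate x (with variance p) through a simple
    additive motion model with control input u and process noise q."""
    return x + u, p + q

def kalman_update(x, p, z, r):
    """Fuse a measurement z (with variance r) into the estimate x (variance p)."""
    k = p / (p + r)                      # Kalman gain: weight of the measurement
    return x + k * (z - x), (1.0 - k) * p
```

Fusing readings of the same quantity from two sensors amounts to applying the update twice, once with each sensor's measurement variance; the estimate's variance shrinks with every update.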

Evaluation of a Robot-Assisted Therapy for Children with Autism and Intellectual Disability

It is well established that robots can be suitable assistants in the care and treatment of children with Autism Spectrum Disorder (ASD). However, the majority of the research focuses on stand-alone interventions and high-functioning individuals, and success is evaluated via qualitative analysis of videos recorded during the interaction. In this paper, we present a preliminary evaluation of our on-going research on integrating robot-assisted therapy into the treatment of children with ASD and Intellectual Disability (ID), which is the most common case. The experiment described here integrates robot-assisted imitation training into the standard treatment of six hospitalised children with various levels of ID, who were engaged by a robot in imitative tasks, and their progress was assessed via a quantitative psycho-diagnostic tool. Results show success in the training and encourage the use of a robotic assistant in the care of children with ASD and ID, with the exception of those with profound ID, who may need a different approach.

Daniela Conti, Grazia Trubia, Serafino Buono, Santo Di Nuovo, Alessandro Di Nuovo

Towards an Unmanned 3D Mapping System Using UWB Positioning

The work presented in this paper is part of a Horizon 2020 project known as DigiArt, which aims to deploy an unmanned ground vehicle (UGV) mounted with a 3D scanning LiDAR to generate 3D maps of an archaeological subterranean environment. The challenge faced when using 3D scanning LiDAR is localizing the LiDAR device and accounting for motion in order to register sequential point cloud frames. Traditional approaches such as GPS and vision-based systems are unsuitable for the intended environment due to signal restrictions and low lighting conditions respectively. Therefore, this paper seeks to assess an alternative method in the form of an ultra-wideband (UWB) positioning system known as Pozyx. Experimental results show average distance errors of 4.8 cm, 10 cm, 6.5 cm and 8.3 cm for clear line of sight (CLOS) and 11 cm, 10 cm, 13.8 cm and 24 cm for non-clear line of sight (NCLOS) when the receiver is oriented at 90°, 60°, 30° and 0° respectively.

Benjamin McLoughlin, Jeff Cullen, Andy Shaw, Frederic Bezombes
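A UWB system like the one assessed above turns anchor-to-tag ranges into a position fix, typically by trilateration. The following sketch shows the standard linearised least-squares formulation for a 2D fix from three or more fixed anchors; it is illustrative only and not the Pozyx firmware's algorithm:

```python
def trilaterate(anchors, dists):
    """2D least-squares position fix from >= 3 anchor (x, y) positions and
    measured ranges. Subtracting the first anchor's circle equation from the
    others cancels the quadratic terms and leaves a linear system A p = b."""
    (x0, y0), d0 = anchors[0], dists[0]
    rows, rhs = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        rows.append((2.0 * (xi - x0), 2.0 * (yi - y0)))
        rhs.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    # Solve the 2x2 normal equations (A^T A) p = A^T b directly
    a11 = sum(r[0] * r[0] for r in rows)
    a12 = sum(r[0] * r[1] for r in rows)
    a22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * v for r, v in zip(rows, rhs))
    b2 = sum(r[1] * v for r, v in zip(rows, rhs))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

With range errors at the centimetre level reported in the paper, a fix of this kind is what makes UWB attractive for registering LiDAR frames where GPS is unavailable.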

The Multimodal Speech and Visual Gesture (mSVG) Control Model for a Practical Patrol, Search, and Rescue Aerobot

This paper describes a model of multimodal speech and visual gesture (mSVG) control for aerobots operating at higher nCA autonomy levels, within the context of a patrol, search, and rescue application. The developed mSVG control architecture, its mathematical navigation model, and some high-level command operation models are discussed. The approach was successfully tested using both MATLAB simulations and Python-based ROS Gazebo UAV simulations. Some limitations were identified, which form the basis for the further work presented.

Ayodeji O. Abioye, Stephen D. Prior, Glyn T. Thomas, Peter Saddington, Sarvapali D. Ramchurn

Camera-Based Force and Tactile Sensor

Tactile information has become a topic of great interest in the design of devices that explore physical interaction with the external environment. For instance, it is important for a robot hand to perform manipulation tasks, such as grasping and active touching, using tactile sensors mounted on the finger pad to provide feedback information. In this research we present a novel device that obtains both force and tactile information in a single integrated elastomer. The proposed elastomer consists of two parts, one of which is transparent and is wrapped in a translucent one that has eight conical sensing elements underneath. The two parts are merged together via a mould. A CCD camera mounted at the bottom of the device records images of the two elastomer mediums, which are illuminated by LED arrays set inside the device. The method evaluates the state of the contact surface based on analysis of the images of the two elastomers. The external deformation of the elastomer is used to measure three force and moment components, Fz, Mx and My. The measurement is based on the area changes of the conical sensing elements under different loads, while the image of the inner transparent elastomer captures the surface pattern, which can be used to obtain tactile information.

Wanlin Li, Jelizaveta Konstantinova, Yohan Noh, Akram Alomainy, Kaspar Althoefer
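The area-based measurement described above amounts to segmenting each conical sensing element in the camera image and relating the growth of its bright region to the applied load. A minimal sketch of that pipeline, with purely illustrative function names and calibration constants (a real system would calibrate against known loads):

```python
def contact_area(roi, threshold=128):
    """Count pixels brighter than `threshold` in a grayscale region of
    interest (a list of pixel rows) covering one conical sensing element."""
    return sum(1 for row in roi for px in row if px > threshold)

def load_from_area(area, area_unloaded=20, gain=0.05):
    """Map the growth of a sensing element's contact area under load to a
    load value via a linear calibration (coefficients are assumptions here;
    in practice they come from calibration experiments)."""
    return max(0.0, gain * (area - area_unloaded))
```

Combining the per-element loads from all eight conical elements is what would allow the normal force and the two moments to be resolved.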

Backmatter
