
2016 | Book

Robot 2015: Second Iberian Robotics Conference

Advances in Robotics, Volume 2

Editors: Luís Paulo Reis, António Paulo Moreira, Pedro U. Lima, Luis Montano, Victor Muñoz-Martinez

Publisher: Springer International Publishing

Book Series: Advances in Intelligent Systems and Computing


About this book

This book contains a selection of papers accepted for presentation and discussion at ROBOT 2015: Second Iberian Robotics Conference, held in Lisbon, Portugal, November 19th–21st, 2015. ROBOT 2015 is part of a series of conferences jointly organized by SPR – “Sociedade Portuguesa de Robótica”/Portuguese Society for Robotics, SEIDROB – Sociedad Española para la Investigación y Desarrollo de la Robótica/Spanish Society for Research and Development in Robotics, and CEA-GTRob – Grupo Temático de Robótica/Robotics Thematic Group. The conference was also organized in collaboration with several universities and research institutes, including: University of Minho, University of Porto, University of Lisbon, Polytechnic Institute of Porto, University of Aveiro, University of Zaragoza, University of Malaga, LIACC, INESC-TEC and LARSyS.

Robot 2015 focused on the robotics scientific and technological activities in the Iberian Peninsula, although it was open to research and delegates from other countries. The conference featured 19 special sessions plus a main/general robotics track. The special sessions covered: Agricultural Robotics and Field Automation; Autonomous Driving and Driver Assistance Systems; Communication Aware Robotics; Environmental Robotics; Social Robotics: Intelligent and Adaptable AAL Systems; Future Industrial Robotics Systems; Legged Locomotion Robots; Rehabilitation and Assistive Robotics; Robotic Applications in Art and Architecture; Surgical Robotics; Urban Robotics; Visual Perception for Autonomous Robots; Machine Learning in Robotics; Simulation and Competitions in Robotics; Educational Robotics; Visual Maps in Robotics; Control and Planning in Aerial Robotics; the XVI edition of the Workshop on Physical Agents; and a Special Session on Technological Transfer and Innovation.

Table of Contents

Frontmatter

Environmental Robotics

Frontmatter
A UGV Approach to Measure the Ground Properties of Greenhouses

Greenhouse farming is based on controlling the environment of the crops and the supply of water and nutrients to the plants. These activities require monitoring the environmental variables at both global and local scale. This paper presents a ground robot platform for measuring the ground properties of greenhouses. For this purpose, infrared temperature and soil moisture sensors are mounted on an unmanned ground vehicle (UGV). In addition, the navigation strategy is explained, including the path planning and path following approaches. Finally, all the systems are validated in a field experiment and maps of temperature and humidity are produced.

Alberto Ruiz-Larrea, Juan Jesús Roldán, Mario Garzón, Jaime del Cerro, Antonio Barrientos
An Aerial-Ground Robotic Team for Systematic Soil and Biota Sampling in Estuarine Mudflats

This paper presents an aerial-ground field robotic team, designed to collect and transport soil and biota samples in estuarine mudflats. The robotic system has been devised so that its sampling and storage capabilities are suited for radionuclides and heavy metals environmental monitoring. Automating these time-consuming and physically demanding tasks is expected to positively impact both their scope and frequency. The success of an environmental monitoring study heavily depends on the statistical significance and accuracy of the sampling procedures, which most often require frequent human intervention. The bird’s-eye view provided by the aerial vehicle aims at supporting remote mission specification and execution monitoring. This paper also proposes a preliminary experimental protocol tailored to exploit the capabilities offered by the robotic system. Preliminary field trials in real estuarine mudflats show the ability of the robotic system to successfully extract and transport soil samples for offline analysis.

Pedro Deusdado, Eduardo Pinto, Magno Guedes, Francisco Marques, Paulo Rodrigues, André Lourenço, Ricardo Mendonça, André Silva, Pedro Santana, José Corisco, Marta Almeida, Luís Portugal, Raquel Caldeira, José Barata, Luis Flores
Autonomous Seabed Inspection for Environmental Monitoring

We present an approach for navigating in unknown environments while gathering information for inspecting underwater structures using an autonomous underwater vehicle (AUV). To accomplish this, we first use our framework for mapping and planning collision-free paths online, which endows an AUV with the capability to autonomously acquire optical data in close proximity. With that information, we then propose a reconstruction framework to create a 3-dimensional (3D) geo-referenced photo-mosaic of the inspected area. These 3D mosaics are also of particular interest to other fields of study in marine sciences, since they can serve as base maps for environmental monitoring, thus allowing change detection of biological communities and their environment over time. Finally, we evaluate our frameworks independently using SPARUS-II, a torpedo-shaped AUV, conducting missions in real-world scenarios. We also assess our approach in a virtual environment that emulates a natural underwater milieu that requires the aforementioned capabilities.

Juan David Hernández, Klemen Istenic, Nuno Gracias, Rafael García, Pere Ridao, Marc Carreras
Integrating Autonomous Aerial Scouting with Autonomous Ground Actuation to Reduce Chemical Pollution on Crop Soil

Many environmental problems cover large areas, often in rough terrain constrained by natural obstacles, which makes intervention difficult. New technologies, such as unmanned aerial units, may help to address this issue. Due to their suitability for accessing and easily covering large areas, unmanned aerial units may be used to inspect the terrain and make a first assessment of the affected areas; however, these platforms do not currently have the capability to carry out the intervention itself. This paper proposes integrating autonomous aerial inspection with ground intervention to address environmental problems. Aerial units may be used to easily obtain relevant data about the environment, and ground units may use this information to perform the intervention more efficiently. Furthermore, an overall system to manage these combined missions, composed of aerial inspections and ground interventions performed by autonomous robots, is proposed and implemented. The approach was tested in an agricultural scenario, in which the weeds in a crop had to be killed by spraying herbicide on them. The scenario was addressed using a real mixed fleet composed of drones and tractors. The drones were used to inspect the field, detect weeds, and provide the tractors with the exact coordinates needed to spray only the weeds. This aerial and ground mission collaboration may save a large amount of herbicide and hence significantly reduce environmental pollution and treatment cost, considering the results of several research works which conclude that, even in the worst cases, less than 40% of the area of extensive crops is affected by weeds.

Jesús Conesa-Muñoz, João Valente, Jaime del Cerro, Antonio Barrientos, Ángela Ribeiro

Future Industrial Robotics Systems

Frontmatter
Force-Sensorless Friction and Gravity Compensation for Robots

In this paper we present two controllers for robots that combine terms for the compensation of gravity forces and the friction forces of motors and gearboxes. The Low-Friction Zero-Gravity controller allows effortless guidance of the robot, while leaving small friction forces that damp the free robot motion. It can serve to aid users providing kinesthetic demonstrations while programming by demonstration. At present, kinesthetic demonstrations are usually aided by pure gravity compensators, and users must deal with friction. A Zero-Friction Zero-Gravity controller results in free movements, as if the robot were moving without friction or gravity influence. Ideally, only inertia drives the movements when the forces of friction and gravity are zeroed. Coriolis and centrifugal forces are neglected. The developed controllers have been tuned and tested for 1 DoF of a full-sized humanoid robot arm.

Santiago Morante, Juan G. Victores, Santiago Martínez, Carlos Balaguer
Commanding the Object Orientation Using Dexterous Manipulation

This paper presents an approach to change the orientation of a grasped object using dexterous manipulation, teleoperated in a very simple way with commands introduced by an operator using a keyboard. The novelty of the approach lies in a shared control scheme, where the robotic hand uses tactile and kinematic information to manipulate an unknown object, while the operator decides the direction of rotation of the object without having to care about the relation between these commands and the actual hand movements. Experiments were conducted to evaluate the proposed approach with different objects, varying the initial grasp configuration and the sequence of actions commanded by the operator.

Andrés Montaño, Raúl Suárez
Validation of a Time Based Routing Algorithm Using a Realistic Automatic Warehouse Scenario

Traffic control is one of the fundamental problems in the management of an Automated Guided Vehicle (AGV) system. Its main objectives are to assure efficient conflict-free routes and to avoid/solve system deadlocks. In this sense, and as an extension of our previous work, this paper focuses on exploring the capabilities of the Time Enhanced A* (TEA*) to dynamically control a fleet of AGVs, responsible for the execution of a predetermined set of tasks, considering an automatic warehouse case scenario. During the trials, the proposed algorithm showed a high capability to prevent and deal with the occurrence of deadlocks, and also exhibited high efficiency in the generation of collision-free trajectories. Moreover, a state-of-the-art alternative was also selected in order to validate and compare the TEA* results.

Joana Santos, Pedro Costa, Luís Rocha, Kelen Vivaldini, A. Paulo Moreira, Germano Veiga
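
The abstract describes the Time Enhanced A* (TEA*) only at a high level. As a rough, hypothetical sketch of the underlying idea — running an A*-style search over grid cells extended with a discrete time index, so that cells reserved by other AGVs at a given time step act as temporary obstacles and waiting in place becomes a legal move — one might write something like the following; the grid, unit costs and reservation set are invented for illustration and do not reproduce the authors' implementation.

```python
import heapq, itertools

def tea_star(grid, start, goal, reserved, max_t=200):
    """A*-style search over (row, col, t) states.

    grid       : 2D list, 0 = free cell, 1 = static obstacle
    start/goal : (row, col) tuples
    reserved   : set of (row, col, t) cells already claimed by other AGVs
    Waiting in place is a legal move, so congestion can be resolved in
    time instead of only by spatial re-routing.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda r, c: abs(r - goal[0]) + abs(c - goal[1])   # Manhattan heuristic
    tie = itertools.count()                                # avoids comparing parents
    open_list = [(h(*start), 0, next(tie), (start[0], start[1], 0), None)]
    parents, closed = {}, set()

    while open_list:
        f, g, _, state, parent = heapq.heappop(open_list)
        if state in closed:
            continue
        closed.add(state)
        parents[state] = parent
        r, c, t = state
        if (r, c) == goal:
            path = []                                      # reconstruct the timed path
            while state is not None:
                path.append(state)
                state = parents[state]
            return path[::-1]
        if t >= max_t:
            continue
        # 4-connected moves plus the "wait" action (0, 0).
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1), (0, 0)):
            nr, nc, nt = r + dr, c + dc, t + 1
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            if grid[nr][nc] == 1 or (nr, nc, nt) in reserved:
                continue
            heapq.heappush(open_list, (g + 1 + h(nr, nc), g + 1,
                                       next(tie), (nr, nc, nt), state))
    return None   # no conflict-free route within the time horizon

# Toy corridor: another AGV occupies cell (0, 2) at t = 2, so the planner
# inserts a wait step at (0, 1) instead of colliding.
corridor = [[0, 0, 0, 0]]
print(tea_star(corridor, (0, 0), (0, 3), reserved={(0, 2, 2)}))
```
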
Online Robot Teleoperation Using Human Hand Gestures: A Case Study for Assembly Operation

A solution for intuitive robot command and fast robot programming is presented to assemble pins in car doors. Static and dynamic gestures are used to instruct an industrial robot in the execution of the assembly task. An artificial neural network (ANN) was used in the recognition of twelve static gestures and a hidden Markov model (HMM) architecture was used in the recognition of ten dynamic gestures. Results of these two architectures are compared with results displayed by a third architecture based on support vector machine (SVM). Results show recognition rates of 96 % and 94 % for static and dynamic gestures when the ANN and HMM architectures are used, respectively. The SVM architecture presents better results achieving recognition rates of 97 % and 96 % for static and dynamic gestures, respectively.

Nuno Mendes, Pedro Neto, Mohammad Safeea, António Paulo Moreira
Generic Algorithm for Peg-In-Hole Assembly Tasks for Pin Alignments with Impedance Controlled Robots

In this paper, a generic algorithm for peg-in-hole assembly tasks is suggested. It is applied in the project GINKO, where the aim is to connect electric vehicles with charging stations automatically. This paper explains an algorithm applicable to peg-in-hole tasks by means of Cartesian impedance controlled robots. The plugging task is a specialized peg-in-hole task in which 7 pins have to be aligned simultaneously and the peg and the hole have asymmetric shapes. In addition, significant forces are required for complete insertion. The initial position is inaccurately estimated by a vision system. Hence, there are translational and rotational uncertainties between the plug, carried by the robot, and the socket, situated on the e-car. To compensate for these errors, three different steps of Cartesian impedance control are performed. To verify our approach we evaluated the algorithm from many different start positions.

Michael Jokesch, Jozef Suchý, Alexander Winkler, André Fross, Ulrike Thomas
Double A* Path Planning for Industrial Manipulators

Scientific and technological development, together with the world of robotics, is constantly evolving, driven by the need to find new solutions and by the ambition of human beings to develop systems with increasing efficiency. Consequently, it is necessary to develop planning algorithms capable of effectively and safely moving a robot within a given unstructured scene. Moreover, despite the several robotic solutions available, there are still challenges in standardising a development technique able to obviate some pitfalls and limitations present in the robotic world. The Robot Operating System (ROS) arises as the obvious solution in this regard. In this project, a double A* path planning methodology for manipulators in industrial environments was developed and implemented. This paper presents an approach with enough flexibility to be potentially applicable to the different handling scenarios normally found in industrial environments.

Pedro Tavares, José Lima, Pedro Costa
Mobile Robot Localization Based on a Security Laser: An Industry Scene Implementation

Industrial Automated Guided Vehicles (AGVs) usually have two kinds of lasers: one on top for navigation and others for obstacle detection (security lasers). Recently, security lasers have extended their output data with obstacle distance (contours) and reflectivity, which allows the development of a novel localization system based on a security laser. This paper addresses a localization system that avoids a dedicated laser scanner, reducing the implementation cost and robot size. It also performs tracking with the precision and robustness needed to operate AGVs in an industrial environment. An artificial beacon detection algorithm combined with a Kalman filter and an outlier rejection method increases the robustness and precision of the developed system. A comparison between the presented approach and a commercial localization system for industry is presented. Finally, the proposed algorithms were tested in an industrial application under realistic working conditions.

Héber Sobreira, A. Paulo Moreira, Paulo Gomes Costa, José Lima

Legged Locomotion Robots

Frontmatter
Energy Efficient MPC for Biped Semi-passive Locomotion

Traditional methods for robotic biped locomotion employing stiff actuation display low energy efficiency and high sensitivity to disturbances. In order to overcome these problems, a semi-passive approach based on the use of passive elements together with actuation has emerged, inspired by biological locomotion. However, the control strategy for such a compliant system must be robust and adaptable, while ensuring the success of the walking gait. In this paper, a Model Predictive Control (MPC) approach is applied to a simulated actuated Simplest Walker (SW), in order to achieve a stable gait while minimizing energy consumption. Robustness to slope changes and to external disturbances is also studied.

C. Neves, R. Ventura
Monte-Carlo Workspace Calculation of a Serial-Parallel Biped Robot

This paper presents the Monte-Carlo calculation of the workspace of a biped redundant robot for climbing 3D structures. The robot has a hybrid serial-parallel architecture since each leg is composed of two parallel mechanisms connected in series. First, the workspace of the parallel mechanisms is characterized. Then, a Monte-Carlo algorithm is applied to compute the reachable workspace of the biped robot solving only the forward kinematics. This algorithm is modified to compute also the constant-orientation workspace. The algorithms have been implemented in a simulator that can be used to study the variation of the workspace when the geometric parameters of the robot are modified. The simulator is useful for designing the robot, as the examples show.

Adrián Peidró, Arturo Gil, José María Marín, Yerai Berenguer, Luis Payá, Oscar Reinoso
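
The Monte-Carlo workspace computation mentioned above reduces to a simple recipe: sample random joint values within their limits, evaluate only the forward kinematics, and collect the reachable end-effector positions. The sketch below illustrates that recipe on a hypothetical planar 3-link arm rather than the serial-parallel biped of the paper; link lengths and joint limits are arbitrary assumptions.

```python
import numpy as np

def forward_kinematics(q, lengths):
    """End-effector (x, y) of a planar serial arm with joint angles q."""
    angles = np.cumsum(q)
    return np.sum(lengths * np.cos(angles)), np.sum(lengths * np.sin(angles))

def monte_carlo_workspace(n_samples, lengths, joint_limits):
    """Approximate the reachable workspace by random sampling.

    joint_limits: list of (low, high) bounds, one per joint (radians).
    Returns an (n_samples, 2) array of reachable end-effector points.
    """
    rng = np.random.default_rng(0)
    lows = np.array([lo for lo, _ in joint_limits])
    highs = np.array([hi for _, hi in joint_limits])
    points = np.empty((n_samples, 2))
    for i in range(n_samples):
        q = rng.uniform(lows, highs)          # random joint configuration
        points[i] = forward_kinematics(q, lengths)
    return points

# Hypothetical 3-link planar arm with +/- 90 degree joint limits.
lengths = np.array([0.4, 0.3, 0.2])
limits = [(-np.pi / 2, np.pi / 2)] * 3
pts = monte_carlo_workspace(20000, lengths, limits)
print("reachable x-range:", pts[:, 0].min(), pts[:, 0].max())
print("reachable y-range:", pts[:, 1].min(), pts[:, 1].max())
```
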
A Control Driven Model for Human Locomotion

This article concerns the modeling of human locomotion with a view to the design of advanced control systems that are capable of supporting natural mobility and, thus, promoting inclusivity and quality of life. The complexity of the model (among others, the degrees of freedom and motion planes taken into consideration) was carefully chosen to include the relevant features of the motion dynamics while remaining as simple as possible. The outcome is a model composed of three components (stance leg, swing leg and trunk) that are articulated to achieve balanced motion patterns in both transitory and periodic contexts. Each leg has 3 links connected by pitch joints and the trunk has a single link. Significant attention was dedicated to the generation of natural (human-like) motion references, in order to achieve safe and anthropomorphically correct motion that respects the human joints' constraints and can be adjusted to the multiple daily-life situations.

Diana Guimarães, Fernando Lobo Pereira
Biped Walking Learning from Imitation Using Dynamic Movement Primitives

Exploring the full potential of humanoid robots requires their ability to learn, generalize and reproduce complex tasks that will be faced in dynamic environments. In recent years, significant attention has been devoted to recovering kinematic information from human motion using a motion capture system. This paper demonstrates the use of a VICON system to capture human locomotion, which is then used to train a set of Dynamic Movement Primitives (DMPs). These DMPs can then be used to directly control a humanoid robot in task space. The main objectives of this paper are: (1) to study the main characteristics of human natural locomotion and human “robot-like” locomotion; (2) to use the captured motion to train a DMP; (3) to use the DMP to directly control a humanoid robot in task space. Numerical simulations performed on V-REP demonstrate the effectiveness of the proposed solution.

José Rosado, Filipe Silva, Vítor Santos
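
The abstract names Dynamic Movement Primitives without detailing them. The following is a hedged, self-contained sketch of the standard one-dimensional discrete DMP formulation (exponential canonical system, spring-damper transformation system, RBF-weighted forcing term fitted to a demonstration); the gains, basis count and the sine demonstration are arbitrary assumptions and not the authors' setup.

```python
import numpy as np

def train_dmp(y_demo, dt, n_basis=20, alpha_z=25.0, beta_z=6.25, alpha_x=4.0):
    """Fit RBF weights of a 1-D discrete DMP to a demonstrated trajectory."""
    T = len(y_demo)
    yd = np.gradient(y_demo, dt)
    ydd = np.gradient(yd, dt)
    y0, g = y_demo[0], y_demo[-1]
    t = np.arange(T) * dt
    tau = t[-1]                                            # movement duration
    x = np.exp(-alpha_x * t / tau)                         # canonical system
    # Forcing term implied by the demonstration.
    f_target = tau**2 * ydd - alpha_z * (beta_z * (g - y_demo) - tau * yd)
    # Gaussian basis functions spaced in the canonical phase x.
    c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))
    h = 1.0 / np.diff(c, append=c[-1] / 2)**2
    psi = np.exp(-h * (x[:, None] - c)**2)                 # (T, n_basis)
    s = x * (g - y0)
    # Locally weighted least squares for each basis weight.
    w = (psi * (s * f_target)[:, None]).sum(0) / ((psi * (s**2)[:, None]).sum(0) + 1e-10)
    return dict(w=w, c=c, h=h, y0=y0, g=g, tau=tau,
                alpha_z=alpha_z, beta_z=beta_z, alpha_x=alpha_x)

def rollout_dmp(p, dt, n_steps):
    """Integrate the learned DMP forward with explicit Euler steps."""
    y, yd, x = p["y0"], 0.0, 1.0
    traj = []
    for _ in range(n_steps):
        psi = np.exp(-p["h"] * (x - p["c"])**2)
        f = psi @ p["w"] / (psi.sum() + 1e-10) * x * (p["g"] - p["y0"])
        ydd = (p["alpha_z"] * (p["beta_z"] * (p["g"] - y) - p["tau"] * yd) + f) / p["tau"]**2
        yd += ydd * dt
        y += yd * dt
        x += -p["alpha_x"] * x / p["tau"] * dt
        traj.append(y)
    return np.array(traj)

# Demonstration: half a sine wave as a stand-in for a captured joint trajectory.
dt = 0.01
demo = np.sin(np.linspace(0, np.pi / 2, 200))
dmp = train_dmp(demo, dt)
repro = rollout_dmp(dmp, dt, 200)
print("final demo value %.3f, reproduced %.3f" % (demo[-1], repro[-1]))
```
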
Reconfiguration of a Climbing Robot in an All-Terrain Hexapod Robot

This work presents the reconfiguration of a previous climbing robot into an all-terrain robot for applications in outdoor environments. The original robot is a six-legged climbing robot for high payloads. This robot used special electromagnetic feet in order to support itself on vertical ferromagnetic walls to carry out specific tasks. The reconfigured all-terrain hexapod robot will be able to perform different applications on the ground, for example as an inspection platform for humanitarian demining tasks. In this case, the reconfigured hexapod robot will carry a scanning manipulator arm with a specific metal detector as end-effector. With the implementation of the scanning manipulator on the hexapod robot, several tasks involving the search and localisation of antipersonnel mines can be carried out. The robot legs have a SCARA configuration, which allows low energy consumption when the robot performs trajectories on quasi-flat terrain.

Lisbeth Mena, Héctor Montes, Roemi Fernández, Javier Sarria, Manuel Armada
Review of Control Strategies for Lower Limb Prostheses

Each year thousands of people lose their lower limbs, mainly due to three causes: wars, accidents and vascular diseases. The development of prostheses is crucial to improving the quality of millions of people's lives by restoring their mobility. Lower limb prostheses can be divided into three major groups: passive, semi-active or variable damping, and powered or intelligent. This contribution provides a literature review of the principal control strategies used in lower limb prostheses, i.e., the controllers used in energetically powered transfemoral and transtibial prostheses. We present a comparison across the reviewed literature and discuss the future trends of this important field. It is concluded that the use of bio-inspired concepts and continuous control, combined with the other control approaches, can be crucial in the improvement of prosthesis controllers, enhancing the quality of amputees' lives.

César Ferreira, Luis Paulo Reis, Cristina P. Santos

Machine Learning in Robotics

Frontmatter
Visual Inspection of Vessels by Means of a Micro-Aerial Vehicle: An Artificial Neural Network Approach for Corrosion Detection

Periodic visual inspection of the different surfaces of a vessel hull is typically performed by trained surveyors at great cost, both in time and in economic terms. Assisting them during the inspection process by means of mechanisms capable of automatic or semi-automatic defect detection would certainly decrease the inspection cost. This paper describes a defect detection approach comprising: (1) a Micro-Aerial Vehicle (MAV) which is used to collect images of the surfaces under inspection, particularly focusing on remote areas to which the surveyor has no visual access; and (2) a coating breakdown/corrosion detector based on a 3-layer feed-forward artificial neural network. The success of the classification process depends not only on the defect detector but also on a number of assistance functions provided by the control architecture of the aerial platform, whose aim is to improve picture quality. Both aspects are described along the different sections of the paper, as well as the classification performance attained.

Alberto Ortiz, Francisco Bonnin-Pascual, Emilio Garcia-Fidalgo, Joan P. Company
Analyzing the Relevance of Features for a Social Navigation Task

Robot navigation in human environments is an active research area that poses serious challenges in both robot perception and actuation. Among them, social navigation and human-awareness have gained a lot of attention in recent years due to their important role in human safety and robot acceptance. Several approaches have been proposed; learning by demonstration stands as one of the most used approaches for estimating the insights of human social interactions. However, the features used to model the person-robot interaction are typically assumed to be given. It is very usual to consider general features like robot velocity, acceleration or distance to the persons, but there are no studies on the criteria used for such feature selection. In this paper, we employ a supervised learning approach to analyze the most important features that might take part in the human-robot interaction during a robot social navigation task. To this end, different subsets of features are employed with an AdaBoost classifier and its classification accuracy is compared with that of humans in a social navigation experimental setup. The analysis shows how it is very important not only to consider the robot-person relative poses and velocities, but also to recognize the particular social situation.

Rafael Ramon-Vigo, Noe Perez-Higueras, Fernando Caballero, Luis Merino
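
As a generic illustration of the workflow the abstract relies on — training an AdaBoost classifier on candidate interaction features and inspecting which features carry the most weight — here is a hypothetical scikit-learn sketch; the synthetic features, labels and noise level are invented stand-ins for the real robot-person data.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for social-navigation features: relative distance,
# relative speed, robot velocity, and a categorical "social situation" flag.
rng = np.random.default_rng(1)
n = 2000
distance = rng.uniform(0.2, 5.0, n)
rel_speed = rng.normal(0.0, 0.6, n)
robot_vel = rng.uniform(0.0, 1.2, n)
situation = rng.integers(0, 3, n)             # e.g. passing / crossing / waiting
X = np.column_stack([distance, rel_speed, robot_vel, situation])

# Hypothetical label: "socially acceptable" depends mostly on distance and situation.
y = ((distance > 1.0) & ~((situation == 2) & (robot_vel > 0.8))).astype(int)
y ^= (rng.random(n) < 0.05)                   # 5% label noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

print("test accuracy: %.3f" % clf.score(X_te, y_te))
for name, imp in zip(["distance", "rel_speed", "robot_vel", "situation"],
                     clf.feature_importances_):
    print("%-10s relevance %.3f" % (name, imp))
```
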
Decision-Theoretic Planning with Person Trajectory Prediction for Social Navigation

Robots navigating in a social way should reason about people intentions when acting. For instance, in applications like robot guidance or meeting with a person, the robot has to consider the goals of the people. Intentions are inherently non-observable, and thus we propose Partially Observable Markov Decision Processes (POMDPs) as a decision-making tool for these applications. One of the issues with POMDPs is that the prediction models are usually handcrafted. In this paper, we use machine learning techniques to build prediction models from observations. A novel technique is employed to discover points of interest (goals) in the environment, and a variant of Growing Hidden Markov Models (GHMMs) is used to learn the transition probabilities of the POMDP. The approach is applied to an autonomous telepresence robot.

Ignacio Pérez-Hurtado, Jesús Capitán, Fernando Caballero, Luis Merino
Influence of Positive Instances on Multiple Instance Support Vector Machines

This work studies the influence of the percentage of positive instances in positive bags on the performance of multiple instance learning algorithms using support vector machines. There are several studies that compare the performance of different types of multiple instance learning algorithms on different datasets, and the performance of these algorithms with their supervised learning counterparts. Nonetheless, none of them study the influence of having a low or high percentage of positive instances in the data that the classifiers are using to learn. Therefore, we have created a new image dataset with different percentages of positive instances from a dataset for pedestrian detection. Experimental results on the performance of the mi-SVM and MI-SVM algorithms on an image annotation task are presented. The results show that higher percentages of positive instances increase the overall accuracy of classifiers based on the maximum bag margin formulation.

Nuno Barroso Monteiro, João Pedro Barreto, José Gaspar
A Data Mining Approach to Predict Falls in Humanoid Robot Locomotion

The inclusion of perceptual information in the operation of a dynamic robot (interacting with its environment) can provide valuable insight about its environment and increase the robustness of its behaviour. In this regard, the concept of Associative Skill Memories (ASMs) has provided a great contribution towards an effective and practical use of sensor data, under a simple and intuitive framework [2, 13]. Inspired by [2], this paper presents a data mining solution to the fall prediction problem in humanoid biped robotic locomotion. Sensor data from a large number of simulations was recorded and four data mining algorithms were applied with the aim of creating a classifier that properly identifies failure conditions. Using Support Vector Machines on top of sensor data from a large number of simulation trials, it was possible to build an accurate and reliable offline fall predictor, achieving accuracy, sensitivity and specificity values up to 95.6%, 96.3% and 94.5%, respectively.

João André, Brígida Mónica Faria, Cristina Santos, Luís Paulo Reis
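
A hedged sketch of the kind of offline pipeline described above — an SVM classifier trained on features extracted from windows of simulated sensor data, evaluated with accuracy, sensitivity and specificity — is shown below; the synthetic "sensor window" features and their distributions are invented for the example.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Synthetic "sensor windows": mean trunk pitch, gyro spread and ZMP error.
rng = np.random.default_rng(2)
n = 3000
falls = rng.random(n) < 0.3
pitch_mean = rng.normal(np.where(falls, 0.35, 0.05), 0.08)
gyro_std = rng.normal(np.where(falls, 1.5, 0.6), 0.25)
zmp_err = rng.normal(np.where(falls, 0.09, 0.03), 0.015)
X = np.column_stack([pitch_mean, gyro_std, zmp_err])
y = falls.astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print("accuracy    %.3f" % ((tp + tn) / len(y_te)))
print("sensitivity %.3f" % (tp / (tp + fn)))   # recall on fall windows
print("specificity %.3f" % (tn / (tn + fp)))   # recall on stable windows
```
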

Rehabilitation and Assistive Robotics

Frontmatter
User Intention Driven Adaptive Gait Assistance Using a Wearable Exoskeleton

A user intention based rehabilitation strategy for a lower-limb wearable robot is proposed and evaluated. The control strategy, which involves monitoring the human-orthosis interaction torques, determines the gait initiation instant and modifies orthosis operation for gait assistance, when needed. Orthosis operation is classified as assistive or resistive as a function of its evolution with respect to a normal gait pattern. The control algorithm relies on the adaptation of the joints' stiffness as a function of their interaction torques and their deviation from the desired trajectories. An average of recorded gaits obtained from healthy subjects is used as reference input. The objective of this work is to develop a control strategy that can trigger gait initiation from the user's intention and maintain dynamic stability, using an efficient real-time stiffness adaptation for multiple joints while simultaneously maintaining their synchronization. The algorithm has been tested with five healthy subjects, showing its efficient behavior in initiating the gait and maintaining equilibrium while walking in the presence of external forces. The work is performed as a preliminary study to assist patients suffering from incomplete spinal cord injury and stroke.

Vijaykumar Rajasekaran, Joan Aranda, Alicia Casals
Control of the E2REBOT Platform for Upper Limb Rehabilitation in Patients with Neuromotor Impairment

In this paper, the most significant aspects of the new robotic platform E2REBOT, for active assistance in upper-limb rehabilitation for people with neuromotor impairment, are presented. Special emphasis is placed on the characteristics of its control architecture, designed on a three-level model, one level of which implements a haptic impedance controller developed according to the “assist as needed” paradigm, seeking to dynamically adjust the level of assistance to the current situation of the patient in order to improve the results of the therapy. The two therapy modes that the platform supports are described, highlighting the behavior of the control system in each case and describing the criteria used to adapt the behavior of the robot. Finally, we describe the ability of the system to automatically record kinematic and dynamic parameters during the execution of therapies, and the availability of a management environment for exploiting these data as a tool for supporting the rehabilitation tasks.

Juan-Carlos Fraile, Javier Pérez-Turiel, Pablo Viñas, Rubén Alonso, Alejandro Cuadrado, Laureano Ayuso, Francisco García-Bravo, Felix Nieto, Laurentiu Mihai, Manuel Franco-Martin
Design and Development of a Pneumatic Robot for Neurorehabilitation Therapies

This paper presents a new robotic system for upper limb rehabilitation. It is designed to assist the upper limb in therapies for both sitting and supine positions, helping patients to carry out the required movements when they cannot perform them. In the first part of the paper, the mechanical design and the development of the first prototype are presented in detail. In the second part, a new control strategy that modifies the behavior of the rehabilitation robot according to different potential and force fields is presented. Then, some experimental results on the performance of the implemented control with healthy subjects are reported.

Jorge A. Díez, Francisco J. Badesa, Luis D. Lledó, José M. Sabater, Nicolás García-Aracil, Isabel Beltrán, Ángela Bernabeu
An Active Knee Orthosis for the Physical Therapy of Neurological Disorders

This paper presents the design of a new robotic orthotic solution aimed at improving the rehabilitation of a number of neurological disorders (Multiple Sclerosis, Post-Polio and Stroke). These neurological disorders are the most expensive for the European Health Systems, and the personalization of the therapy will contribute to a 47% cost reduction. Most orthotic devices have been evaluated as an aid to in-hospital training and rehabilitation in patients with motor disorders of various origins. The advancement of technology opens the possibility of new active orthoses able to improve function in the usual environment of the patient, providing quality-of-life benefits beyond state-of-the-art devices. The active knee orthosis aims to serve as a basis to justify the prescription and adaptation of robotic orthoses in patients with impaired gait resulting from neurological processes.

Elena Garcia, Daniel Sanz-Merodio, Manuel Cestari, Manuel Perez, Juan Sancho

Robotic Applications in Art and Architecture

Frontmatter
LSA Portraiture Robot

This paper describes the development of an application that allows an ABB robot arm to automatically draw portraits of people. The Portraiture Robot draws a picture of a human face on paper. The developed system consists of 4 steps: (i) image acquisition through a webcam, (ii) image processing to retrieve the contours and features of the person's face, (iii) vectorization of the coordinates in the image plane, and (iv) conversion of the coordinates to the RAPID programming language. To obtain only the person's face, background subtraction is performed, and filtering techniques are used to extract only the necessary information from the image, namely the features and contours of the person's face. To convert these points into x, y coordinates, the contours are vectorised and written to a file according to a defined protocol, which allows a program for the robot to be created. The developed application processes all the blocks listed above in real time and in a robust manner, having the ability to adapt to any environment and allowing continued use. The work was validated through participation in the 2014 Portuguese Robotics Open, and in an ISEP exhibition that took place in Maia, always with good results.

Bruno Rodrigues, Eduardo Cruz, André Dias, Manuel F. Silva
Human Interaction-Oriented Robotic Form Generation
Reimagining Architectural Robotics Through the Lens of Human Experience

Within the discipline of architecture, the exploration and integration of robotics has recently become an area of rapid development and investment. But with the current majority of architectural robotics research focused primarily around the realms of digital fabrication and biologic form/material optimization, there are few examples of direct translation from human generated data to form and processes, particularly as it pertains to the human experience of, and the interaction with architectural artifacts. Through a series of three case studies each building upon the previous, this paper investigates how the interconnection of secondary, smaller data harvesting/translating robotic systems in collaboration with larger industrial systems can be integrated within the conceptual design workflow to allow for the creation of unique/interactive tools for the materialization of human interaction through design, robotic control, and fabrication.

Andrew Wit, Daniel Eisinger, Steven Putt
Robot-Aided Interactive Design for Wind Tunnel Experiments

The objective of this study is to investigate the effect of architectural geometry and materiality on airflow around buildings. For this purpose it is relevant to look for interactive design and analysis platforms that enable the analysis of architectural form and material variations while promoting the participation of designers in the analysis process. Today wind tunnel experiments are mostly deployed for design post-rationalization purposes, complicating the interaction between designers and the experimental environment, and constraining the number of design tests to be performed. The following research proposes to collapse the modeling and sensing processes within the wind tunnel with the aid of a robotic arm, to enable a real time design feedback informed by airflow analysis. Building geometry and surface studies have been conducted aided by robotic modeling and sensing, in a low speed and turbulence open circuit wind tunnel for a single building array and street canyon configuration. The recorded velocity profile variations reveal that mean flow statistics are sensitive to the texture variations.

Maider Llaguno Munitxa

Simulation and Competitions in Robotics

Frontmatter
A Coordinated Team of Agents to Solve Mazes

Mazes have famously been chosen as a great challenge for robots, either real or virtual, to solve, where agents have to explore the maze and fulfil goals. Mazes can be explored with greater speed by using a group of agents, as opposed to a single-agent system. There is, however, a greater degree of complexity in the implementation of a distributed team of agents that can coordinate to complete their tasks faster and more efficiently. This paper explores the CiberMouse competition problem, where a team of virtual agents needs to complete tasks within an unknown maze with as much efficiency as possible. Our solution has shown great results in the challenge and won the CiberMouse 2015 competition. The team can solve many complex mazes in a smart and mostly collision-free manner. Our agents struggle with very tight paths, but compensate by having flexible high-level behaviours which allow them efficient maze exploration.

David Simões, Rui Brás, Nuno Lau, Artur Pereira

Social Robotics: Intelligent and Adaptable AAL Systems

Frontmatter
RFID-Based People Detection for Human-Robot Interaction

This paper discusses the use of off-the-shelf Radio Frequency Identification (RFID) detection as complementary technology to the localization of people (detection and localization relative to the robot) in social robotics scenarios. A novel model for the detection of passive RFID tags is proposed, involving the estimation in real time of a measure of the probability of the tag being detected. The method estimates the location of the tag relative to the reader with an accuracy suitable for a wide range of human-robot interactions and social robotics applications.

Duarte Lopes Gameiro, João Silva Sequeira
Gaze Tracing in a Bounded Log-Spherical Space for Artificial Attention Systems

Human gaze is one of the most important cues for social robotics due to its embedded intention information. Discovering the location or the object that an interlocutor is staring at gives the machine some insight to perform the correct attentional behaviour. This work presents a fast voxel traversal algorithm for estimating the potential locations that a human is gazing at. Given a 3D occupancy map in log-spherical coordinates and the gaze vector, we evaluate the regions that are relevant for attention by computing the set of intersected voxels between an arbitrary gaze ray in 3D space and a log-spherical bounded section defined by $$\rho \in (\rho_{min},\rho_{max}),\ \theta \in (\theta_{min},\theta_{max}),\ \phi \in (\phi_{min},\phi_{max})$$. The first intersected voxel is computed in closed form and the rest are obtained by binary search, guaranteeing no repetitions in the intersected set. The proposed method is motivated and validated within a human-robot interaction application: gaze tracing for artificial attention systems.

Beatriz Oliveira, Pablo Lanillos, João Filipe Ferreira
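
To make the bookkeeping concrete, the sketch below maps a 3D point to voxel indices of a bounded log-spherical grid and collects the voxels visited by a gaze ray. Note that it traverses the ray by dense sampling, which is a simplification of the paper's closed-form first intersection plus binary search; all bounds and resolutions are arbitrary assumptions.

```python
import numpy as np

# Hypothetical bounded log-spherical grid (radii in metres, angles in radians).
RHO_MIN, RHO_MAX = 0.3, 8.0
THETA_MIN, THETA_MAX = -np.pi, np.pi            # azimuth
PHI_MIN, PHI_MAX = -np.pi / 3, np.pi / 3        # elevation
N_RHO, N_THETA, N_PHI = 32, 64, 24

def point_to_voxel(p):
    """Return (i_rho, i_theta, i_phi) of point p, or None if out of bounds."""
    x, y, z = p
    rho = np.sqrt(x * x + y * y + z * z)
    theta = np.arctan2(y, x)
    phi = np.arcsin(np.clip(z / rho, -1.0, 1.0)) if rho > 0 else 0.0
    if not (RHO_MIN <= rho < RHO_MAX and THETA_MIN <= theta < THETA_MAX
            and PHI_MIN <= phi < PHI_MAX):
        return None
    # Radius is binned logarithmically, angles linearly.
    i_rho = int(N_RHO * np.log(rho / RHO_MIN) / np.log(RHO_MAX / RHO_MIN))
    i_theta = int(N_THETA * (theta - THETA_MIN) / (THETA_MAX - THETA_MIN))
    i_phi = int(N_PHI * (phi - PHI_MIN) / (PHI_MAX - PHI_MIN))
    return i_rho, i_theta, i_phi

def gaze_ray_voxels(origin, direction, step=0.02, max_range=10.0):
    """Voxels intersected by a gaze ray, approximated by dense sampling."""
    direction = np.asarray(direction, float)
    direction /= np.linalg.norm(direction)
    visited, seen = [], set()
    for s in np.arange(0.0, max_range, step):
        v = point_to_voxel(np.asarray(origin) + s * direction)
        if v is not None and v not in seen:
            seen.add(v)
            visited.append(v)
    return visited

# Example: a gaze ray leaving the head position towards the lower right.
print(gaze_ray_voxels(origin=(0.0, 0.0, 1.6), direction=(1.0, 0.3, -0.1))[:5])
```
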

Surgical Robotics

Frontmatter
Design of a Realistic Robotic Head Based on Action Coding System

In this paper, the development of a robotic head able to move and show different emotions is addressed. The movement and emotion generation system has been designed following the human facial musculature. Starting from the Facial Action Coding System (FACS), we have built a 26 action units model that is able to produce the most relevant movements and emotions of a real human head. The whole work has been carried out in two steps. In the first step, a mechanical skeleton has been designed and built, in which the different actuators have been inserted. In the second step, a two-layered silicone skin has been manufactured, on which the different actuators have been attached following the real muscle insertions, in order to perform the different movements and gestures. The developed head has been integrated in a high-level behavioural architecture, and pilot experiments with 10 users regarding emotion recognition and mimicking have been carried out.

Samuel Marcos, Roberto Pinillos, Jaime Gómez García-Bermejo, Eduardo Zalama
A Comparison of Robot Interaction with Tactile Gaming Console Stimulation in Clinical Applications

Technological advancements in recent years have encouraged a lot of research focus on robot interaction with individuals with intellectual disability, especially children with Autism Spectrum Disorders (ASD). However, despite the promising advancements shown by these investigations, the use of interactive robots for the rehabilitation of such individuals can be questioned on various aspects, e.g., is the effectiveness of interaction therapy due to the robot itself or to the sensory stimulations? Only a few studies have made any significant comparison of remedial therapy using interactive robots against non-robotic visual stimulations. In the proposed research, the authors explore this idea by comparing the response to robotic interactions with the stimulations produced by a tactile gaming console, among individuals with profound and multiple learning disability (PMLD). The results show that robot interactions are more effective, but stimulations produced by tactile gaming consoles can serve as a significant complementary tool for the therapeutic benefit of patients.

Jainendra Shukla, Julián Cristiano, Laia Anguera, Jaume Vergés-Llahí, Domènec Puig

Urban Robotics

Frontmatter
Real-time Application for Monitoring Human Daily Activity and Risk Situations in Robot-Assisted Living

In this work, we present a real-time application in the scope of human daily activity recognition for robot-assisted living, as an extension of our previous work [1]. We implemented our approach using the Robot Operating System (ROS) environment, combining different modules to enable a robot to perceive the environment using different sensor modalities. Thus, the robot can move around, detect, track and follow a person to monitor daily activities wherever the person is. We focus our attention mainly on the robotic application by integrating several ROS modules for navigation, activity recognition and decision making. Reported results show that our framework accurately recognizes human activities in a real-time application, triggering proper robot (re)actions, including spoken feedback for warnings and/or appropriate robot navigation tasks. The results evidence the potential of our approach for robot-assisted living applications.

Mário Vieira, Diego R. Faria, Urbano Nunes
Challenges in the Design of Laparoscopic Tools

The need to minimize trauma in surgical interventions has led to a continuous evolution of surgical techniques. The robotization of minimally invasive surgeries (MIS) through robotized instruments, provided with 2 or 3 degrees of freedom, aims to increase dexterity, accuracy… and thus assist the surgeons. This work presents the challenges faced during the development of a surgical instrument, drawn from the work carried out in the design and implementation of a complete surgical robotic system. After an overview of the surgical instruments associated with the alternative techniques in MIS, the process of designing laparoscopic instruments for the developed robotic system is described. Our approach focuses on the technological challenges of achieving a user-friendly laparoscopy that is affordable for hospitals. These include the complexity of designing small-sized tools which match the surgical requirements and introduce additional features, such as haptic feedback. In addition, we explain the non-technological obstacles overcome to satisfy the commercialization requirements. The huge number of patents in this field acts as a spider web, which led us to seek novelty. Although specific parts of the robotic system were not the core of our project, we needed to adapt their design and obtain our own patents to guarantee that the complete robotic system was free of patent infringement. On the other hand, complex regulatory procedures make the whole commercialization process protracted and tedious. Finally, we present some of our ongoing research to improve the performance of this kind of robot-assisted surgery, as well as to extend it to other surgical fields.

J. Amat, A. Casals, E. Bergés, A. Avilés

Visual Maps in Robotics

Frontmatter
Ontologies Applied to Surgical Robotics

The paper presents current efforts and methods proposed by the research community to represent knowledge, in a machine readable format, for surgical robotics. Ontologies from the medical field are surveyed, to be aligned with robotic ontologies in order to obtain proper surgical robotic ontologies. The latter are valuable tools that combine surgical protocols, machine protocols, anatomical ontologies, and medical image data. An orthopaedic robot surgical ontology for knowledge representation is presented and briefly discussed. The ontology-based system uses dedicated algorithms and devices, and merges existing medical and robotic ontologies to obtain a common ontology framework.

P. J. S. Gonçalves
Low Cost, Robust and Real Time System for Detecting and Tracking Moving Objects to Automate Cargo Handling in Port Terminals

This paper addresses the problem of detecting and tracking moving objects for autonomous cargo handling in port terminals using a perception system whose input data comes from a single-layer laser scanner. A computationally low-cost and robust Detection and Tracking of Moving Objects (DATMO) algorithm is presented, to be used in autonomous guided vehicles and autonomous trucks for efficient transportation of cargo in ports. The method first detects moving objects and then tracks them, taking into account that in port terminals the structure of the environment is formed by containers and that the moving objects can be trucks, AGVs, cars, straddle carriers and people, among others. Two approaches of the DATMO system have been tested: the first one is oriented towards detecting moving obstacles and focuses on tracking and filtering those detections, while the second one focuses on keeping targets when no detections are provided. The system has been evaluated with real data obtained in the CTT port terminal in Hengelo, the Netherlands. Both methods have been tested on the dataset with good results in tracking moving objects.

Victor Vaquero, Ely Repiso, Alberto Sanfeliu, John Vissers, Maurice Kwakkernaat
Observation Functions in an Information Theoretic Approach for Scheduling Pan-Tilt-Zoom Cameras in Multi-target Tracking Applications

The vast streams of data created by camera networks make browsing all the data relying only on human resources unfeasible. Automation is required for detecting and tracking multiple targets by using multiple cooperating cameras. In order to effectively track multiple targets, autonomous active camera networks require adequate scheduling and control methodologies. Scheduling algorithms assign visual targets to cameras. Control methodologies set precise orientation and zoom references for the cameras. We take an approach based on information theory to solve the scheduling and control problems. Each observable target in the environment corresponds to a source of information for which an observation corresponds to a reduction of the uncertainty and, as such, a gain in information. In this work we focus on the effect of observation functions within the information gain. Observation functions are shown to help avoid extreme zoom levels while keeping information gains smooth.

Tiago Marques, Luka Lukic, José Gaspar
Nearest Position Estimation Using Omnidirectional Images and Global Appearance Descriptors

This work presents an algorithm to estimate the position and orientation of a mobile robot using only the visual information provided by a catadioptric system mounted on the robot. Each omnidirectional scene is described with a single global appearance descriptor. We have developed a description method based on the Radon transform. Our localization method compares the visual information captured by the robot from an unknown position with the visual information stored in a previously built map. As a result, it estimates the nearest position in this map and the orientation of the robot. We have tested all the algorithms with a virtual database that we have built. This database is composed of a set of omnidirectional images captured from different points of an indoor virtual environment. The experiments have allowed us to tune the main parameters, and the results show the effectiveness and robustness of our method.

Yerai Berenguer, Luis Payá, Adrián Peidró, Arturo Gil, Oscar Reinoso

Visual Perception for Autonomous Robots

Frontmatter
Accurate Map-Based RGB-D SLAM for Mobile Robots

In this paper we present and evaluate a map-based RGB-D SLAM (Simultaneous Localization and Mapping) system employing a novel idea of combining efficient visual odometry with a persistent map of 3D point features used to jointly optimize the sensor (robot) poses and the feature positions. The optimization problem is represented as a factor graph. The SLAM system consists of a front-end that tracks the sensor frame-by-frame, extracts point features, and associates them with the map, and a back-end that manages and optimizes the map. We propose a robust approach to data association, which combines efficient selection of candidate features from the map, matching of visual descriptors guided by the sensor pose prediction from visual odometry, and verification of the associations in both the image plane and 3D space. The improved accuracy and robustness are demonstrated on publicly available data sets.

Dominik Belter, Michał Nowicki, Piotr Skrzypczyński
Onboard Robust Person Detection and Tracking for Domestic Service Robots

Domestic assistance for the elderly and impaired people is one of the biggest upcoming challenges of our society. Consequently, in-home care through domestic service robots is identified as one of the most important application areas of robotics research. Assistive tasks may range from visitor reception at the door to catering for the owner's small daily necessities within a house. Since most of these tasks require the robot to interact directly with humans, a predominant robot functionality is to detect and track humans in real time: either the owner of the robot or visitors at home, or both. In this article we present a robust method for such a functionality that combines depth-based segmentation and visual detection. The robustness of our method lies in its capability to not only identify partially occluded humans (e.g., with only the torso visible) but also to do so in varying lighting conditions. We thoroughly validate our method through extensive experiments on real robot datasets and comparisons with the ground truth. The datasets were collected in a home-like environment set up within the context of the RoboCup@Home and RoCKIn@Home competitions.

David Sanz, Aamir Ahmad, Pedro Lima
Visual-Inertial Based Autonomous Navigation

This paper presents an autonomous navigation and position estimation framework which enables an Unmanned Aerial Vehicle (UAV) to safely navigate in indoor environments. This system uses both the on-board Inertial Measurement Unit (IMU) and the front camera of an AR.Drone platform, and a laptop computer where all the data is processed. The system is composed of the following modules: navigation, door detection and position estimation. For the navigation part, the system relies on the detection of the vanishing point using the Hough transform for wall detection and avoidance. The door detection part relies not only on the detection of the contours but also on the recesses of each door, using the latter as the main detector and the former as an additional validation for higher precision. For the position estimation part, the system relies on pre-coded information about the floor on which the drone is navigating, and on the velocity of the drone provided by its IMU. Several flight experiments show that the drone is able to safely navigate in corridors while detecting evident doors and estimating its position. The developed navigation and door detection methods are reliable and enable a UAV to fly without the need for human intervention.

Francisco de Babo Martins, Luis F. Teixeira, Rui Nóbrega
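
The navigation module is said to detect the corridor's vanishing point with the Hough transform. A self-contained approximation of that single step — a synthetic corridor image, Canny edges, cv2.HoughLines, and a vote over pairwise line intersections — is sketched below; it is only an illustration of the principle, not the authors' pipeline.

```python
import numpy as np
import cv2

# Synthetic "corridor" image: four lines converging to a known vanishing point.
img = np.zeros((480, 640), np.uint8)
vp_true = (320, 240)
for end in [(0, 0), (639, 0), (0, 479), (639, 479)]:
    cv2.line(img, end, vp_true, 255, 2)

edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=120)
assert lines is not None, "no lines detected in the synthetic image"

def intersect(l1, l2):
    """Intersection of two lines given in (rho, theta) Hough form."""
    (r1, t1), (r2, t2) = l1, l2
    A = np.array([[np.cos(t1), np.sin(t1)], [np.cos(t2), np.sin(t2)]])
    if abs(np.linalg.det(A)) < 1e-6:          # near-parallel lines
        return None
    x, y = np.linalg.solve(A, np.array([r1, r2]))
    return x, y

# Collect pairwise intersections and take their median as the vanishing point.
pts = []
hough = [l[0] for l in lines]
for i in range(len(hough)):
    for j in range(i + 1, len(hough)):
        p = intersect(hough[i], hough[j])
        if p is not None and 0 <= p[0] < 640 and 0 <= p[1] < 480:
            pts.append(p)
vp_est = np.median(np.array(pts), axis=0)
print("true vanishing point", vp_true, "estimated", vp_est)
```
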
Ball Detection for Robotic Soccer: A Real-Time RGB-D Approach

Robotic football competitions have encouraged participants to develop new ways of solving different problems in order to succeed. This article shows a different approach to ball detection and recognition by the robot, using a Kinect system. It exploits the capabilities of the depth camera in detecting and recognizing the ball during a football match. This is important because it makes it possible to avoid the noise that RGB cameras are subject to, for example lighting issues.

André Morais, Pedro Costa, José Lima
Real Time People Detection Combining Appearance and Depth Image Spaces Using Boosted Random Ferns

This paper presents a robust and real-time method for people detection in urban and crowded environments. Unlike other conventional methods, which either focus on single features or compute multiple and independent classifiers specialized in a particular feature space, the proposed approach creates a synergic combination of appearance and depth cues in a unique classifier. The core of our method is a Boosted Random Ferns classifier that automatically selects the most discriminative local binary features for both the appearance and depth image spaces. Based on this classifier, a fast and robust people detector which maintains high detection rates in spite of environmental changes is created. The proposed method has been validated on a challenging RGB-D database of people in urban scenarios and has been shown to outperform state-of-the-art approaches in spite of the difficult environment conditions. As a result, this method is of special interest for real-time robotic applications where people detection is a key matter, such as human-robot interaction or safe navigation of mobile robots.

Victor Vaquero, Michael Villamizar, Alberto Sanfeliu
Visual Localization Based on Quadtrees

Autonomous mobile robots move through their environment to perform the tasks for which they were programmed. The robot's proper operation largely depends on the quality of the self-localization information used when globally navigating in its environment. This paper describes a method for maintaining a self-localization probability distribution over a set of states which represents the robot position. The novel feature of this approach is to represent the state space as a quadtree that dynamically evolves to use the minimum set of states without loss of accuracy. We demonstrate the benefits of this approach by localizing a robot in the RoboCup SPL environment using the information provided by its camera.

Francisco Martín
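
A toy, hypothetical rendition of the core idea — a localization belief stored in a quadtree whose cells are subdivided only where probability mass concentrates — might look as follows; the Gaussian "belief", field size and thresholds are invented for illustration.

```python
import numpy as np

class QuadNode:
    """Square region of the field holding a share of the localization belief."""
    def __init__(self, x, y, size, prob):
        self.x, self.y, self.size, self.prob = x, y, size, prob
        self.children = []

    def refine(self, belief, split_thr=0.05, min_size=0.25):
        """Split cells that hold a lot of probability; keep coarse cells elsewhere."""
        if self.prob < split_thr or self.size <= min_size:
            return
        half = self.size / 2.0
        for dx in (0.0, half):
            for dy in (0.0, half):
                cx, cy = self.x + dx + half / 2, self.y + dy + half / 2
                self.children.append(
                    QuadNode(self.x + dx, self.y + dy, half, belief(cx, cy) * half**2))
        # Renormalize children to carry exactly this node's probability mass.
        total = sum(c.prob for c in self.children) or 1e-12
        for c in self.children:
            c.prob *= self.prob / total
            c.refine(belief, split_thr, min_size)

    def leaves(self):
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

# Toy belief: a Gaussian bump around the (hypothetical) true robot position.
def belief(x, y, mx=3.2, my=1.1, s=0.4):
    return np.exp(-((x - mx)**2 + (y - my)**2) / (2 * s**2))

root = QuadNode(0.0, 0.0, 8.0, 1.0)       # 8 m x 8 m area, all mass in the root
root.refine(belief)
cells = root.leaves()
best = max(cells, key=lambda c: c.prob)
print("%d leaf cells; most likely cell at (%.2f, %.2f), size %.2f m" %
      (len(cells), best.x + best.size / 2, best.y + best.size / 2, best.size))
```
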
A Simple, Efficient, and Scalable Behavior-Based Architecture for Robotic Applications

In the robotics field, behavior-based architectures are software systems that define how complex robot behaviors are decomposed into single units, how they access sensors and motors, and the mechanisms for communication, monitoring, and setup. This paper describes the main ideas of a simple, efficient, and scalable software architecture for robotic applications. Using a convenient design of the basic building blocks and their interaction, developers can face complex applications without any limitations. This architecture has proven to be convenient for different applications like robot soccer and therapy for Alzheimer patients.

Francisco Martín, Carlos E. Aguero, José M. Cañas
Analysis and Evaluation of a Low-Cost Robotic Arm for @Home Competitions

This paper reviews the design, construction and performance of an affordable robotic arm with four degrees of freedom, based on an Arduino controller, in a home-like environment. The paper describes the kinematic design of our 4 DOF arm and the physical restrictions that this design imposes. We have also proposed two types of end-effectors to address two types of manipulation tasks: grasping objects and pushing different light switches. The arm was mounted on board the MYRABot platform and both were evaluated in the RoCKIn competition. This competition involves grasping and manipulation tasks that are also described in the paper. Comments on the results of the competition and their implications for further improvement of the robot are also provided.

Francisco J. Rodríguez Lera, Fernando Casado, Vicente Matellán Olivera, Francisco Martín Rico
Object Categorization from RGB-D Local Features and Bag of Words

Object categorization from robot perceptions has become one of the most well-known problems in robotics. How to select proper representations for these perceptions, especially when using RGB-D images, has received significant attention in recent years. We present in this paper an object categorization approach from RGB-D images. This approach is based on the BoW representation, and it allows the integration of any type of 3D local feature implemented in the Point Cloud Library. The experimentation performed on the challenging RGB-D Object dataset shows how competitive object categorization systems can be developed using this procedure.

Jesus Martínez-Gómez, Miguel Cazorla, Ismael García-Varea, Cristina Romero-González
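
Independently of which PCL feature is plugged in, the bag-of-words step itself is generic: cluster local descriptors into a visual vocabulary with k-means and represent each cloud or image as a normalized histogram of word occurrences. A rough sketch with random vectors standing in for real 3D local features (the 33-dimensional size merely mimics an FPFH-like descriptor):

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(descriptor_sets, n_words=50):
    """K-means codebook over all local descriptors from the training clouds."""
    all_desc = np.vstack(descriptor_sets)
    return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(all_desc)

def bow_descriptor(descriptors, vocabulary):
    """Normalized histogram of visual-word occurrences for one image/cloud."""
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-12)

# Random 33-dimensional vectors standing in for e.g. FPFH-like local features.
rng = np.random.default_rng(3)
train_sets = [rng.normal(size=(rng.integers(80, 150), 33)) for _ in range(20)]
vocab = build_vocabulary(train_sets, n_words=50)

query = rng.normal(size=(120, 33))
print("BoW descriptor length:", bow_descriptor(query, vocab).shape[0])
```
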
A Multisensor Based Approach Using Supervised Learning and Particle Filtering for People Detection and Tracking

People detection and tracking is an interesting skill for interactive social robots. Laser range finder (LRF) and vision-based approaches are the most common, although both present strengths and weaknesses. In this paper, a multisensor system to detect and track people in the proximity of a mobile robot is proposed. First, a supervised learning approach is used to recognize patterns of legs in the proximity of the robot using an LRF. After this, a tracking algorithm is developed using a particle filter and the observation model of the legs. Second, a Kinect sensor is used to carry out people detection and tracking. This second method uses a face detector in the color image, the color of the clothes and the depth information. The strengths and weaknesses of this second proposal are also discussed. In order to bring together the strengths of both sensors, a third algorithm is proposed, in which both laser and Kinect data are fused to detect and track people. Finally, the multisensor approach is experimentally evaluated in a real indoor environment. The multisensor system outperforms the single-sensor based approaches.

Eugenio Aguirre, Miguel García-Silvente, Daniel Pascual
Incremental Compact 3D Maps of Planar Patches from RGBD Points

RGBD sensors have opened the door to low cost perception capabilities for robots and to new approaches to the classic problems of self localization and environment mapping. The raw data coming from these sensors are typically huge clouds of 3D colored points, which are costly to manage. This paper describes preliminary work on an algorithm that incrementally builds compact and dense 3D maps of planar patches from the raw data of a mobile RGBD sensor. The algorithm runs iteratively and classifies the 3D points in the current sensor reading into three categories: close to an existing patch, already contained in one patch, and far from any patch. The first update the corresponding patch definition, while the last are clustered into new patches using RANSAC and SVD. A fusion step also merges 3D patches when needed. The algorithm has been experimentally validated in the Gazebo-5 simulator.
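To illustrate the classification-and-fitting idea only (thresholds, data and the full RANSAC loop of the paper are not reproduced), the sketch below assigns points either to an existing planar patch or to a new plane fitted by SVD.

# Sketch of the point-to-patch classification idea: points near an existing
# plane update it, distant points are fitted to a new plane with SVD.
# Thresholds and data are illustrative, not the paper's actual values.
import numpy as np


def fit_plane_svd(points):
    """Least-squares plane (normal, offset) through a point set via SVD."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, -normal @ centroid


def classify_points(points, planes, near_thresh=0.02):
    """Split points into those supporting an existing plane and the rest."""
    supported, remaining = [], []
    for p in points:
        dists = [abs(n @ p + d) for n, d in planes]
        (supported if dists and min(dists) < near_thresh else remaining).append(p)
    return np.array(supported), np.array(remaining)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    floor = np.c_[rng.uniform(0, 1, (100, 2)), rng.normal(0, 0.005, 100)]
    wall = np.c_[rng.uniform(0, 1, 100), rng.normal(1, 0.005, 100), rng.uniform(0, 1, 100)]
    planes = [fit_plane_svd(floor)]
    supported, remaining = classify_points(np.vstack([floor, wall]), planes)
    planes.append(fit_plane_svd(remaining))
    print("Points supporting existing patch:", len(supported))
    print("New patch normal:", np.round(planes[-1][0], 2))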

Juan Navarro, José M. Cañas
Computing Image Descriptors from Annotations Acquired from External Tools

Visual descriptors are widely used in several recognition and classification tasks in robotics. The main challenge for these tasks is to find a descriptor that represents the image content without losing representative information. Nowadays there exists a wide range of visual descriptors computed with computer vision techniques and different pooling strategies. This paper proposes a novel way of building image descriptors using an external tool, namely Clarifai, a remote web tool that automatically describes an input image with semantic tags; these tags are used to generate our descriptor. The descriptor generation procedure has been tested on the ViDRILO dataset, where it has been compared and merged with some well-known descriptors. Moreover, subset variable selection techniques have been evaluated. The experimental results show that our descriptor is competitive in classification tasks with other kinds of descriptors.
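As a hedged illustration of turning semantic tags into a fixed-length descriptor (the actual Clarifai API calls and the descriptor used in the paper are not reproduced), a minimal sketch could map tag confidences onto a fixed vocabulary; the vocabulary and the example annotation below are placeholders.

# Sketch of turning semantic tags (as returned by a web annotation tool such
# as Clarifai) into a fixed-length descriptor; the tag vocabulary and the
# example annotation are placeholders.
import numpy as np

VOCABULARY = ["corridor", "door", "desk", "chair", "screen", "window", "person", "floor"]


def tags_to_descriptor(tag_confidences, vocabulary=VOCABULARY):
    """Map {tag: confidence} pairs to one confidence value per vocabulary word."""
    descriptor = np.zeros(len(vocabulary))
    for i, word in enumerate(vocabulary):
        descriptor[i] = tag_confidences.get(word, 0.0)
    return descriptor


if __name__ == "__main__":
    # Hypothetical annotation of one office image.
    annotation = {"desk": 0.95, "chair": 0.91, "screen": 0.80, "floor": 0.55}
    print(tags_to_descriptor(annotation))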

Jose Carlos Rangel, Miguel Cazorla, Ismael García-Varea, Jesús Martínez-Gómez, Élisa Fromont, Marc Sebban
Keypoint Detection in RGB-D Images Using Binary Patterns

Detection of keypoints in an image is a crucial step in most registration and recognition tasks. The information encoded in RGB-D images can be redundant and, usually, only specific areas of the image are useful for the classification process. The process of identifying those relevant areas is known as keypoint detection. The use of keypoints can facilitate the following stages of the image processing pipeline by reducing the search space. To properly represent an image by means of a set of keypoints, properties like repeatability and distinctiveness have to be fulfilled. In this work, we propose a keypoint detection technique based on the Shape Binary Pattern (SBP) descriptor, which can be computed from RGB-D images. We rely on this descriptor to identify the most discriminative patterns, which are then used to detect the most relevant keypoints. Experiments on a well-known benchmark for 3D keypoint detection have been performed to assess our proposal.
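The Shape Binary Pattern itself is defined in the paper; the sketch below only illustrates the general idea of scoring depth-image pixels by the rarity of a neighbour-comparison binary pattern, with placeholder thresholds and synthetic data.

# Generic sketch of binary-pattern-based keypoint scoring on a depth image:
# each pixel gets an 8-bit pattern from comparisons with its neighbours, and
# pixels with rare patterns are kept as keypoints. This illustrates the idea
# only; it is not the paper's Shape Binary Pattern definition.
import numpy as np

OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]


def binary_patterns(depth):
    """8-bit neighbour-comparison code for every interior pixel."""
    h, w = depth.shape
    codes = np.zeros((h, w), dtype=np.int32)
    for bit, (dy, dx) in enumerate(OFFSETS):
        shifted = np.roll(np.roll(depth, dy, axis=0), dx, axis=1)
        codes |= (shifted > depth).astype(np.int32) << bit
    return codes[1:-1, 1:-1]


def rare_pattern_keypoints(depth, keep_fraction=0.05):
    """Keep pixels whose pattern frequency is lowest (most distinctive)."""
    codes = binary_patterns(depth)
    counts = np.bincount(codes.ravel(), minlength=256)
    rarity = 1.0 / (counts[codes] + 1.0)
    threshold = np.quantile(rarity, 1.0 - keep_fraction)
    ys, xs = np.nonzero(rarity >= threshold)
    return list(zip(ys + 1, xs + 1))  # offset back to full-image coordinates


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    depth = rng.normal(2.0, 0.01, size=(64, 64))
    depth[20:30, 20:30] += 0.5  # a raised box creates distinctive patterns
    print("Keypoints found:", len(rare_pattern_keypoints(depth)))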

Cristina Romero-González, Jesus Martínez-Gómez, Ismael García-Varea, Luis Rodríguez-Ruiz
Unsupervised Method to Remove Noisy and Redundant Images in Scene Recognition

Mobile robotics has achieved important progress and a notable level of maturity. Nevertheless, to increase the complexity of the tasks that mobile robots can perform in indoor environments, we need to provide them with an understanding of their surroundings. Scene recognition usually involves building image classifiers from training data; these classifiers work with features extracted from the images to recognize different categories and can later be used to label any image taken by the robot. The problem is that the training data used to recognize the scene might be redundant and noisy, significantly reducing the performance of the classifiers. To avoid this, we propose an unsupervised algorithm able to recognize when an image is unrepresentative, redundant or an outlier. We have tested our algorithm in real and difficult environments, achieving very promising results that take us a step closer to fully unsupervised scene recognition with high accuracy.
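The abstract does not spell out the selection criterion, so the following is only an assumed sketch of one plausible strategy: images whose feature vectors are too close to an already kept image are treated as redundant, and images far from every other image are treated as outliers; thresholds and features are placeholders.

# Hedged sketch of the filtering idea: an image is marked redundant when its
# feature vector is too close to one already kept, and marked an outlier when
# it is far from every other image. Thresholds and features are placeholders.
import numpy as np


def filter_images(features, redundant_thresh=0.5, outlier_thresh=3.0):
    """Return indices of kept images after removing redundant ones and outliers."""
    n = len(features)
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=2)
    # Distance to the nearest other image (diagonal masked out).
    nearest_other = np.partition(dists + np.eye(n) * 1e9, 0, axis=1)[:, 0]
    kept = []
    for i in range(n):
        if nearest_other[i] > outlier_thresh:
            continue  # outlier: far from every other image
        if any(dists[i, j] < redundant_thresh for j in kept):
            continue  # redundant with an already-kept image
        kept.append(i)
    return kept


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cluster = rng.normal(0.0, 0.1, size=(20, 8))     # many near-duplicates
    outlier = rng.normal(10.0, 0.1, size=(1, 8))     # one unrepresentative image
    kept = filter_images(np.vstack([cluster, outlier]))
    print("Kept", len(kept), "of 21 images")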

David Santos-Saavedra, Roberto Iglesias, Xose M. Pardo

16th Workshop on Physical Agents

Frontmatter
Procedural City Generation for Robotic Simulation

In robotics, simulation plays a fundamental role in testing models and techniques in a controlled environment prior to conducting experiments on real physical agents. In addition, some kinds of scenarios can be easily reproduced within a simulator, which is not always possible with a real robot. Building simulation environments, however, can be a tiresome and complex task. For robots operating in an urban environment, manually designing a city for testing navigation or localization algorithms can be prohibitive. As an alternative, in this work we propose the use of procedural graphic techniques to produce synthetic cities that can be employed within a robotic simulator. Experiments with the generated environments have been performed in a real simulation tool to assess the viability of the proposed approach.
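As a toy illustration of procedural generation (not the city generator proposed in the paper), the sketch below lays out a grid of streets and randomly sized building blocks as an ASCII map of the kind a simulator importer could consume.

# Toy procedural layout: a grid of streets with random building blocks, the
# kind of synthetic city map that could be fed to a simulator. This illustrates
# procedural generation only, not the generator described in the paper.
import random


def generate_city(width=40, height=20, street_every=6, seed=0):
    """Return an ASCII map: '.' streets, '#' buildings, ' ' empty lots."""
    random.seed(seed)
    grid = [[" "] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            if x % street_every == 0 or y % street_every == 0:
                grid[y][x] = "."          # street cell
            elif random.random() < 0.7:
                grid[y][x] = "#"          # building cell
    return "\n".join("".join(row) for row in grid)


if __name__ == "__main__":
    print(generate_city())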

Daniel González-Medina, Luis Rodríguez-Ruiz, Ismael García-Varea
A New Cognitive Architecture for Bidirectional Loop Closing

This paper presents a novel attention-based cognitive architecture for a social robot. The architecture aims to join perception and reasoning through a double interplay: the current task biases the perceptual process, whereas the perceived items determine the behaviours to be accomplished, considering the present context and role of the agent. The proposed architecture therefore represents a bidirectional solution to the problem of closing the perception-reasoning-action loop. The proposal is divided into two levels of performance, employing an Object-Based Visual Attention model as the perception system and a general purpose Planning Framework at the top deliberative level. The architecture has been tested in a real, unrestricted environment involving a real robot, time-varying tasks and daily life situations.

Antonio Jesús Palomino, Rebeca Marfil, Juan Pedro Bandera, Antonio Bandera
A Unified Internal Representation of the Outer World for Social Robotics

Enabling autonomous mobile manipulators to collaborate with people is a challenging research field with a wide range of applications. Collaboration means working with a partner to reach a common goal, and it involves performing both individual and joint actions with that partner. Human-robot collaboration requires at least two conditions to be efficient: a) a common plan, usually under-defined, for all involved partners; and b) for each partner, the capability to infer the intentions of the other in order to coordinate the common behavior. This is a hard problem for robotics, since people can change their minds about their envisaged goal or interrupt a task without giving legible reasons. Collaborative robots should also select their actions taking into account human-aware factors such as safety, reliability and comfort. Current robotic cognitive systems are usually limited in this respect, as they lack the rich dynamic representations and the flexible human-aware planning capabilities needed to succeed in these collaboration tasks. In this paper, we address this problem by proposing and discussing a deep hybrid representation, DSR, which is geometrically organized at several layers of abstraction (deep) and merges symbolic and geometric information (hybrid). This representation is part of a new agent-based robotics cognitive architecture called CORTEX. The agents that form part of CORTEX are in charge of high-level functionalities, reactive and deliberative, and share this representation among themselves. They keep it synchronized with the real world through sensor readings, and coherent with the internal domain knowledge by validating each update.
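The following is only a minimal sketch of what a hybrid symbolic-geometric world model node could look like; the class and attribute names are hypothetical and do not reproduce the actual DSR or CORTEX interfaces.

# Minimal sketch of a hybrid symbolic-geometric world model: each node carries
# a symbolic type plus attributes and, where it applies, a geometric pose
# relative to its parent. Names are illustrative, not the actual DSR API.
import numpy as np


class WorldNode:
    def __init__(self, name, symbolic_type, pose=None, attributes=None):
        self.name = name
        self.symbolic_type = symbolic_type          # e.g. "room", "person", "cup"
        self.pose = pose                            # 4x4 transform w.r.t. parent, or None
        self.attributes = attributes or {}          # symbolic facts, e.g. {"graspable": True}
        self.children = []

    def add_child(self, node):
        self.children.append(node)
        return node

    def global_pose(self, parent_pose=np.eye(4)):
        """Compose transforms down from the root to obtain the world pose."""
        local = self.pose if self.pose is not None else np.eye(4)
        return parent_pose @ local


if __name__ == "__main__":
    root = WorldNode("world", "root")
    room = root.add_child(WorldNode("kitchen", "room", pose=np.eye(4)))
    cup_pose = np.eye(4)
    cup_pose[:3, 3] = [1.2, 0.4, 0.9]
    cup = room.add_child(WorldNode("cup1", "cup", pose=cup_pose,
                                   attributes={"graspable": True}))
    print("Cup position in world frame:", cup.global_pose(room.global_pose())[:3, 3])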

Pablo Bustos, Luis J. Manso, Juan P. Bandera, Adrián Romero-Garcés, Luis V. Calderita, Rebeca Marfil, Antonio Bandera
A Navigation Agent for Mobile Manipulators

Robot navigation and manipulation in partially known indoor environments is usually organized as two complementary activities: local displacement control and global path planning. Both activities have to be connected across different space and time scales in order to obtain a smooth and responsive system that follows the path and adapts to the unforeseen situations imposed by the real world. There is no clear consensus on how to do this, and some important problems are still open. In this paper we present the first steps towards a new navigation agent controlling both the robot's base and its arm. We address several of these problems in the design of this agent, including robust localization integrating several information sources, incremental learning of free navigation and manipulation space, hand visual servoing in camera space to reduce backlash and calibration errors, and an internal path representation as an elastic band that is projected onto the real world through sensor measurements. A set of experiments with the robot Ursus in real and simulated scenarios is presented, showing some encouraging results.
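As a hedged sketch of the elastic-band idea referenced in the abstract, the code below relaxes a 2D path under internal contraction forces and repulsion from measured obstacles; gains, radii and the demo data are illustrative placeholders rather than the agent's actual parameters.

# Sketch of an elastic-band path update: internal contraction forces smooth the
# path while repulsion from nearby obstacles deforms it; gains and thresholds
# are illustrative only.
import numpy as np


def elastic_band_step(path, obstacles, k_internal=0.4, k_obstacle=0.8, influence=1.0):
    """One relaxation step over all intermediate waypoints of a 2D path."""
    new_path = path.copy()
    for i in range(1, len(path) - 1):
        # Contraction towards the midpoint of the neighbours keeps the band taut.
        internal = k_internal * ((path[i - 1] + path[i + 1]) / 2.0 - path[i])
        # Repulsion from obstacles within the influence radius.
        repulsion = np.zeros(2)
        for obs in obstacles:
            diff = path[i] - obs
            dist = np.linalg.norm(diff)
            if 1e-6 < dist < influence:
                repulsion += k_obstacle * (influence - dist) * diff / dist
        new_path[i] = path[i] + internal + repulsion
    return new_path


if __name__ == "__main__":
    path = np.linspace([0.0, 0.0], [5.0, 0.0], 11)   # straight initial band
    obstacles = [np.array([2.5, 0.2])]               # one obstacle near the path
    for _ in range(30):
        path = elastic_band_step(path, obstacles)
    print("Deformed waypoints:\n", np.round(path, 2))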

Mario Haut, Luis Manso, Daniel Gallego, Mercedes Paoletti, Pablo Bustos, Antonio Bandera, Adrián Romero-Garcés
Building a Warehouse Control System Using RIDE

There is a growing interest in using Autonomous Guided Vehicles (AGVs) in Warehouse Control Systems (WCS) in order to avoid installing fixed structures that complicate and reduce the flexibility for future changes. In this paper, a highly flexible, hybrid-operated WCS developed with the Robotics Integrated Development Environment (RIDE) is presented. The prototype is a forklift with cognitive capabilities that can be operated manually or autonomously, and it is now being tested in a warehouse located in the Parque Tecnológico Logístico (PTL) of Vigo. The main advantages and drawbacks of this kind of implementation are also discussed in the paper.

Joaquín López, Diego Pérez, Iago Vaamonde, Enrique Paz, Alba Vaamonde, Jorge Cabaleiro
Backmatter
Metadata
Title
Robot 2015: Second Iberian Robotics Conference
Editors
Luís Paulo Reis
António Paulo Moreira
Pedro U. Lima
Luis Montano
Victor Muñoz-Martinez
Copyright Year
2016
Electronic ISBN
978-3-319-27149-1
Print ISBN
978-3-319-27148-4
DOI
https://doi.org/10.1007/978-3-319-27149-1