
2020 | Book

Advances on Robotic Item Picking

Applications in Warehousing & E-Commerce Fulfillment

Editors: Dr. Albert Causo, Dr. Joseph Durham, Prof. Kris Hauser, Kei Okada, Dr. Alberto Rodriguez

Publisher: Springer International Publishing


About this book

This book is a compilation of advanced research and applications on robotic item picking and warehouse automation for e-commerce. The works in this book are based on results from the Amazon Robotics Challenge of 2015–2017, which focused on fully automated item picking in a warehouse setting, a problem previously assumed too complicated to solve or reduced to more tractable forms such as bin picking or single-item tabletop picking. The book’s contributions present some of the top solutions from the 50 participating teams. Each solution addresses the time constraints, accuracy requirements, complexity, and other difficulties that come with warehouse item picking. The book covers topics such as grasping and gripper design, vision and other forms of sensing, actuation and robot design, motion planning, optimization, machine learning and artificial intelligence, software engineering, and system integration, among others. Through this book, the authors describe how robot systems are built from the ground up to perform a specific task, in this case item picking in a warehouse setting. The compiled works come from leading robotics research institutions and companies around the world.

Table of Contents

Frontmatter
Team CVAP’s Mobile Picking System at the Amazon Picking Challenge 2015
Abstract
In this paper we present the system we developed for the Amazon Picking Challenge 2015 and discuss some of the lessons learned that may prove useful to researchers and future teams developing autonomous robot picking systems. For the competition we used a PR2 robot, a dual-arm research platform equipped with a mobile base and a variety of 2D and 3D sensors. We adopted a behavior tree to model the overall task execution, coordinating the different perception, localization, navigation, and manipulation activities of the system in a modular fashion (a minimal behavior-tree sketch follows this entry). Our perception system detects and localizes the target objects in the shelf and consists of two components: one for detecting textured rigid objects using the SimTrack vision system, and one for detecting non-textured or non-rigid objects using RGB-D features. In addition, we designed a set of grasping strategies to enable the robot to reach and grasp objects inside the confined volume of the shelf bins. The competition was a unique opportunity to integrate the work of various researchers at the Robotics, Perception and Learning laboratory (formerly the Computer Vision and Active Perception Laboratory, CVAP) of KTH, and it tested the performance of our robotic system and shaped the future direction of our research.
Kaiyu Hang, Francisco E. Viña B., Michele Colledanchise, Karl Pauwels, Alessandro Pieropan, Danica Kragic
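The abstract above mentions coordinating perception, localization, navigation, and manipulation through a behavior tree. The following is a minimal, hypothetical Python sketch of that idea, not Team CVAP's actual code: the leaf actions (localize_shelf, detect_target, grasp_textured, grasp_rgbd) are placeholders standing in for the real subsystems.

```python
# Minimal behavior-tree sketch: sequence and selector nodes coordinating
# placeholder perception and grasping actions in a modular, retry-friendly way.
from enum import Enum

class Status(Enum):
    SUCCESS, FAILURE, RUNNING = range(3)

class Sequence:
    """Ticks children in order; stops at the first child that is not SUCCESS."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.SUCCESS:
                return status
        return Status.SUCCESS

class Selector:
    """Ticks children in order; succeeds as soon as one child succeeds."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.FAILURE:
                return status
        return Status.FAILURE

class Action:
    """Leaf node wrapping a callable that returns a Status."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return self.fn()

# Hypothetical subsystem calls (placeholders for the real perception/motion stack).
def localize_shelf():  return Status.SUCCESS
def detect_target():   return Status.SUCCESS
def grasp_textured():  return Status.FAILURE   # e.g. a textured-object grasp attempt
def grasp_rgbd():      return Status.SUCCESS   # e.g. an RGB-D feature-based fallback

pick_item = Sequence([
    Action(localize_shelf),
    Action(detect_target),
    Selector([Action(grasp_textured), Action(grasp_rgbd)]),  # try strategies in turn
])

print(pick_item.tick())  # Status.SUCCESS
```

The selector node captures the fallback behavior implied by the abstract: one grasping strategy is attempted first and the RGB-D-feature strategy acts as the alternative when it fails.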
Team UAlberta: Amazon Picking Challenge Lessons
Abstract
Our team participated in the 2015 picking challenge with a 7-DOF Barrett WAM arm and hand. Our robot was placed on a fixed base and had difficulty reaching and grasping objects deep inside the shelf bins. Given limited resources, we came up with a simple design: a custom-made mechanism to bring objects to the edge of the shelf bins. During trials we explored both image-based visual servoing and Kinect RGB-D vision. The former, while precise, relied on fragile video tracking. The final system used open-loop RGB-D vision and readily available open-source tools. In this chapter we document our strategy and the lessons we have learned.
Camilo Perez Quintero, Oscar Ramirez, Andy Hess, Rong Feng, Masood Dehghan, Hong Zhang, Martin Jagersand
A Soft Robotics Approach to Autonomous Warehouse Picking
Abstract
Nowadays, an increasing research effort is devoted to picking automation. Toward this goal the main challenge is flexibility: a single robot should be able to pick objects of various shapes, sizes, and physical properties. It has been shown that soft robots, thanks to the elasticity embedded in their structure by design, can adapt to grasp different objects and interact robustly with unstructured environments. In this paper, we present a soft-robotics-based automatic solution for picking that embeds variable stiffness actuators and the Pisa/IIT SoftHand. This robot took part in the Amazon Picking Challenge.
Tommaso Pardi, Mattia Poggiani, Emanuele Luberto, Alessandro Raugi, Manolo Garabini, Riccardo Persichini, Manuel Giuseppe Catalano, Giorgio Grioli, Manuel Bonilla, Antonio Bicchi
Top-Down Approach to Object Identification in a Cluttered Environment Using Single RGB-D Image and Combinational Classifier for Team Nanyang’s Robotic Picker at the Amazon Picking Challenge 2015
Abstract
In order to determine a proper grasp strategy for a picking robot, identification of the target item in a cluttered environment such as a shelf bin is necessary. However, the presence of non-target items in the same bin can result in misidentification and wrong picking. This paper presents a method that uses RGB-D image data from a training image library to improve the identity estimation likelihood for each object class. Global features extracted from a set of segments in the RGB-D image are matched against those of many different segments belonging to different object classes. The ratio of the best matching scores achieved with each possible object class in the library is then converted to a class identity likelihood estimate for each object (a minimal sketch of this score-to-likelihood conversion follows this entry). The strategy was successfully implemented in a robotic picking system deployed at the Amazon Picking Challenge 2015. The system could identify and differentiate between the 25 items available for picking.
Albert Causo, Sayyed Omar Kamal, Stephen Vidas, I-Ming Chen
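As an illustration of the score-to-likelihood step described above, the following is a minimal sketch assuming cosine similarity as the matching score and simple sum normalization; the chapter's actual features, matcher, and normalization may differ, and the class names are hypothetical.

```python
# Convert per-class best matching scores into a class likelihood estimate.
import numpy as np

def class_likelihoods(segment_feature, library):
    """library: dict mapping class name -> (M, D) array of feature vectors,
    one per training segment. Returns a dict of normalized likelihoods."""
    best_scores = {}
    for cls, features in library.items():
        # Cosine similarity against every library segment of this class;
        # keep the best match as that class's score.
        sims = features @ segment_feature / (
            np.linalg.norm(features, axis=1) * np.linalg.norm(segment_feature) + 1e-9)
        best_scores[cls] = float(sims.max())
    total = sum(best_scores.values()) + 1e-9
    # Ratio of each class's best score to the sum -> likelihood-like estimate.
    return {cls: s / total for cls, s in best_scores.items()}

# Toy usage with random features for three hypothetical classes.
rng = np.random.default_rng(0)
library = {c: rng.normal(size=(20, 64)) for c in ["duck_toy", "glue_stick", "notebook"]}
observed = rng.normal(size=64)
print(class_likelihoods(observed, library))
```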
Team KTH’s Picking Solution for the Amazon Picking Challenge 2016
Abstract
In this chapter we summarize the solution developed by team KTH for the Amazon Picking Challenge 2016 in Leipzig, Germany. The competition, which simulated a warehouse automation scenario, was divided into two parts: a picking task, where the robot picks items from a shelf and places them into a tote, and a stowing task, where the robot picks items from a tote and places them in a shelf. We describe our approach to the problem starting with a high-level overview of the system, delving later into the details of our perception pipeline and strategy for manipulation and grasping. The hardware platform used in our solution consists of a Baxter robot equipped with multiple vision sensors.
Diogo Almeida, Rares Ambrus, Sergio Caccamo, Xi Chen, Silvia Cruciani, João F. Pinto B. De Carvalho, Joshua A. Haustein, Alejandro Marzinotto, Francisco E. Viña B., Yiannis Karayiannidis, Petter Ögren, Patric Jensfelt, Danica Kragic
End-to-End Learning of Object Grasp Poses in the Amazon Robotics Challenge
Abstract
The Amazon Robotics Challenge (ARC) is a robotics competition aimed at advancing warehouse automation. One of its engineering challenges is making the system robust to, and able to handle, a wide variety of objects, as would be the case in a real warehouse. In this paper, we briefly describe our system used in the ARC, featuring a method to obtain object grasp poses, containing both the location of the object and the orientation of the grasp, using a convolutional neural network with an RGB-D image as input (a minimal sketch of such a network follows this entry). Through our entry in ARC 2016, we show the effectiveness of our method and the robustness of our network model to a large variety of object types in dense and unstructured environments where occlusions are possible.
Eiichi Matsumoto, Masaki Saito, Ayaka Kume, Jethro Tan
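The following is a minimal sketch of a convolutional network mapping an RGB-D image to a grasp pose (pixel location plus in-plane orientation). It is not the authors' architecture; the layer sizes and the (u, v, cos, sin) output parameterization are illustrative assumptions.

```python
# Small CNN: 4-channel RGB-D image in, grasp location and orientation out.
import torch
import torch.nn as nn

class GraspPoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Predict normalized (u, v) location and (cos, sin) of the grasp angle.
        self.head = nn.Linear(64, 4)

    def forward(self, rgbd):
        x = self.features(rgbd).flatten(1)
        u, v, c, s = self.head(x).unbind(dim=1)
        theta = torch.atan2(s, c)              # in-plane grasp orientation
        return torch.sigmoid(u), torch.sigmoid(v), theta

net = GraspPoseNet()
rgbd = torch.randn(1, 4, 224, 224)             # batch of one RGB-D image
u, v, theta = net(rgbd)
print(u.item(), v.item(), theta.item())
```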
A Systems Engineering Analysis of Robot Motion for Team Delft’s APC Winner 2016
Abstract
In this chapter we take a systems engineering standpoint to perform a postmortem analysis of the design of the robot motion subsystem in Team Delft's robot, winner of the Amazon Picking Challenge 2016, in order to understand the benefits and limitations of the motion planning approach taken. We use an analysis framework based on Model-Based Systems Engineering with the ISE&PPOOA methodology, and a novel model of levels of robot automation. The functional approach of ISE&PPOOA helps us understand how the design decisions in the architecture of the solution impact the performance and quality attributes of the capabilities required for the competition. The levels of robot automation help analyze the fundamental properties of the control schemes applied at different levels of the control architecture when handling uncertainty.
Carlos Hernandez Corbato, Mukunda Bharatheesha
Standing on Giant’s Shoulders: Newcomer’s Experience from the Amazon Robotics Challenge 2017
Abstract
International competitions have fostered innovation in fields such as artificial intelligence, robotic manipulation, and computer vision, and incited teams to push the state of the art. In this chapter, we present the approach, design philosophy and development strategy that we followed during our participation in the Amazon Robotics Challenge 2017, a competition focused on warehouse automation. After introducing our solution, we detail the development of two of its key features: the suction tool and storage system. A systematic analysis of the suction force and details of the end effector features, such as suction force control, grasping, and collision detection, are also presented. Finally, this chapter reflects on the lessons we learned from our participation in the competition, which we believe are valuable to future robot challenge participants, as well as warehouse automation system designers.
Gustavo Alfonso Garcia Ricardez, Lotfi El Hafi, Felix von Drigalski
Team C2M: Two Cooperative Robots for Picking and Stowing in Amazon Picking Challenge 2016
Abstract
Automating the picking and placing of a variety of objects stored on shelves is a challenging problem for robotic picking systems in distribution warehouses. Object recognition using image processing is especially effective for picking and placing a variety of objects. In this study, we propose an efficient method for object recognition based on the object grasping position for picking robots. We use a convolutional neural network (CNN), which can achieve highly accurate object recognition. In typical CNN-based methods, objects are recognized from an image containing the picking targets, from which object regions suitable for grasping are then detected. However, these methods increase the computational cost because a large number of weight filters are convolved with the whole image. The proposed method first detects all graspable positions from the image. It then classifies an optimal grasping position by feeding an image of the local region at each grasping point to the CNN (a minimal sketch of this two-step approach follows this entry). By recognizing the grasping positions of the objects first, the computational cost is reduced because the CNN performs fewer convolutions.
Hironobu Fujiyoshi, Takayoshi Yamashita, Shuichi Akizuki, Manabu Hashimoto, Yukiyasu Domae, Ryosuke Kawanishi, Masahiro Fujita, Ryo Kojima, Koji Shiratsuchi
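The two-step idea described above can be illustrated with the following minimal sketch: a cheap proposal step suggests candidate grasp points, and a small CNN classifies only the local patch around each candidate instead of convolving the whole image. The proposal heuristic, patch size, and network are assumptions for illustration, not Team C2M's implementation.

```python
# Step 1: propose candidate grasp points; step 2: classify local patches with a CNN.
import torch
import torch.nn as nn

def propose_grasp_points(depth, stride=32):
    """Placeholder proposal step: a sparse grid of candidate pixels.
    A real system would use grasp-position detection instead."""
    h, w = depth.shape
    return [(y, x) for y in range(stride, h, stride) for x in range(stride, w, stride)]

class PatchClassifier(nn.Module):
    """Small CNN scoring a 64x64 RGB patch around a candidate grasp point."""
    def __init__(self, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, num_classes),
        )
    def forward(self, patch):
        return self.net(patch)

rgb = torch.randn(3, 256, 256)
depth = torch.randn(256, 256)
clf = PatchClassifier(num_classes=10)

scores = []
for y, x in propose_grasp_points(depth):
    patch = rgb[:, y - 32:y + 32, x - 32:x + 32]   # local region at the grasp point
    if patch.shape[1:] != (64, 64):
        continue
    logits = clf(patch.unsqueeze(0))
    scores.append(((y, x), logits.softmax(dim=1).max().item()))

best = max(scores, key=lambda s: s[1])             # most confident grasp candidate
print(best)
```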
Description of IITK-TCS System for ARC 2017
Abstract
In this chapter, we provide details of the system used in our participation in the Amazon Robotics Challenge 2017 held in Nagoya, Japan. Our hardware system comprised a UR10 robot manipulator with an eye-in-hand 2D/3D vision system and a suction-based gripper. The novel contributions of this work include (1) a deep-learning-based vision system for recognizing and segmenting products in clutter; (2) a new geometry-based grasping algorithm that can find graspable affordances in extreme clutter (a minimal affordance-scoring sketch follows this entry); (3) a hybrid two-finger gripper that combines both suction and gripping action; and (4) a system for automating the generation of the annotated templates needed for training deep networks. The resulting system achieved a pick rate of 2–3 objects per minute. The IITK-TCS team secured fifth position in the stow task, third position in the pick task, and fourth position in the final round of the challenge.
Anima Majumder, Olyvia Kundu, Samrat Dutta, Swagat Kumar, Laxmidhar Behera
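As a loose illustration of geometry-based grasp affordances, the sketch below scores points in a cloud by local flatness and normal alignment with the camera axis, a common heuristic for suction candidates. It is an assumption-laden toy example, not the chapter's algorithm.

```python
# Score suction suitability of each point from local surface geometry.
import numpy as np

def suction_scores(points, k=20, camera_axis=np.array([0.0, 0.0, 1.0])):
    """points: (N, 3) point cloud. Returns per-point suction suitability in [0, 1]."""
    scores = np.zeros(len(points))
    for i, p in enumerate(points):
        # k nearest neighbours by Euclidean distance (brute force for clarity).
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]
        centered = nbrs - nbrs.mean(axis=0)
        # Smallest-eigenvalue direction of the covariance = local surface normal;
        # its relative eigenvalue measures how flat the patch is.
        w, v = np.linalg.eigh(centered.T @ centered)
        normal, flat_err = v[:, 0], w[0] / (w.sum() + 1e-9)
        alignment = abs(normal @ camera_axis)       # normal facing the sensor
        scores[i] = alignment * (1.0 - min(flat_err * 10, 1.0))
    return scores

# Toy usage: a noisy horizontal plane should contain high-scoring points.
rng = np.random.default_rng(1)
cloud = np.column_stack([rng.uniform(-0.1, 0.1, 500),
                         rng.uniform(-0.1, 0.1, 500),
                         rng.normal(0.5, 0.001, 500)])
print(suction_scores(cloud).max())
```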
Designing Cartman: A Cartesian Manipulator for the Amazon Robotics Challenge 2017
Abstract
The Amazon Robotics Challenge enlisted sixteen teams to each design a pick-and-place robot for autonomous warehousing of everyday household items. Herein we present the design of our custom-built Cartesian robot, Cartman, which won first place in the competition finals. We highlight our integrated, experience-centred design methodology and the key aspects of our system that contributed to our competitiveness.
Jürgen Leitner, Douglas Morrison, Anton Milan, Norton Kelly-Boxall, Matthew McTaggart, Adam W. Tow, Peter Corke
Backmatter
Metadata
Title
Advances on Robotic Item Picking
Editors
Dr. Albert Causo
Dr. Joseph Durham
Prof. Kris Hauser
Kei Okada
Dr. Alberto Rodriguez
Copyright Year
2020
Electronic ISBN
978-3-030-35679-8
Print ISBN
978-3-030-35678-1
DOI
https://doi.org/10.1007/978-3-030-35679-8