OHO: A Multi-Modal, Multi-Purpose Dataset for Human-Robot Object Hand-Over.

Journal: Sensors (Basel, Switzerland)
Published Date:

Abstract

In the context of collaborative robotics, handing over hand-held objects to a robot is a safety-critical task. Therefore, a robust distinction between human hands and presented objects in image data is essential to avoid contact between the robotic gripper and the hand. To enable the development of machine learning methods for this problem, we created the OHO (Object Hand-Over) dataset of tools and other everyday objects being held by human hands. Our dataset consists of color, depth, and thermal images, complemented by pose and shape information about the objects in a real-world scenario. Although the focus of this paper is on instance segmentation, our dataset also enables training for other tasks, such as 3D pose estimation or shape estimation of objects. For the instance segmentation task, we present a pipeline for automated label generation in point clouds as well as in image data. Through baseline experiments, we show that these labels are suitable for training an instance segmentation model that distinguishes hands from objects on a per-pixel basis. Moreover, we present qualitative results of applying our trained model in a real-world application.
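The abstract only outlines the automated label-generation pipeline. As a rough illustration of the general idea (a minimal sketch, not the authors' actual implementation), per-pixel labels can be obtained by projecting an already-labeled point cloud into the camera image; the function name, parameters, and label convention below are illustrative assumptions, and points are assumed to be given in the camera coordinate frame with known pinhole intrinsics.

# Minimal sketch (not the authors' pipeline): project a labeled point cloud
# into the camera image to obtain a per-pixel label image.
import numpy as np

def project_labels(points, labels, fx, fy, cx, cy, height, width):
    """points: (N, 3) array in camera coordinates (z > 0).
    labels: (N,) integer array, e.g. 0 = background, 1 = hand, 2 = object.
    Returns an (H, W) label image; pixels hit by no point stay 0."""
    label_img = np.zeros((height, width), dtype=np.int32)
    depth_buf = np.full((height, width), np.inf)

    z = points[:, 2]
    valid = z > 0
    u = np.round(points[valid, 0] * fx / z[valid] + cx).astype(int)
    v = np.round(points[valid, 1] * fy / z[valid] + cy).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)

    for ui, vi, zi, li in zip(u[inside], v[inside],
                              z[valid][inside], labels[valid][inside]):
        # keep only the closest point per pixel (simple z-buffer)
        if zi < depth_buf[vi, ui]:
            depth_buf[vi, ui] = zi
            label_img[vi, ui] = li
    return label_img

Such a projected label image can then serve directly as a per-pixel ground truth mask for training an instance segmentation network on the RGB, depth, or thermal images.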

Authors

  • Benedict Stephan
    Neuroinformatics and Cognitive Robotics Lab, Technische Universität Ilmenau, 98693 Ilmenau, Germany.
  • Mona Köhler
    Neuroinformatics and Cognitive Robotics Lab, Technische Universität Ilmenau, 98693 Ilmenau, Germany.
  • Steffen Müller
    Neuroinformatics and Cognitive Robotics Lab, Technische Universität Ilmenau, 98693 Ilmenau, Germany.
  • Yan Zhang
    Affiliated Hospital of Liaoning University of Traditional Chinese Medicine, Shenyang, 110032, China.
  • Horst-Michael Gross
    Neuroinformatics and Cognitive Robotics Lab, Technische Universität Ilmenau, 98693 Ilmenau, Germany.
  • Gunther Notni
    Group for Quality Assurance and Industrial Image Processing, Technische Universität Ilmenau, 98693 Ilmenau, Germany.