Visual Relationship Detection: A Survey.

Journal: IEEE Transactions on Cybernetics
Published Date:

Abstract

Visual relationship detection (VRD) is a newly developed computer vision task that aims to recognize relations or interactions between objects in an image. It is a further learning task beyond object recognition and is important for fully understanding images and even the visual world. It has numerous applications, such as image retrieval, machine vision in robotics, visual question answering (VQA), and visual reasoning. However, the problem is difficult because relationships are not definite, and the number of possible relations is much larger than the number of objects; consequently, complete annotation of visual relationships is far more difficult, making this task hard to learn. Many approaches have been proposed to tackle this problem, especially with the development of deep neural networks in recent years. In this survey, we first introduce the background of visual relations. Then, we present a categorization and the frameworks of deep learning models for visual relationship detection. High-level applications, benchmark datasets, and an empirical analysis are also introduced for a comprehensive understanding of this task.

Authors

  • Jun Cheng
    School of Electrical and Information Technology, Yunnan Minzu University, Kunming, Yunnan 650500, PR China. Electronic address: jcheng6819@126.com.
  • Lei Wang
    Department of Nursing, Beijing Hospital, National Center of Gerontology, Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, China.
  • Jiaji Wu
    School of Electronic Engineering, Xidian University, Xi'an, China.
  • Xiping Hu
  • Gwanggil Jeon
    Department of Embedded Systems Engineering, College of Information Technology, Incheon National University, 119 Academy-ro, Yeonsu-gu, Incheon, 22012, Korea. gjeon@inu.ac.kr.
  • Dacheng Tao
  • MengChu Zhou