eyeNotate: Interactive Annotation of Mobile Eye Tracking Data Based on Few-Shot Image Classification.

Journal: Journal of Eye Movement Research
Published Date:

Abstract

Mobile eye tracking is an important tool in psychology and human-centered interaction design for understanding how people process visual scenes and user interfaces. However, analyzing recordings from head-mounted eye trackers, which typically include an egocentric video of the scene and a gaze signal, is a time-consuming and largely manual process. To address this challenge, we develop eyeNotate, a web-based annotation tool that enables semi-automatic data annotation and learns to improve from corrective user feedback. Users can manually map fixation events to areas of interest (AOIs) in a video-editing-style interface (baseline version). Further, our tool can generate fixation-to-AOI mapping suggestions based on a few-shot image classification model (IML-support version). We conduct an expert study with trained annotators (n = 3) to compare the baseline and IML-support versions. We measure perceived usability, the validity and reliability of the annotations, and efficiency during a data annotation task, in which participants re-annotate the data of a single individual from an existing dataset (n = 48). Further, we conduct semi-structured interviews to understand how participants used the provided IML features and to assess our design decisions. In a post hoc experiment, we investigate the performance of three image classification models in annotating the data of the remaining 47 individuals.
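The abstract does not specify the few-shot model itself; purely as an illustration of the suggestion-plus-corrective-feedback idea, the sketch below shows a minimal nearest-class-mean ("prototype") classifier over precomputed image embeddings of fixation crops. The class name FewShotAOIClassifier, its methods, and the AOI labels are hypothetical and not taken from the article; the embeddings are assumed to come from some image encoder not shown here.

```python
import numpy as np

class FewShotAOIClassifier:
    """Hypothetical nearest-class-mean classifier over fixation-crop embeddings.

    Each AOI is represented by the mean (prototype) of the embeddings of its
    labeled examples; a new fixation crop is suggested the AOI whose prototype
    is most similar. Corrective user labels simply extend the support set.
    """

    def __init__(self):
        self.support = {}  # AOI label -> list of embedding vectors

    def add_example(self, embedding, aoi_label):
        """Add a labeled embedding, e.g. taken from a user's corrective feedback."""
        self.support.setdefault(aoi_label, []).append(np.asarray(embedding, dtype=float))

    def suggest(self, embedding):
        """Return (best AOI label, cosine similarity) for a fixation-crop embedding."""
        if not self.support:
            return None, 0.0
        x = np.asarray(embedding, dtype=float)
        x = x / (np.linalg.norm(x) + 1e-12)
        best_label, best_sim = None, -np.inf
        for label, vectors in self.support.items():
            proto = np.mean(vectors, axis=0)
            proto = proto / (np.linalg.norm(proto) + 1e-12)
            sim = float(np.dot(x, proto))
            if sim > best_sim:
                best_label, best_sim = label, sim
        return best_label, best_sim


# Toy usage with random 512-dimensional vectors standing in for real image features.
rng = np.random.default_rng(0)
clf = FewShotAOIClassifier()
for _ in range(3):
    clf.add_example(rng.normal(size=512), "tablet")
    clf.add_example(rng.normal(size=512), "worksheet")
print(clf.suggest(rng.normal(size=512)))
```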

Authors

  • Michael Barz
    Interactive Machine Learning, German Research Center for Artificial Intelligence (DFKI), 66123 Saarbrücken, Germany.
  • Omair Shahzad Bhatti
    Interactive Machine Learning, German Research Center for Artificial Intelligence (DFKI), 66123 Saarbrücken, Germany; omair_shahzad.bhatti@dfki.de.
  • Hasan Md Tusfiqur Alam
    Interactive Machine Learning, German Research Center for Artificial Intelligence (DFKI), 66123 Saarbrücken, Germany; hasan_md_tusfiqur.alam@dfki.de.
  • Duy Minh Ho Nguyen
    Interactive Machine Learning, German Research Center for Artificial Intelligence (DFKI), 66123 Saarbrücken, Germany; ho_minh_duy.nguyen@dfki.de.
  • Kristin Altmeyer
    Department of Education, Saarland University, 66123 Saarbrücken, Germany; kristin.altmeyer@uni-saarland.de.
  • Sarah Malone
    Department of Education, Saarland University, 66123 Saarbrücken, Germany; s.malone@mx.uni-saarland.de.
  • Daniel Sonntag
    Interactive Machine Learning, German Research Center for Artificial Intelligence (DFKI), Stuhlsatzenhausweg 3, 66123 Saarbrücken, Germany; daniel.sonntag@dfki.de.
