Dietary Assessment With Multimodal ChatGPT: A Systematic Analysis.

Journal: IEEE Journal of Biomedical and Health Informatics
PMID:

Abstract

Conventional approaches to dietary assessment are primarily grounded in self-reporting or structured interviews conducted under the supervision of dietitians. These methods, however, are often subjective, inaccurate, and time-intensive. Although artificial intelligence (AI)-based solutions have been devised to automate the dietary assessment process, prior AI methodologies address the task in a fragmented manner (e.g., merely recognizing food types or estimating portion sizes) and struggle to generalize across diverse food categories, dietary behaviors, and cultural contexts. Recently, multimodal foundation models such as GPT-4V have exhibited transformative potential across a wide range of tasks and research domains, demonstrating remarkable generalist intelligence and accuracy owing to large-scale pre-training on broad datasets and substantially scaled model size. In this study, we explore the application of GPT-4V, the model powering multimodal ChatGPT, to dietary assessment, combined with prompt engineering and passive monitoring techniques. We evaluated the proposed pipeline on a self-collected, semi-free-living dietary intake dataset captured with wearable cameras. Our findings reveal that GPT-4V excels at food detection under challenging conditions without any fine-tuning or adaptation on food-specific datasets. When guided by targeted language prompts (e.g., African cuisine), the model shifts from recognizing common staples such as rice and bread to accurately identifying regional dishes such as banku and ugali. Another standout feature of GPT-4V is its contextual awareness: GPT-4V can leverage surrounding objects as scale references to deduce the portion sizes of food items, further facilitating the dietary assessment process.
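
The abstract does not include code; the following is a minimal sketch of how a prompt-engineered GPT-4V query of the kind described above might look, assuming the OpenAI Python SDK. The model name, prompt wording, and helper functions are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch (assumption, not the authors' code): querying GPT-4V with a
# cuisine-specific prompt and a request to use surrounding objects as scale
# references for portion-size estimation.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def encode_image(path: str) -> str:
    """Base64-encode a wearable-camera frame for the vision API."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


def assess_meal(image_path: str, cuisine_hint: str = "African cuisine") -> str:
    # The cuisine hint steers recognition toward regional dishes (e.g., banku,
    # ugali) rather than generic staples; the scale-reference instruction
    # exploits the model's contextual awareness for portion estimation.
    prompt = (
        f"This image shows a meal, likely {cuisine_hint}. "
        "Identify each food item, then estimate its portion size in grams, "
        "using surrounding objects (plates, cutlery, hands) as scale references."
    )
    response = client.chat.completions.create(
        model="gpt-4-vision-preview",  # assumed GPT-4V-capable model name
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": "data:image/jpeg;base64,"
                                      + encode_image(image_path)}},
            ],
        }],
        max_tokens=500,
    )
    return response.choices[0].message.content


print(assess_meal("frame_0421.jpg"))  # hypothetical camera frame
```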

Authors

  • Frank P-W Lo
    Hamlyn Centre, Department of Surgery and Cancer, Imperial College London, London SW7 2AZ, UK. po.lo15@imperial.ac.uk.
  • Jianing Qiu
    Hamlyn Centre, Department of Computing, Imperial College London, London SW7 2AZ, UK. jianing.qiu17@imperial.ac.uk.
  • Zeyu Wang
    Department of Neurosurgery, Xiangya Hospital, Central South University, Changsha, China.
  • Junhong Chen
    The Hamlyn Centre for Robotic Surgery, Imperial College London, London, SW7 2AZ, UK.
  • Bo Xiao
    Department of Urology, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China.
  • Wu Yuan
Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong Special Administrative Region of China. wyuan@cuhk.edu.hk.
  • Stamatia Giannarou
Hamlyn Centre for Robotic Surgery, Department of Surgery and Cancer, Imperial College London, London, UK.
  • Gary Frost
    Section for Nutrition Research, Department of Metabolism, Digestion and Reproduction, Imperial College London, London, United Kingdom.
  • Benny Lo