Foundation models for radiology: the position of the AI for Health Imaging (AI4HI) network

Journal: Insights into Imaging

Abstract

Foundation models are large models trained on big data that can be adapted to a wide range of downstream tasks. In radiology, these models can potentially address several gaps in fairness and generalization, as they can be trained on massive datasets without labelled data and then adapted to downstream tasks using only a small number of annotated examples. This reduces one of the limiting bottlenecks in clinical model construction, data annotation, as these models can be trained through a variety of techniques that require little more than radiological images, with or without their corresponding radiological reports. However, foundation models may be insufficient on their own: they are affected, albeit to a smaller extent than traditional supervised learning approaches, by the same issues that lead to underperforming models, such as a lack of transparency/explainability and biases. To address these issues, we advocate that the development of foundation models should not only be pursued but also accompanied by the development of a decentralized clinical validation and continuous training framework. This does not guarantee the resolution of the problems associated with foundation models, but it enables developers, clinicians and patients to know when, how and why models should be updated, creating a clinical AI ecosystem that is better able to serve all stakeholders.

Critical relevance statement

Foundation models may mitigate issues like bias and poor generalization in radiology AI, but challenges persist. We propose a decentralized, cross-institutional framework for continuous validation and training to enhance model reliability, safety, and clinical utility.

Key points

  • Foundation models trained on large datasets reduce annotation burdens and improve fairness and generalization in radiology.
  • Despite improvements, they still face challenges like limited transparency, explainability, and residual biases.
  • A decentralized, cross-institutional framework for clinical validation and continuous training can strengthen reliability and inclusivity in clinical AI.

Authors

  • José Guilherme de Almeida
    Champalimaud Foundation, Lisbon, Portugal. jose.almeida@research.fchampalimaud.org.
  • Leonor Cerdá Alberich
    La Fe Health Research Institute, Valencia, Spain.
  • Gianna Tsakou
    MAGGIOLI S.P.A., Research and Development Lab, Marousi, Greece.
  • Kostas Marias
    Computational BioMedicine Laboratory, FORTH-ICS, Heraklion, Crete, Greece.
  • Manolis Tsiknakis
    Computational BioMedicine Laboratory, FORTH-ICS, Heraklion, Crete, Greece.
  • Karim Lekadir
    Information and Communication Technologies Department, Universitat Pompeu Fabra, Barcelona, Spain.
  • Luis Marti-Bonmati
    QUIBIM SL, Valencia, Spain.
  • Nikolaos Papanikolaou
    Computational Biomedicine Laboratory (CBML), Foundation for Research and Technology Hellas (FORTH), 70013 Heraklion, Greece.
