Trust, Trustworthiness, and the Future of Medical AI: Outcomes of an Interdisciplinary Expert Workshop.

Journal: Journal of Medical Internet Research

Abstract

Trustworthiness has become a key concept for the ethical development and application of artificial intelligence (AI) in medicine. Various guidelines have formulated key principles, such as fairness, robustness, and explainability, as essential components to achieve trustworthy AI. However, conceptualizations of trustworthy AI often emphasize technical requirements and computational solutions, frequently overlooking broader aspects of fairness and potential biases. These include not only algorithmic bias but also human, institutional, social, and societal factors, which are critical to foster AI systems that are both ethically sound and socially responsible. This viewpoint article presents an interdisciplinary approach to analyzing trust in AI and trustworthy AI within the medical context, focusing on (1) social sciences and humanities conceptualizations and legal perspectives on trust and (2) their implications for trustworthy AI in health care. It addresses real-world challenges in medicine that are often underrepresented in theoretical discussions, with the aim of proposing a more practice-oriented understanding. Insights were gathered from an interdisciplinary workshop with experts from various disciplines involved in the development and application of medical AI, particularly in oncological imaging and genomics, complemented by theoretical approaches related to trust in AI. Results emphasize that, beyond common issues of bias and fairness, knowledge and human involvement are essential for trustworthy AI. Stakeholder engagement throughout the AI life cycle emerged as crucial, supporting a human- and multicentered framework for trustworthy AI implementation. Findings emphasize that trust in medical AI depends on providing meaningful, user-oriented information and balancing knowledge with acceptable uncertainty. Experts highlighted the importance of confidence in the tool's functionality, specifically that it performs as expected.
Trustworthiness was shown to be not a feature but rather a relational process, involving humans, their expertise, and the broader social or institutional contexts in which AI tools operate. Trust is dynamic, shaped by interactions among individuals, technologies, and institutions, and ultimately centers on people rather than tools alone. Tools are evaluated based on reliability and credibility, yet trust fundamentally relies on human connections. The article underscores the need to develop AI tools that are not only technically sound but also ethically robust and broadly accepted by end users, contributing to more effective and equitable AI-mediated health care. Findings highlight that building AI trustworthiness in health care requires a human-centered, multistakeholder approach with diverse and inclusive engagement. To promote equity, we recommend that AI development teams involve all relevant stakeholders at every stage of the AI life cycle, from conception and technical development to clinical validation and real-world deployment.

Authors

  • Melanie Goisauf
    Department of ELSI Services and Research, Biobanking and Biomolecular Resources Research Infrastructure Consortium, Graz, Austria.
  • Mónica Cano Abadía
    Department of ELSI Services and Research, Biobanking and Biomolecular Resources Research Infrastructure Consortium, Graz, Austria.
  • Kaya Akyüz
    Department of ELSI Services and Research, Biobanking and Biomolecular Resources Research Infrastructure Consortium, Graz, Austria.
  • Maciej Bobowicz
    2nd Department of Radiology, Medical University of Gdansk, Gdansk, Poland.
  • Alena Buyx
    Institute for History and Ethics of Medicine, Technical University of Munich School of Medicine, Technical University of Munich, Munich, Germany.
  • Ilaria Colussi
    Department of ELSI Services and Research, Biobanking and Biomolecular Resources Research Infrastructure Consortium, Graz, Austria.
  • Marie-Christine Fritzsche
    Institute of History and Ethics in Medicine, Department of Preclinical Medicine, TUM School of Medicine and Health, Technical University of Munich, Ismaninger Straße 22, 81675, Munich, Germany.
  • Karim Lekadir
    Information and Communication Technologies Department, Universitat Pompeu Fabra, Barcelona, Spain.
  • Pekka Marttinen
    Department of Computer Science, Aalto University, Espoo, Finland.
  • Michaela Th Mayrhofer
    BBMRI-ERIC, Graz, Austria.
  • Janos Meszaros
    Division of Clinical Pharmacology and Pharmacotherapy, Department of Pharmaceutical and Pharmacological Sciences, KU Leuven, Leuven, Belgium.