AI-assisted consent in paediatric medicine: ethical implications of using large language models to support decision-making.
Journal:
Journal of Medical Ethics
Published Date:
Aug 6, 2025
Abstract
Obtaining informed consent in paediatrics is an essential yet ethically complex aspect of clinical practice. Children have varying levels of autonomy and understanding based on their age and developmental maturity, with parents traditionally playing a central role in decision-making. However, there is increasing recognition of children's evolving capacities and their right to be involved in care decisions, raising questions about how to facilitate meaningful consent, or at least assent, in complex medical situations.

Large language models (LLMs) may offer a partial solution to these challenges. These generative artificial intelligence (AI) systems can provide interactive, age-appropriate explanations of medical procedures, risks and outcomes tailored to each child's comprehension level. LLMs could be designed to adapt their responses to young patients' cognitive and emotional needs while supporting parents with clear, accessible medical information.

This paper examines the ethical implications of using LLMs in paediatric consent, focusing on balancing the promotion of autonomy with the protection of children's best interests. We explore how LLMs could be used to empower children to express preferences, mediate family disputes and facilitate informed consent. However, important concerns arise: Can LLMs adequately support developing autonomy? Might they exert undue influence or worsen conflicts between family members and healthcare providers?

We conclude that while LLMs could enhance paediatric consent processes with appropriate safeguards and careful integration into clinical practice, their implementation must be approached cautiously. These systems should complement rather than replace the essential human elements of empathy, judgement and trust in paediatric consent.