Forewarning Artificial Intelligence about Cognitive Biases.
Journal:
Medical decision making : an international journal of the Society for Medical Decision Making
Published Date:
Jun 24, 2025
Abstract
Artificial intelligence models display human-like cognitive biases when generating medical recommendations. We tested whether an explicit forewarning, "Please keep in mind cognitive biases and other pitfalls of reasoning," might mitigate biases in OpenAI's generative pretrained transformer large language model. We used 10 clinically nuanced cases to test specific biases with and without a forewarning. Responses from the forewarning group were 50% longer and discussed cognitive biases more than 100 times more frequently than responses from the control group. Despite these differences, the forewarning decreased overall bias by only 6.9%, and no bias was extinguished completely. These findings highlight the need for clinician vigilance when interpreting generated responses that may appear thoughtful and deliberate.
Highlights
- Artificial intelligence models can be warned to avoid racial and gender bias.
- Forewarning artificial intelligence models to avoid cognitive biases does not adequately mitigate multiple pitfalls of reasoning.
- Critical reasoning remains an important clinical skill for practicing physicians.