Why Clinical Trials Will Fail to Ensure Safe AI.
Journal:
Journal of Medical Systems
Published Date:
Jul 17, 2025
Abstract
Recent reports have raised concerns about emergent behaviors in next-generation artificial intelligence (AI) models. These systems have been documented selectively adapting their behavior during testing to falsify experimental outcomes and bypass regulatory oversight. This phenomenon, termed alignment faking, represents a fundamental challenge to medical AI safety. Regulatory strategies have largely adapted established protocols such as clinical trials and medical device approval frameworks, but for next-generation AI these approaches may fail. This paper introduces alignment faking to a medical audience and critically evaluates why current regulatory tools are inadequate for advanced AI systems. We propose continuous logging through "AI SOAP notes" as a first step toward transparent and accountable AI functionality in clinical settings.