Large Language Models' Ability to Assess Main Concepts in Story Retelling: A Proof-of-Concept Comparison of Human Versus Machine Ratings.
Journal:
American Journal of Speech-Language Pathology
Published Date:
Mar 31, 2025
Abstract
PURPOSE: Despite an abundance of manual, labor-intensive discourse analysis methods, there remains a dearth of clinically convenient, psychometrically robust instruments to measure change in real-world communication in aphasia. The Brief Assessment of Transactional Success (BATS) addresses this gap while developing automated methods for analyzing story retelling discourse. This study investigated automation of main concept (MC) analysis of stories by comparing scores from three large language models (LLMs) to those of human raters.