Assessing Readability of Skin Cancer Screening Resources: A Comparison of Online Websites and ChatGPT Responses.
Journal:
Journal of Cancer Education: The Official Journal of the American Association for Cancer Education
Published Date:
Jul 1, 2025
Abstract
Effective communication is essential for promoting appropriate skin cancer screening among the public. This study compares the readability of online resources and ChatGPT-generated responses on skin cancer screening. We analyzed 60 websites and responses to five questions posed to ChatGPT-4.0 using five readability metrics: the Flesch-Kincaid Reading Ease, Flesch-Kincaid Grade Level, SMOG Index, Gunning Fog Index, and Coleman-Liau Index. Both websites and ChatGPT responses exceeded the recommended sixth-grade reading level for health-related information. No significant differences in readability were found between university-hosted and non-university-hosted websites. However, across all readability metrics, ChatGPT responses were significantly more difficult to read than the website content. These findings highlight the need to enhance the accessibility of health information by aligning content with recommended literacy levels. Future efforts should focus on developing patient-centered, publicly accessible materials and refining AI-generated content to improve public understanding and encourage proactive engagement in skin cancer screening.
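The abstract does not report which software was used to compute the five scores. As a hedged illustration only, the same metrics can be obtained for any passage with the open-source Python package textstat; the function names below belong to textstat, not to the study's methodology, and the sample passage is invented for demonstration.

# Minimal sketch: scoring a passage with the five readability metrics named in the abstract.
# Requires the open-source `textstat` package (pip install textstat). This is an illustration,
# not the tooling reported by the study's authors.
import textstat

def readability_profile(text: str) -> dict:
    # Return the five readability scores referenced in the abstract for a given passage.
    return {
        "Flesch-Kincaid Reading Ease": textstat.flesch_reading_ease(text),
        "Flesch-Kincaid Grade Level": textstat.flesch_kincaid_grade(text),
        "SMOG Index": textstat.smog_index(text),
        "Gunning Fog Index": textstat.gunning_fog(text),
        "Coleman-Liau Index": textstat.coleman_liau_index(text),
    }

if __name__ == "__main__":
    # Hypothetical patient-education sentence, used only to demonstrate the scoring call.
    sample = (
        "A dermatologist checks your skin for moles or spots that look unusual. "
        "Regular skin checks help find skin cancer early, when it is easiest to treat."
    )
    for metric, score in readability_profile(sample).items():
        print(f"{metric}: {score:.1f}")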