Comparing traditional natural language processing and large language models for mental health status classification: a multi-model evaluation.
Journal:
Scientific Reports
Published Date:
Jul 6, 2025
Abstract
The substantial global increase in mental health disorders necessitates scalable, accurate tools for detecting and classifying these conditions in digital environments. This study addresses the challenge of automated mental health classification by comparing three computational approaches: (1) traditional natural language processing (NLP) with advanced feature engineering, (2) prompt-engineered large language models (LLMs), and (3) fine-tuned LLMs. The dataset consisted of over 51,000 publicly available text statements from social media platforms, labeled with one of seven mental health statuses: Normal, Depression, Suicidal, Anxiety, Stress, Bipolar Disorder, and Personality Disorder. The dataset was stratified into training, validation, and test sets for model evaluation. The primary outcome was classification accuracy across these seven statuses; precision, recall, and F1-score were also analyzed. The three approaches were compared on these metrics, and overfitting in the fine-tuned LLM was monitored through validation loss across epochs. The NLP model with advanced feature engineering achieved an overall accuracy of 95%, surpassing both the prompt-engineered LLM (65%) and the fine-tuned LLM (91%), and performed exceptionally well in both accuracy and precision. Fine-tuning for three epochs yielded optimal results; further training led to overfitting and decreased performance. This study demonstrates the significant benefits of applying advanced text preprocessing and feature engineering to traditional NLP models, and of fine-tuning LLMs such as GPT-4o-mini, for mental health classification tasks. The results indicate that off-the-shelf LLM chatbots using prompt engineering are inadequate for mental health classification, performing 30 percentage points below specialized NLP approaches. Despite the popularity of general-purpose LLMs, specialized approaches remain superior for critical healthcare applications such as mental health classification.
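The abstract does not specify the exact features or classifier used in the traditional NLP pipeline, so the following is only a minimal illustrative sketch of the general workflow it describes: a stratified split of labeled text, a feature-engineered lexical model, and evaluation with accuracy, precision, recall, and F1-score. The TF-IDF features, logistic regression classifier, and split proportions here are assumptions, not the authors' method.

```python
# Illustrative sketch only: TF-IDF + logistic regression and the split
# proportions are assumptions; the paper's actual feature engineering and
# classifier are not described in the abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# The seven status labels listed in the abstract.
LABELS = ["Normal", "Depression", "Suicidal", "Anxiety",
          "Stress", "Bipolar Disorder", "Personality Disorder"]

def evaluate_baseline(texts, labels):
    # Stratified split preserves the class distribution across subsets,
    # mirroring the stratified partitioning described in the study.
    X_train, X_test, y_train, y_test = train_test_split(
        texts, labels, test_size=0.2, stratify=labels, random_state=42)

    # Traditional NLP baseline: sparse lexical features + linear classifier.
    model = Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=2)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    model.fit(X_train, y_train)

    # Per-class precision, recall, and F1-score plus overall accuracy,
    # the metrics reported in the abstract.
    print(classification_report(y_test, model.predict(X_test), labels=LABELS))
```

A comparable evaluation of the prompt-engineered and fine-tuned LLMs would reuse the same held-out test set and `classification_report` call, so that the three approaches are scored on identical data and metrics.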