Comparing large language models and human annotators in latent content analysis of sentiment, political leaning, emotional intensity, and sarcasm.

Journal: Scientific Reports
PMID:

Abstract

In the era of rapid digital communication, vast amounts of textual data are generated daily, demanding efficient methods for latent content analysis to extract meaningful insights. Large Language Models (LLMs) offer potential for automating this process, yet comprehensive assessments comparing their performance to human annotators across multiple dimensions are lacking. This study evaluates the inter-rater reliability, consistency, and quality of seven state-of-the-art LLMs, including variants of OpenAI's GPT-4, Gemini, Llama-3.1-70B, and Mixtral 8x7B. Their performance is compared to that of human annotators in analyzing sentiment, political leaning, emotional intensity, and sarcasm detection. The study involved 33 human annotators and eight LLM variants assessing 100 curated textual items, resulting in 3,300 human and 19,200 LLM annotations. LLM performance was also evaluated across three time points to measure temporal consistency. The results reveal that both humans and most LLMs exhibit high inter-rater reliability in sentiment analysis and political leaning assessments, with LLMs demonstrating higher reliability than humans. In emotional intensity, LLMs likewise displayed higher reliability than humans, though humans rated emotional intensity significantly higher. Both groups struggled with sarcasm detection, as evidenced by low reliability. Most LLMs showed excellent temporal consistency across all dimensions, indicating stable performance over time. This research concludes that LLMs, especially GPT-4, can effectively replicate human analysis in sentiment and political leaning, although human expertise remains essential for interpreting emotional intensity. The findings demonstrate the potential of LLMs for consistent and high-quality performance in certain areas of latent content analysis.
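The abstract reports inter-rater reliability without naming the specific statistic in this excerpt. As a minimal illustration of how agreement between two annotators on categorical labels (e.g., sentiment classes) might be quantified, the sketch below computes Cohen's kappa in plain Python; the labels and values are hypothetical, not taken from the study.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items with categories."""
    assert len(rater_a) == len(rater_b) and rater_a, "need matched, non-empty label lists"
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: expected overlap if each rater labeled independently
    # according to their own marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical sentiment labels from two annotators on six items.
annotator_1 = ["pos", "neg", "neu", "pos", "neg", "pos"]
annotator_2 = ["pos", "neg", "neu", "neg", "neg", "pos"]
print(round(cohens_kappa(annotator_1, annotator_2), 3))  # → 0.739
```

Kappa corrects raw percent agreement for agreement expected by chance, which is why it is preferred over simple accuracy when label distributions are skewed; values near 1 indicate high reliability, values near 0 indicate chance-level agreement.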

Authors

  • Ljubiša Bojić
    Institute for Artificial Intelligence Research and Development of Serbia, Fruskogorska, Novi Sad, Serbia. ljubisa.bojic@ivi.ac.rs.
  • Olga Zagovora
    Rheinland-Pfälzische Technische Universität Kaiserslautern-Landau (RPTU), Fortstraße 7, 76829, Landau, Germany.
  • Asta Zelenkauskaite
    Vilnius Gediminas Technical University, Saulėtekio al. 11, 10223, Vilnius, Lithuania.
  • Vuk Vuković
    Faculty of Dramatic Arts, University of Montenegro, Bajova 6, 81250, Cetinje, Montenegro.
  • Milan Čabarkapa
    Faculty of Engineering, University of Kragujevac, Kragujevac, 34000, Serbia.
  • Selma Veseljević Jerković
    Faculty of Humanities and Social Sciences, Department of English Language and Literature, University of Tuzla, Tuzla, Bosnia and Herzegovina.
  • Ana Jovančević
    Faculty of Education and Health Sciences, Department of Psychology, University of Limerick, National Technological Park Limerick, Limerick, V94 T9PX, Ireland.