SAF: An action framework to self-check the Understanding Self-Consistency of Large Language Models.
Journal:
Neural Networks: the official journal of the International Neural Network Society
PMID:
40101554
Abstract
Large Language Models (LLMs), which are trained on massive text data, have demonstrated remarkable advancements in language understanding capabilities. Nevertheless, it remains unclear to what extent LLMs have effectively captured and utilized the implicit relationships inherent in text. This study introduces 'Understanding Self-Consistency', a new perspective that reflects LLMs' ability to grasp in-depth knowledge relationships through their consistency performance. Specifically, Understanding Self-Consistency refers to a model's capacity to maintain logical and contextual consistency between inputs and responses. Inspired by human cognitive behavior, we design a self-check action framework named SAF. Within SAF, a self-question-and-answer mechanism forms a logically closed loop of four classes of actions, allowing the framework to generate, question, answer, and evaluate autonomously. Experimental results on six LLMs across two datasets show that LLMs exhibit measurable levels of Understanding Self-Consistency and differ in their grasp of knowledge relationships across reasoning paradigms. Moreover, our findings reveal that LLMs' performance can be improved with their own outputs (which we call 'self-enhanced feedforward'). Notably, SAF relies only on factual logical relationships, showcasing its potential to advance the development of embodied artificial intelligence (EAI).
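The abstract names four action classes (generate, question, answer, evaluate) that form a closed loop. The following is a minimal sketch of how such a loop could be wired up, assuming a generic `ask(prompt) -> str` LLM interface; the function names and prompts are illustrative assumptions, not the authors' actual SAF implementation.

```python
from typing import Callable

def saf_self_check(ask: Callable[[str], str], prompt: str) -> dict:
    """One pass of a generate -> question -> answer -> evaluate loop.

    Hypothetical sketch: `ask` stands in for any LLM completion call.
    """
    # 1. Generate: produce an initial response to the input.
    response = ask(prompt)
    # 2. Question: pose a self-question probing a claim in the response.
    question = ask(f"Pose a question that tests a key claim in:\n{response}")
    # 3. Answer: answer the self-posed question independently.
    answer = ask(question)
    # 4. Evaluate: judge whether the answer is consistent with the response.
    verdict = ask(
        f"Original response:\n{response}\n\n"
        f"Question: {question}\nAnswer: {answer}\n\n"
        "Are these logically consistent? Reply 'yes' or 'no'."
    )
    return {
        "response": response,
        "question": question,
        "answer": answer,
        "consistent": verdict.strip().lower().startswith("yes"),
    }
```

Under this reading, the loop is closed because the evaluation step compares the model's own answer against its original response, so an inconsistency signal can be fed back (the 'self-enhanced feedforward' effect the abstract reports) without any external supervision.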