kcisgroup/SAF

Abstract

Large Language Models (LLMs), trained on massive text data, have demonstrated remarkable advances in language understanding. Nevertheless, it remains an open question whether, and to what extent, these LLMs have mastered the knowledge relationships they have been exposed to. In this study, we concretize this abstract issue and define a new perspective, 'Understanding Self-Consistency', which manifests an LLM's mastery of knowledge relationships through its self-consistency performance. 'Understanding Self-Consistency' refers to the consistency of expression between an LLM's inputs and its responses. Inspired by human cognitive behavior, we design a self-check action framework named S²AF. Within it, a self-question-and-answer mechanism forms a logically closed loop of four classes of actions, allowing S²AF to generate, question, answer, and evaluate autonomously. Experimental results on six LLMs across two logical-relationship datasets show that LLMs exhibit quantifiable ability in understanding self-consistency and demonstrate differentiated mastery of knowledge relationships across reasoning paradigms. Moreover, our findings reveal that LLMs' performance can be improved with their own outputs (which we call 'self-enhanced Feedforward'). Notably, because S²AF relies only on factual logical relationships, this ability can effectively advance the development of embodied artificial intelligence.
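The closed loop of four action classes described above (generate, question, answer, evaluate) can be illustrated with a short sketch. The code below is only an assumption-laden illustration, not the repository's implementation: the `self_check` function, the prompt templates, and the `ask` callable are hypothetical placeholders for whatever LLM client and prompts S²AF actually uses.

```python
"""Minimal sketch of a generate -> question -> answer -> evaluate self-check
loop, in the spirit of the abstract. All names and prompts are illustrative
assumptions, not the repository's actual code."""

from typing import Callable, Dict


def self_check(fact: str, ask: Callable[[str], str]) -> Dict[str, object]:
    """Run one closed loop of the four action classes on a single fact.

    `ask` is any function that sends a prompt to an LLM and returns its reply,
    e.g. a thin wrapper around a chat-completion API.
    """
    # 1. Generate: have the model restate the logical relationship in the fact.
    statement = ask(f"State the logical relationship expressed by: {fact}")

    # 2. Question: turn the model's own statement into a question.
    question = ask(f"Rewrite this statement as a yes/no question: {statement}")

    # 3. Answer: pose the model's own question back to it, without the fact.
    answer = ask(question)

    # 4. Evaluate: judge whether the answer is consistent with the statement.
    verdict = ask(
        "Do the following statement and answer express the same relationship? "
        f"Statement: {statement} Answer: {answer} "
        "Reply with exactly 'consistent' or 'inconsistent'."
    )

    return {
        "statement": statement,
        "question": question,
        "answer": answer,
        "consistent": verdict.strip().lower().startswith("consistent"),
    }


if __name__ == "__main__":
    # Toy stand-in for an LLM so the sketch runs end to end without an API key.
    canned = iter([
        "If it rains, the ground gets wet.",
        "Does the ground get wet if it rains?",
        "Yes, rain makes the ground wet.",
        "consistent",
    ])
    print(self_check("Rain implies a wet ground.", lambda prompt: next(canned)))
```

In this reading, the evaluate step closes the loop: the model judges its own answer against its own generated statement, which is what makes the consistency measurement self-contained.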

Figure

Dataset distribution (image: dataset_distribution1)
