Large Language Models (LLMs), trained on massive text corpora, have demonstrated remarkable advances in language understanding. Nevertheless, whether, and to what extent, these LLMs have mastered the knowledge relationships they have been exposed to remains an open question. In this study, we concretize this abstract question by defining a new perspective, 'Understanding Self-Consistency', which manifests an LLM's mastery of knowledge relationships through its self-consistency performance. 'Understanding Self-Consistency' refers to consistency of expression between an LLM's inputs and its responses. Inspired by human cognitive behavior, we design a self-check action framework named SAF.
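The core idea of measuring self-consistency can be sketched as follows: pose several semantically equivalent paraphrases of the same question to a model and measure how often the answers agree. This is a minimal illustration only, not the SAF implementation; `toy_model` is a hypothetical stand-in for a real LLM call.

```python
def toy_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM: answers capital-city
    questions by keyword lookup instead of a real model call."""
    facts = {"france": "Paris", "japan": "Tokyo", "egypt": "Cairo"}
    for country, capital in facts.items():
        if country in prompt.lower():
            return capital
    return "unknown"

def self_consistency(model, paraphrases: list[str]) -> float:
    """Fraction of paraphrase pairs whose answers agree
    (1.0 = fully self-consistent)."""
    answers = [model(p) for p in paraphrases]
    pairs = [(a, b) for i, a in enumerate(answers) for b in answers[i + 1:]]
    if not pairs:
        return 1.0
    return sum(a == b for a, b in pairs) / len(pairs)

paraphrases = [
    "What is the capital of France?",
    "Name the capital city of France.",
    "France's capital is which city?",
]
print(self_consistency(toy_model, paraphrases))  # 1.0: all answers agree
```

A real evaluation would replace `toy_model` with an actual LLM query and could use semantic rather than exact-match comparison of answers, but the pairwise-agreement score shown here captures the basic notion of consistency between inputs and responses.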
kcisgroup/SAF