Add new paper: #44

Open
wyzh0912 opened this issue Feb 23, 2025 · 0 comments

Comments

@wyzh0912
Contributor

Title

Improving Contextual Faithfulness of Large Language Models via Retrieval Heads-Induced Optimization

Published Date

2025-01-25

Source

arXiv

Head Name

Retrieval Head

Summary

  • Innovation: The paper introduces RHIO, a framework that improves the contextual faithfulness of retrieval-augmented LLMs by explicitly teaching them to distinguish faithful from unfaithful outputs, using control tokens together with unfaithful samples generated by masking retrieval heads.

  • Tasks: The study (i) augments training data with unfaithful samples produced by masking retrieval heads, (ii) applies faithfulness-aware tuning with control tokens so the model learns to differentiate faithful from unfaithful outputs, and (iii) uses self-induced decoding at inference time to improve faithfulness on long-form question answering (a minimal sketch of the masking and decoding steps follows this list).

  • Significant Result: RHIO significantly improves faithfulness on long-form question answering, with average gains of 12.84% and 12.59% for the 7B and 13B models, respectively, even outperforming the state-of-the-art GPT-4o on the GroundBench benchmark.

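The two core operations from the summary, masking retrieval heads to induce context-unfaithful samples and contrastively combining the faithful and head-masked passes (the paper's self-induced decoding), can be sketched in a few lines of PyTorch. This is a minimal, hypothetical illustration, not the paper's implementation: the head indices, function names, and the exact decoding rule and `alpha` value are all assumptions.

```python
import torch
import torch.nn.functional as F

# Hypothetical layer -> retrieval-head-index map; RHIO identifies retrieval
# heads empirically, so the concrete indices here are placeholders.
RETRIEVAL_HEADS = {0: [2, 5], 3: [1]}

def attention_with_head_mask(q, k, v, layer_idx, mask_retrieval=False):
    """Scaled dot-product attention over (batch, num_heads, seq, head_dim)
    tensors. When mask_retrieval is True, the output of the designated
    retrieval heads is zeroed out, which is how unfaithful samples are
    induced from the model itself."""
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    out = F.softmax(scores, dim=-1) @ v
    if mask_retrieval and layer_idx in RETRIEVAL_HEADS:
        out[:, RETRIEVAL_HEADS[layer_idx], :, :] = 0.0  # ablate retrieval heads
    return out

def self_induced_decoding(logits_faithful, logits_unfaithful, alpha=1.0):
    """Contrastive-decoding-style combination of two forward passes: one
    normal (faithful) and one with retrieval heads masked (unfaithful),
    amplifying tokens the faithful pass supports and the masked pass does
    not. A sketch; the paper's exact rule may differ."""
    return (1.0 + alpha) * logits_faithful - alpha * logits_unfaithful

# Toy usage: run the same inputs twice, then combine the resulting logits.
q = k = v = torch.randn(1, 8, 16, 64)
faithful = attention_with_head_mask(q, k, v, layer_idx=0)
unfaithful = attention_with_head_mask(q, k, v, layer_idx=0, mask_retrieval=True)
```

In the full method these would be the final LM-head logits of two complete forward passes, and the faithful/unfaithful modes would be selected via the control tokens learned during faithfulness-aware tuning.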