
🩺 A Collection of Alignments for Large Language Models and Beyond

👋 This is a collection of papers, surveys, and other resources for research on language model alignment and beyond, covering learning from human feedback, interactive NLP, and language model alignment.

📘 Surveys

📔 Blogs

📘 Projects

📘 Leaderboards (LLM evaluations)

📚 Papers

Compared to PPO, DPO optimizes the model directly on preference data, without learning a separate reward model. The drawback is that DPO cannot make use of data that lacks human preference labels. You can think of DPO as a supervised learning method, whereas PPO is closer to a semi-supervised learning method.
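
For concreteness, below is a minimal sketch of the DPO objective, assuming the summed per-sequence log-probabilities of the chosen and rejected responses have already been computed under both the trained policy and a frozen reference model; the function and argument names are illustrative, not from any particular library.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_logps_chosen, policy_logps_rejected,
             ref_logps_chosen, ref_logps_rejected, beta=0.1):
    """Direct Preference Optimization loss.

    Each argument is a tensor of shape (batch,) holding the summed
    log-probability of the chosen / rejected response under the policy
    being trained or the frozen reference model.
    """
    # Implicit "rewards": log-ratio of policy to reference, scaled by beta.
    chosen_rewards = beta * (policy_logps_chosen - ref_logps_chosen)
    rejected_rewards = beta * (policy_logps_rejected - ref_logps_rejected)

    # Push the chosen response above the rejected one via a logistic loss,
    # i.e. -log sigmoid(margin). No explicit reward model is ever trained.
    loss = -F.logsigmoid(chosen_rewards - rejected_rewards)
    return loss.mean()
```

The preference pairs play the role of labels, which is why DPO behaves like supervised learning: every training example must come with a human (or proxy) preference judgment.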

