Commit 41d3082 (parent 7cfea0d)

Add Unsloth to RLHF.md (vllm-project#21636)
1 file changed: docs/training/rlhf.md (5 additions, 1 deletion)
```diff
@@ -2,10 +2,14 @@
 
 Reinforcement Learning from Human Feedback (RLHF) is a technique that fine-tunes language models using human-generated preference data to align model outputs with desired behaviors.
 
-vLLM can be used to generate the completions for RLHF. The best way to do this is with libraries like [TRL](https://github.com/huggingface/trl), [OpenRLHF](https://github.com/OpenRLHF/OpenRLHF) and [verl](https://github.com/volcengine/verl).
+vLLM can be used to generate the completions for RLHF. Some ways to do this include using libraries like [TRL](https://github.com/huggingface/trl), [OpenRLHF](https://github.com/OpenRLHF/OpenRLHF), [verl](https://github.com/volcengine/verl) and [unsloth](https://github.com/unslothai/unsloth).
 
 See the following basic examples to get started if you don't want to use an existing library:
 
 - [Training and inference processes are located on separate GPUs (inspired by OpenRLHF)](../examples/offline_inference/rlhf.md)
 - [Training and inference processes are colocated on the same GPUs using Ray](../examples/offline_inference/rlhf_colocate.md)
 - [Utilities for performing RLHF with vLLM](../examples/offline_inference/rlhf_utils.md)
+
+See the following notebooks showing how to use vLLM for GRPO:
+
+- [Qwen-3 4B GRPO using Unsloth + vLLM](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3_(4B)-GRPO.ipynb)
```
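For readers who want the gist without pulling in an RL library, here is a minimal sketch of the generation step the doc describes, using vLLM's offline `LLM` API. The model name and prompts are placeholder assumptions, not part of the commit; in a full RLHF loop these completions would be scored by a reward model and the scores used to update the policy.

```python
# Minimal sketch: generating RLHF rollout completions with vLLM's offline API.
# The model name and prompts below are placeholders, not from the commit above.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-1.5B-Instruct")  # placeholder policy model

# Sample several completions per prompt so a reward model can rank them.
sampling_params = SamplingParams(n=4, temperature=0.8, max_tokens=256)

prompts = ["Explain RLHF in one sentence."]  # placeholder prompts
outputs = llm.generate(prompts, sampling_params)

for request_output in outputs:
    for completion in request_output.outputs:
        # In a full RLHF loop, each completion would be scored by a reward
        # model and the scores used to update the policy (e.g. PPO or GRPO).
        print(completion.text)
```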
