hamishivi

Adding DeepSpeed Ulysses for extreme long-context training, controllable via `sequence_parallel_size`. If set to n > 1, sequences are split across n GPUs during training.
Loosely following: https://www.deepspeed.ai/tutorials/ulysses-alst-sequence-parallelism/
Since this sits at the attention level rather than inside any particular model, we can apply it to any HF model! Not sure how it will behave with the newer OLMos.
Testing now, hence the draft status.
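
For reference, here's a minimal sketch of the core all-to-all that Ulysses-style sequence parallelism relies on, written against plain `torch.distributed`. The shapes, names, and the process-group handle are illustrative, not this PR's actual implementation; it assumes `num_heads` is divisible by the sequence-parallel degree.

```python
# Sketch only: the Ulysses exchange that trades a sequence shard for a
# head shard, so each rank attends over the full sequence with a subset
# of heads. Names/shapes are assumptions, not this PR's code.
import torch
import torch.distributed as dist


def ulysses_all_to_all(x: torch.Tensor, sp_group) -> torch.Tensor:
    """All-to-all from sequence-sharded to head-sharded layout.

    Input:  x of shape [seq_len // sp, num_heads, head_dim] on each rank.
    Output: shape [seq_len, num_heads // sp, head_dim].
    """
    sp = dist.get_world_size(group=sp_group)
    # Split the local heads into sp chunks; chunk j is sent to rank j.
    inputs = [t.contiguous() for t in x.chunk(sp, dim=1)]
    outputs = [torch.empty_like(inputs[0]) for _ in range(sp)]
    dist.all_to_all(outputs, inputs, group=sp_group)
    # outputs[j] is rank j's sequence shard for our head group;
    # concatenating along the sequence dim rebuilds the full sequence.
    return torch.cat(outputs, dim=0)
```

In the full scheme, Q, K, and V each go through this exchange before local attention, and an inverse all-to-all restores the sequence sharding afterwards, which is why it composes with arbitrary HF attention implementations.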

