Shape mismatch in RoPE embeddings gpt_neox model when rotary_ndims is odd #35233
Comments
Hi @mseeger, thanks for the bug report! First question: Does this affect any of the major gpt-neox checkpoints on the Hub, or do they all have even `rotary_ndims`? Secondly: Would you be willing to make a PR to fix this? I think either of the two solutions you suggested is viable, as long as they don't change the output for existing models that aren't broken.
Sure, I can do this. I could also try to find out whether any of the other models have the same issue. I'd be surprised if any models on the Hub surface this issue, since otherwise their creators would have noticed, no? But I can do a quick check. I'd prefer the second option, unless I am missing something.
@Rocketknight1: I found a model where the issue would likely arise: https://huggingface.co/Isotonic/gpt_neox_225M/blob/main/config.json
Pretty odd one.
https://huggingface.co/mkshing/gpt-neox-285m-init/blob/main/config.json
Another one of the same size.
Hmm, yeah - their config values are pretty unusual.
Anyway, for now, I think you can just make the PR, and we can test it with models with more 'normal' inits!
The HF approach of massively copy&pasting code between different models is really painful if one has to change something. I know this seems to be a trademark, but the risk is quite high that in fixes like these, one misses some models. The copy and pasting is even done within models.
This is taking a little longer than I thought. There are almost 40 models involved. I am trying to write new tests, which takes the most time, because frankly many of these implementations just do different things. I know this lowers the barrier to entry, I suppose, but making changes/fixes such as this one is painful. BTW: I also found at least 2-3 bugs affecting some of the models. Shall I send a separate PR for fixing them? @Rocketknight1
Woah, okay - are all 40 models copying the same code?
That would be nice. But there are different implementations (the best one is the one of ...). I'll not refactor anything, because I don't want to change anything; I just fix bugs. But it may be a good exercise to unify RoPE across all models that use it, so that it really is just copy&paste.
First part of resolution of huggingface#35233
- Changes related to `position_embeddings` being a mandatory argument
- Remove `position_ids` argument of `apply_rotary_pos_emb`
- Replace `torch.stack` by `torch.cat`, former requires equal shapes
- `esm`: RoPE depends on `position_ids`, which was ignored.
- `gpt_neox`: Selection of attention compute type via class removed
- `gptj`: RoPE must be applied per head, and some shape issues.
- `nemotron`: `config.partial_rotary_factor` was not implemented.
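One of the commit bullets above replaces `torch.stack` with `torch.cat`. As a generic illustration (my own sketch, not the exact lines changed in the PR): `torch.stack` needs identically shaped inputs, while `torch.cat` only needs the non-concatenated dims to match, which is exactly the situation when the two halves of an odd-length rotary slice differ in size by one.

```python
# Generic illustration (not the exact code changed in the PR): with an odd
# rotary slice, the two halves have different lengths, so torch.stack fails
# while torch.cat still works.
import torch

x = torch.randn(4, 5, 3)            # last dim odd, e.g. rotary_ndims == 3
x1 = x[..., : x.shape[-1] // 2]     # shape (4, 5, 1)
x2 = x[..., x.shape[-1] // 2 :]     # shape (4, 5, 2)

print(torch.cat((-x2, x1), dim=-1).shape)   # torch.Size([4, 5, 3])

try:
    torch.stack((-x2, x1), dim=-1)          # needs identical shapes -> RuntimeError
except RuntimeError as e:
    print("stack failed:", e)
```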
#35376 is the first of 2 PRs fixing this issue. I split it into two for easier reviewing.
First part of resolution of huggingface#35233
- Changes related to `position_embeddings` being a mandatory argument
- Remove `position_ids` argument of `apply_rotary_pos_emb`
- Replace `torch.stack` by `torch.cat`, former requires equal shapes
- `esm`: RoPE depends on `position_ids`, which was ignored.
- `gpt_neox`: Selection of attention compute type via class removed
- `gptj`, `codegen`: RoPE must be applied per head, and some shape issues.
- `nemotron`: `config.partial_rotary_factor` was not implemented.
System Info

`transformers` version: 4.48.0.dev0

Who can help?

No response

Information

Tasks

An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)

Reproduction
I just appended the following to https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt_neox/modeling_gpt_neox.py:
Then, I ran
This gives me the following error output:
This is what I expected. Your code does not work if `rotary_ndims` is odd; here, it is 3. The way that `cos`, `sin` are computed gives them a final dim size of `2 * ceil(rotary_ndims / 2) == rotary_ndims + 1`, which is 1 too large.

Note that your code actually "works" if `rotary_ndims = 1`: then `cos`, `sin` have final dim size 2 and the code above works due to broadcasting (1 broadcast to 2); both `q`, `k` have a final dim that is 1 too large, but that still works. But once `rotary_ndims` is odd and larger than 1, it fails.
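Since the appended script and the error output are not reproduced above, here is a minimal standalone sketch (an illustration written for this report, not the original snippet) that mirrors the GPTNeoX-style RoPE shape computation and triggers the same mismatch for an odd `rotary_ndims`:

```python
import torch

def rotate_half(x):
    # Standard RoPE helper: rotates half the hidden dims of the input.
    x1 = x[..., : x.shape[-1] // 2]
    x2 = x[..., x.shape[-1] // 2 :]
    return torch.cat((-x2, x1), dim=-1)

rotary_ndims = 3   # odd on purpose; any odd value > 1 triggers the mismatch
base = 10000.0
seq_len = 5

# inv_freq has ceil(rotary_ndims / 2) == 2 entries when rotary_ndims == 3
inv_freq = 1.0 / (base ** (torch.arange(0, rotary_ndims, 2).float() / rotary_ndims))
t = torch.arange(seq_len, dtype=torch.float32)
freqs = torch.outer(t, inv_freq)            # shape [seq_len, 2]
emb = torch.cat((freqs, freqs), dim=-1)     # shape [seq_len, 4] == rotary_ndims + 1
cos, sin = emb.cos(), emb.sin()

# The rotary slice of q has last dim rotary_ndims == 3, so the multiply fails.
q_rot = torch.randn(1, 1, seq_len, rotary_ndims)
try:
    (q_rot * cos) + (rotate_half(q_rot) * sin)
except RuntimeError as e:
    print("shape mismatch:", e)
```

With any even `rotary_ndims`, `emb` ends up with exactly `rotary_ndims` columns and the same expression broadcasts cleanly.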
Expected behavior

Without this bug, `cos` and `sin` should have size `rotary_ndims` in the final dimension, no matter whether this is even or odd. My suggestions:
- require `rotary_ndims` to be even (a rough sketch of such a check follows this list), or
- fix the computation of `cos`, `sin` so their final dim size is `rotary_ndims`.
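A rough sketch of what the first option could look like. The field names follow the GPTNeoX config (`hidden_size`, `num_attention_heads`, `rotary_pct`); the check itself is illustrative and not necessarily what the fix PRs do:

```python
# Hypothetical sketch: reject configurations that would produce an odd rotary dimension.
def validated_rotary_ndims(hidden_size: int, num_attention_heads: int, rotary_pct: float) -> int:
    head_dim = hidden_size // num_attention_heads
    rotary_ndims = int(head_dim * rotary_pct)
    if rotary_ndims % 2 != 0:
        raise ValueError(
            f"rotary_ndims={rotary_ndims} is odd; RoPE pairs dimensions, so an even "
            f"rotary dimension is required. Adjust rotary_pct (got {rotary_pct}) "
            f"or the head size (got {head_dim})."
        )
    return rotary_ndims

print(validated_rotary_ndims(768, 12, 0.25))   # head_dim=64 -> rotary_ndims=16, fine
print(validated_rotary_ndims(96, 8, 0.25))     # head_dim=12 -> rotary_ndims=3, raises ValueError
```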
My feeling is that this affects not only this single model but many others as well. But I did not check.