implement an argument to directly set ff_inner_dim #52
base: main
Conversation
Also, the formatter I use changes the layout a lot, so I had to manually modify the code.
@CodiumAI-Agent /review
```diff
@@ -134,7 +135,8 @@ def __init__(
         self.norm = LayerNorm(dim)

         attn_inner_dim = dim_head * heads
-        ff_inner_dim = dim * ff_mult
+        # silently ignores ff_mult if ff_inner_dim is provided in the arguments
+        ff_inner_dim = dim * ff_mult if not ff_inner_dim else self.ff_inner_dim
```
Consider adding a check to ensure that `ff_inner_dim` is a positive integer if it is not None. This will prevent potential errors or unexpected behavior. [important]
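A minimal sketch of the validation the reviewer suggests. The parameter names `dim`, `ff_mult`, and `ff_inner_dim` match the PR; the helper function itself is hypothetical, not part of the codebase:

```python
# Hypothetical helper illustrating the suggested check; not actual codebase API.
def resolve_ff_inner_dim(dim, ff_mult, ff_inner_dim=None):
    # Fall back to the multiplier-based size when no explicit size is given
    if ff_inner_dim is None:
        return dim * ff_mult
    # Reject non-integer or non-positive explicit sizes early
    if not isinstance(ff_inner_dim, int) or ff_inner_dim <= 0:
        raise ValueError(f'ff_inner_dim must be a positive integer, got {ff_inner_dim!r}')
    return ff_inner_dim

print(resolve_ff_inner_dim(2048, 4))        # 8192
print(resolve_ff_inner_dim(2048, 4, 5440))  # 5440
```

An explicit check like this also avoids the subtle bug in `if not ff_inner_dim`, which would silently treat `ff_inner_dim=0` the same as `None`.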
```diff
@@ -134,7 +135,8 @@ def __init__(
         self.norm = LayerNorm(dim)

         attn_inner_dim = dim_head * heads
-        ff_inner_dim = dim * ff_mult
+        # silently ignores ff_mult if ff_inner_dim is provided in the arguments
+        ff_inner_dim = dim * ff_mult if not ff_inner_dim else self.ff_inner_dim
         self.fused_dims = (attn_inner_dim, dim_head, dim_head, (ff_inner_dim * 2))
```
It would be beneficial to add a comment explaining why `ff_inner_dim` is multiplied by 2 in `self.fused_dims`. This would improve code readability and maintainability. [medium]
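For context on that factor of 2: PaLM uses a SwiGLU feedforward, where a single fused projection produces both a gate and a value of width `ff_inner_dim` each. A standalone sketch (the tensor sizes here are illustrative, not the model's real dimensions):

```python
import torch
import torch.nn.functional as F

# The feedforward slice of the fused projection needs 2 * ff_inner_dim
# columns because it is later split into a gate and a value (SwiGLU).
dim, ff_inner_dim = 8, 32
x = torch.randn(2, 16, dim)                 # (batch, seq, dim)
w = torch.randn(dim, ff_inner_dim * 2)      # fused gate + value projection
gate, value = (x @ w).chunk(2, dim=-1)      # each (2, 16, ff_inner_dim)
hidden = F.silu(gate) * value               # gated hidden activations
print(hidden.shape)
```

So the doubling is consumed by the `.chunk(2, dim=-1)` split, and the output of the gating step is back to `ff_inner_dim` wide.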
```diff
@@ -511,4 +515,4 @@ def forward(
         return ret

     logits = rearrange(logits, 'b n c -> b c n')
     return F.cross_entropy(logits, labels, ignore_index = self.cross_entropy_ignore_index)
```
Consider adding a newline at the end of the file. This is a common convention that helps with file processing in various systems. [medium]
Force-pushed from 89ab8ba to f721db2
In NVIDIA's nvidia/GPT-2B-001, a very PaLM-like model is implemented. However, instead of a feedforward multiplier like `ffn_mult`, the `ffn_hidden_size` (comparable to `ffn_inner_dim` of this codebase) is directly set to 5440. This translates to a `ffn_mult` of 2.65625. However, trying this in this codebase does not work.
The error
So I implemented a way to directly set the `ffn_inner_dim`, please take a look!
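The arithmetic behind the quoted multiplier can be checked directly. The model width of 2048 is an assumption inferred from the numbers above, not stated in the PR:

```python
# Illustrative arithmetic only; dim = 2048 is an assumed model width.
dim = 2048
ffn_hidden_size = 5440
ffn_mult = ffn_hidden_size / dim
print(ffn_mult)  # 2.65625
```

Since 2.65625 is fractional, an integer-only `ff_mult` cannot reproduce this configuration, which is why an argument to set the inner dimension directly is needed.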