[operator] add silu_and_mul C++ wrapper #982
base: master
Conversation
Summary of Changes: Hello @qianwan, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed: this pull request integrates a new C++ wrapper for the silu_and_mul operator.
Code Review

This pull request successfully adds a C++ wrapper for the silu_and_mul operator, including the Triton kernel, C++ implementation, tests, and Python bindings. The code is well-structured and the changes are logical. My review includes a couple of suggestions for improvement. One is an efficiency improvement in the C++ _out variant to avoid an unnecessary memory copy. The other is a small refactoring in the test code to improve clarity by using torch::silu directly. Overall, this is a solid contribution.
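For readers unfamiliar with the operator: judging from the reference computation quoted below, silu_and_mul is the fused gated-activation op that computes out = silu(x) * y elementwise. A minimal eager-mode sketch in libtorch (illustrative only, not this PR's kernel; the function name is made up):

#include <torch/torch.h>

// Sketch: the unfused equivalent of the fused op, assuming
// out = silu(x) * y with x and y of matching shape.
at::Tensor silu_and_mul_reference(const at::Tensor& x, const at::Tensor& y) {
  return at::silu(x) * y;
}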
auto result_dtype = torch::result_type(x, y);
auto x_fp32 = x.to(torch::kFloat32);
auto y_fp32 = y.to(torch::kFloat32);
auto silu = torch::sigmoid(x_fp32) * x_fp32;
For clarity and maintainability, it's better to use torch::silu directly in your reference computation instead of manually implementing it with torch::sigmoid(x_fp32) * x_fp32. This makes the code more readable and directly expresses the intent.
Suggested change:
- auto silu = torch::sigmoid(x_fp32) * x_fp32;
+ auto silu = torch::silu(x_fp32);
As Gemini suggests, use torch::silu in the reference implementation.
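For context, the reference check above presumably continues by multiplying with y and casting back to the common dtype, along the lines of this sketch (the ref variable and the final cast are assumptions, based on the result_dtype computed in the quoted snippet and the usual definition out = silu(x) * y):

auto silu = torch::silu(x_fp32);              // as suggested above
auto ref = (silu * y_fp32).to(result_dtype);  // assumed: silu(x) * y, cast to the result dtype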
at::Tensor out_contig;
if (out.is_contiguous()) {
  out_contig = out;
} else {
  out_contig = out.contiguous();
}
launch_silu_and_mul(x, y, out_contig);
if (!out.is_contiguous()) {
  out.copy_(out_contig);
}
The current implementation for handling non-contiguous output tensors in silu_and_mul_out involves a potentially expensive data copy. When out is not contiguous, out.contiguous() is called, which creates a new tensor and copies the contents of out. Since out is an output tensor, its initial contents are not needed, making this copy unnecessary.

You can improve efficiency by creating an empty tensor with the same properties as out instead. This avoids the unnecessary copy.
auto out_contig = out.is_contiguous() ? out : at::empty_like(out);
launch_silu_and_mul(x, y, out_contig);
if (!out.is_contiguous()) {
out.copy_(out_contig);
}
Same idea as Gemini's suggestion.
if (out.is_contiguous()) {
  out_contig = out;
} else {
  out_contig = out.contiguous();
There is no need to turn the non-contiguous out into a contiguous out_contig via out.contiguous(). Just creating a fresh contiguous tensor is fine, since out_contig is only written to, not read from.
and hasattr(torch.ops, "flag_gems")
and hasattr(torch.ops.flag_gems, "silu_and_mul")
and hasattr(torch.ops.flag_gems, "silu_and_mul_out")
)
Skip this and just use the global flag. The conditions for using the C++ implementation are:
- has_c_extension: the library is built with the C extension; and
- use_c_extension: controlled by an environment variable.
PR Category
Type of Change
Description
Issue
Progress
Performance