Add trace #992
Conversation
Code Review

This pull request introduces a new `trace` operator implemented with Triton for improved performance. The implementation is clean and follows the project's structure. The accompanying accuracy and performance tests are comprehensive, covering various shapes and data types. I've identified a couple of minor opportunities for improvement: one for code simplification by removing a redundant check, and another for enhancing test coverage. Overall, this is a great addition.
```python
if BLOCK_SIZE == 0:
    BLOCK_SIZE = 1
```
This check for `BLOCK_SIZE == 0` is redundant and can be removed. The `num_diag` value is checked for 0 on line 66, and the function returns early if it is. Therefore, `num_diag` is guaranteed to be at least 1 at this point. Since `triton.next_power_of_2(x)` returns a positive integer for any positive `x`, `BLOCK_SIZE` will never be 0 here.
```python
if dtype in FLOAT_DTYPES:
    gems_assert_close(res_out, ref_out, dtype)
else:
    gems_assert_equal(res_out, ref_out)
```
The test coverage is good. To make it even more robust, consider adding test cases for non-contiguous tensors, such as a transposed matrix. This would ensure that the implementation correctly handles different memory layouts via strides. You could achieve this by creating a transposed tensor within the existing `test_accuracy_trace` function and running the same assertions on it.
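A minimal sketch of the suggested addition, with numpy standing in for the GPU tensors (the stand-in and variable names are illustrative, not the repo's actual test code):

```python
import numpy as np

# Illustrative only: numpy stands in for torch/flag_gems here. The point is
# that a transpose yields a non-contiguous view (strides swapped), and a
# stride-aware trace must still agree with the reference on that layout.
ref = np.arange(20.0).reshape(4, 5)
transposed = ref.T                           # non-contiguous view
assert not transposed.flags["C_CONTIGUOUS"]  # memory layout differs
assert np.trace(transposed) == np.trace(np.ascontiguousarray(transposed))
```

In the real test this would run the flag_gems `trace` on a transposed torch tensor and compare against the torch reference with the same `gems_assert_*` helpers.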
LGTM
```python
grid = (1,)
BLOCK_SIZE = 1024
if num_diag < BLOCK_SIZE:
    BLOCK_SIZE = triton.next_power_of_2(num_diag)
```
Consider partitioning the task of summing the diagonal elements across several blocks when the number of elements to sum is large.
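One way to picture the multi-block version is a two-stage reduction. Here is a hedged pure-numpy sketch (the helper name, `block_size`, and the two-stage split are illustrative of what a multi-block Triton kernel would do, not the PR's code):

```python
import numpy as np

# Sketch of a two-stage (multi-block) diagonal sum. In a Triton kernel each
# "block" below would be one program instance producing a partial sum, with a
# second stage (another kernel or atomic adds) combining the partials.
def blocked_trace(a, block_size=4):
    diag = np.diagonal(a)
    # Stage 1: each block reduces its block_size-sized slice to a partial sum.
    partials = [diag[i:i + block_size].sum()
                for i in range(0, diag.size, block_size)]
    # Stage 2: combine the per-block partial sums.
    return float(np.sum(partials))

a = np.arange(36.0).reshape(6, 6)
assert blocked_trace(a, block_size=4) == np.trace(a)  # 105.0
```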
You can treat it like a normal reduction on a vector whose stride is `stride_dim0 + stride_dim1`.
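For illustration, this strided-vector view of the diagonal can be sketched with numpy's `as_strided` (the helper name is made up; the real kernel would compute offsets with the same `stride_dim0 + stride_dim1` step):

```python
import numpy as np

# Sketch: the diagonal of a (possibly non-contiguous) matrix is just a 1-D
# vector whose element stride is stride_dim0 + stride_dim1; reducing it is
# then an ordinary vector sum.
def trace_via_strides(a):
    n = min(a.shape)
    step = a.strides[0] + a.strides[1]  # byte stride along the diagonal
    diag = np.lib.stride_tricks.as_strided(a, shape=(n,), strides=(step,))
    return float(diag.sum())

a = np.arange(12.0).reshape(3, 4)
assert trace_via_strides(a) == np.trace(a)      # 15.0
assert trace_via_strides(a.T) == np.trace(a.T)  # transposed layout works too
```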
Since torch.trace's CPU implementation does not support bool, the tests fail on op-test-quick-cpu. Please remove torch.bool from the trace tests, or only test it when the reference is not on CPU.
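A hedged sketch of the second option (the helper name and dtype strings are illustrative, not the repo's test code):

```python
# Illustrative helper: drop bool from the dtype list when the reference
# implementation runs on CPU, since torch.trace on CPU rejects bool inputs.
def dtypes_for_trace(all_dtypes, ref_on_cpu):
    if ref_on_cpu:
        return [d for d in all_dtypes if d != "bool"]
    return list(all_dtypes)

assert dtypes_for_trace(["float32", "int64", "bool"], ref_on_cpu=True) == ["float32", "int64"]
assert "bool" in dtypes_for_trace(["float32", "bool"], ref_on_cpu=False)
```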
PR Category
Operator
Type of Change
New Feature
Description
In comparison with various reduction strategies, the single thread-block implementation shows better performance.
Issue
Progress
Performance
Accuracy Test

Performance Test

