add backward of conv2d #365

Open · wants to merge 9 commits into base: master

Conversation

@FatJhon (Collaborator) commented Dec 16, 2024

Add backward passes for the input, weight, and bias of conv2d.
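
For reference, "backward of input, weight, and bias" means producing the three gradients below. This is only an illustrative check against PyTorch autograd, not the PR's kernel; the shapes and hyperparameters are made up.

import torch

# Illustrative shapes only; any valid conv2d configuration works here.
x = torch.randn(2, 3, 8, 8, requires_grad=True)
w = torch.randn(4, 3, 3, 3, requires_grad=True)
b = torch.randn(4, requires_grad=True)

y = torch.nn.functional.conv2d(x, w, b, stride=2, padding=1)
grad_out = torch.randn_like(y)

# The three gradients a conv2d backward implementation has to produce.
grad_x, grad_w, grad_b = torch.autograd.grad(y, (x, w, b), grad_out)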

@Galaxy1458 (Collaborator) left a comment

Thank you for your contribution. Could you provide performance data for our comparative analysis?

@@ -230,7 +228,7 @@ def conv2d_forward_kernel(
@triton.autotune(
    configs=[
        triton.Config(
Collaborator:

Please configure these at FlagGems/src/flag_gems/runtime/backend/_nvidia/tune_configs.yaml.
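
For context, the hunk above uses Triton's in-decorator autotune pattern; the request is to move those candidate configs into tune_configs.yaml instead. Below is a minimal, self-contained sketch of the pattern being discussed; the BLOCK_SIZE values and key are placeholders, not the PR's actual configs.

import triton
import triton.language as tl

@triton.autotune(
    configs=[
        triton.Config({"BLOCK_SIZE": 64}, num_warps=4),   # placeholder values
        triton.Config({"BLOCK_SIZE": 128}, num_warps=8),  # placeholder values
    ],
    key=["n_elements"],
)
@triton.jit
def _copy_kernel(x_ptr, y_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(0)
    offs = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offs < n_elements
    tl.store(y_ptr + offs, tl.load(x_ptr + offs, mask=mask), mask=mask)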

@StrongSpoon (Collaborator) left a comment

There is an error in the benchmark. Please fix it.

@pytest.mark.parametrize("dilation", [1, 2])
@pytest.mark.parametrize("bias", [True, False])
def test_accuracy_conv2d(shape, kernel, stride, padding, groups, dtype, dilation, bias):
    torch.manual_seed(0)
Collaborator:

Is manual_seed necessary here?

revert_weight = revert_weight.transpose(1, 2).contiguous()
revert_weight = revert_weight.reshape(
    groups * weight_c, out_c, weight_height, weight_width
).contiguous()
Collaborator:

The redundant contiguous() calls might waste resources.
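
A minimal sketch of one way to address this, reusing the names from the hunk above: dropping the intermediate .contiguous() gives the same result, because reshape() follows the logical element order (copying if it has to) and the trailing .contiguous() still guarantees a dense layout, so at most one copy is made instead of two.

# Sketch only; equivalent to the two statements above with the intermediate
# .contiguous() removed.
revert_weight = revert_weight.transpose(1, 2).reshape(
    groups * weight_c, out_c, weight_height, weight_width
).contiguous()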

if stride_height > 1 or stride_width > 1:
    for i in range(out_grad.shape[2]):
        for j in range(out_grad.shape[3]):
            new_out[:, :, i * (stride_height), j * (stride_width)] = out_grad[
Collaborator:

This assignment will cost a lot of time. Is there a better way?

Collaborator:

Maybe you can refer to the implementation in flip and use a copy_func to fill the elements in new_out.
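
A vectorized alternative is also possible; this sketch assumes new_out is pre-allocated with zeros at the dilated spatial size and that the truncated right-hand side above is out_grad[:, :, i, j]. A single strided slice assignment then replaces the Python double loop:

# Sketch only; scatters out_grad into the zero-initialized new_out in one
# slice assignment instead of a per-element Python loop.
if stride_height > 1 or stride_width > 1:
    new_out[:, :, ::stride_height, ::stride_width] = out_grad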

    device=device,
)

grid_weight = lambda meta: (
Collaborator:

Since the weight is generally not large, I suggest not tiling it by BLOCK_CI_HK_WK.
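
A hypothetical sketch of that suggestion (BLOCK_CO and the example sizes are assumptions, not names from the PR): tile only the output-channel axis and let each program cover the full Ci*Hk*Wk extent of the weight.

import triton

out_c, groups = 64, 1  # example sizes for illustration

grid_weight = lambda meta: (
    triton.cdiv(out_c, meta["BLOCK_CO"]),  # tile only the output-channel axis
    groups,                                # one program per group
)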
