
[Hackathon 7th No.39] Add API conversion rules for the Paddle code conversion tool (Group 6) #477

Merged
merged 14 commits into from
Nov 27, 2024

Conversation

Asthestarsfalll
Contributor

@Asthestarsfalll Asthestarsfalll commented Sep 18, 2024

PR Docs

PaddlePaddle/docs#6878

PR APIs

torch.nn.functional.lp_pool1d
torch.nn.functional.lp_pool2d
torch.nn.functional.threshold_
torch.nn.functional.feature_alpha_dropout
torch.nn.functional.scaled_dot_product_attention
torch.nn.LPPool1d
torch.nn.LPPool2d
torch.nn.Softmin
torch.nn.AdaptiveLogSoftmaxWithLoss
torch.nn.parameter.UninitializedParameter
torch.nn.parameter.UninitializedBuffer
torch.nn.CircularPad3d
torch.nn.utils.parametrizations.weight_norm
torch.optim.RAdam
torch.optim.NAdam

Current issues:

  1. lp_pool behavior differs between torch and Paddle: when norm_type is inf, the result should match max_pool, but torch returns 1.
  2. No mapping found for UninitializedParameter and UninitializedBuffer.


paddle-bot bot commented Sep 18, 2024

Thanks for your contribution!

@zhwesky2010
Collaborator

zhwesky2010 commented Sep 18, 2024

> [quotes the PR description and issue list above]

  1. Describe the problems from PyTorch's perspective: for a specific PyTorch feature, examine how Paddle's behavior differs and analyze whether the difference is a functional bug or a missing feature in Paddle. Give a clear conclusion instead of simply listing problems.
  2. Not every API has a ready-made mapping; you need to work out an equivalent replacement implementation that leaves the network computation unaffected.

@paddle-bot paddle-bot bot added the contributor External developers label Sep 18, 2024
@Asthestarsfalll
Contributor Author

> [quotes the PR description, issue list, and reviewer feedback above]

[screenshot: 240919_09h21m56s]

  1. According to the PyTorch documentation, when norm_type is inf the result should match max_pool. Paddle's implementation dispatches directly to max_pool in that case, whereas PyTorch computes the power sum literally, which, due to floating-point range limits, yields an output of 1.
  2. Understood.
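The collapse to 1 described above can be reproduced with plain floating-point arithmetic, no framework needed. A minimal sketch of the literal Lp formula (an illustration only, not the actual torch kernel):

```python
def lp_reduce(window, p):
    # Literal Lp-pool reduction over one window: (sum of x**p) ** (1/p)
    return sum(x ** p for x in window) ** (1.0 / p)

window = [2.0, 5.0, 3.0]
inf = float('inf')

l2 = lp_reduce(window, 2)     # ordinary L2 reduction
lim = lp_reduce(window, inf)  # x**inf overflows to inf, then inf ** (1/inf) == inf ** 0.0 == 1.0
mx = max(window)              # what the inf-norm should mathematically give (max_pool behavior)

print(l2, lim, mx)
```

Because `1/inf` is exactly `0.0` in IEEE-754 arithmetic, the final root turns any overflowed sum into `1.0`, which matches the reported torch output.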

@zhwesky2010
Collaborator

zhwesky2010 commented Sep 19, 2024

@Asthestarsfalll Unit tests failed. Please make sure CI passes; the PR will not be merged until it does:

2024-09-19 10:46:30 FAILED tests/test_nn_LPPool1d.py::test_case_3 - AssertionError: API (torch.nn...
2024-09-19 10:46:30 FAILED tests/test_nn_LPPool2d.py::test_case_6 - AssertionError: API (torch.nn...
2024-09-19 10:46:30 FAILED tests/test_nn_Softmin.py::test_case_1 - AttributeError: module 'paddle...
2024-09-19 10:46:30 FAILED tests/test_nn_Softmin.py::test_case_2 - AttributeError: module 'paddle...
2024-09-19 10:46:30 FAILED tests/test_nn_functional_lp_pool1d.py::test_case_5 - AssertionError: A...
2024-09-19 10:46:30 FAILED tests/test_nn_functional_lp_pool2d.py::test_case_6 - AssertionError: A...
2024-09-19 10:46:30 ============ 6 failed, 8152 passed, 90 skipped in 188.53s (0:03:08) ============

@zhwesky2010
Collaborator

@Asthestarsfalll For issue 1, write a disabled (skipped) unit test and note the reason in a comment.

class Softmin(paddle.nn.Softmax):
    def forward(self, x):
        return super().forward(-x)

setattr(paddle.nn, 'Softmin', Softmin)
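The subclassing trick works because softmin(x) = softmax(-x). A framework-free sketch of that identity (plain Python, not Paddle code):

```python
import math

def softmax(xs):
    # numerically stable softmax: subtract the max before exponentiating
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def softmin(xs):
    # softmin(x) = softmax(-x): smaller inputs get larger probabilities
    return softmax([-x for x in xs])

probs = softmin([1.0, 2.0, 3.0])
print(probs)  # weights sum to 1; the largest weight is on the smallest input
```

This is why delegating to the Softmax forward with a negated input is sufficient, with no other changes to the parent class.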
Contributor Author

There seems to be no way to intercept the input to forward, so this is the only way to do it.

@zhwesky2010
Collaborator

@Asthestarsfalll CI is failing and needs to be fixed.

@zhwesky2010
Collaborator

@Asthestarsfalll CI failed; please investigate.

@zhwesky2010
Collaborator

@Asthestarsfalll CI is still failing; please investigate.

@zhwesky2010
Collaborator

=========================== short test summary info ============================
2024-10-10 18:06:37 FAILED tests/test_nn_Softmin.py::test_case_1 - AttributeError: module 'paddle...
2024-10-10 18:06:37 FAILED tests/test_nn_Softmin.py::test_case_2 - AttributeError: module 'paddle...
2024-10-10 18:06:37 =========== 2 failed, 8289 passed, 105 skipped in 250.09s (0:04:10) ============

CI failed.


from apibase import APIBase

obj = APIBase("torch.nn.Softmin", is_aux_api=True)
Contributor Author

@zhwesky2010 The CI failure is probably caused by the aux code, but it still fails even with is_aux_api=True.

Collaborator

@zhwesky2010 zhwesky2010 Oct 25, 2024

> [quotes the aux-code comment above]

Does it fail in your local tests?

  • If it fails locally, you need to debug it line by line yourself.
  • If it passes locally and only fails on CI, we will help investigate or fix the CI environment.

Collaborator

@zhwesky2010 zhwesky2010 left a comment

You still need to find where the Softmin implementation is wrong; debug it line by line.

@@ -3914,6 +3914,36 @@ def generate_code(self, kwargs):
return GenericMatcher.generate_code(self, kwargs)


class SoftminMatcher(SoftmaxMatcher):
    def generate_code(self, kwargs):
        self.paddle_api = "paddle.nn.Softmin"
Collaborator

Can't this be configured in the JSON?

Contributor Author

Since the forward method has to be overridden, it doesn't seem possible to configure this via JSON.

class Softmin(paddle.nn.Softmax):
    def forward(self, x):
        return super().forward(-x)

setattr(paddle.nn, 'Softmin', Softmin)
Collaborator

Then don't setattr onto paddle; call paddle_aux.Softmin directly, so no one is misled into thinking paddle itself has paddle.nn.Softmin.

Contributor Author

Updated. Local test screenshot:
[screenshot: 241025_15h32m45s]

Contributor Author

Locally the same error appears on the first run, but the second run passes. Other tests that use aux code, such as Softmax, show the same problem.


@PaddlePaddle PaddlePaddle locked and limited conversation to collaborators Nov 5, 2024
@PaddlePaddle PaddlePaddle unlocked this conversation Nov 5, 2024
@zhwesky2010
Collaborator

@Asthestarsfalll The CI needs to be fixed here.

@luotao1
Collaborator

luotao1 commented Nov 25, 2024

Apart from torch.nn.Softmin, could the rest go into a separate PR so a first version can be merged?

Collaborator

@zhwesky2010 zhwesky2010 left a comment


  1. Please merge the latest master; this branch is many commits behind.
  2. Unit test cases must be comprehensive; all four of the following variants must be covered:
     1. Pass all arguments, all by keyword
     2. Pass all arguments, all positionally
     3. Leave all default arguments unspecified
     4. Change the keyword order
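For reference, the four variants can be sketched against a stand-in function (`pool` below is hypothetical, made up for illustration; the real tests call the torch API being mapped):

```python
def pool(kernel_size, stride=1, ceil_mode=False):
    # Hypothetical stand-in for the API under test
    return (kernel_size, stride, ceil_mode)

case1 = pool(kernel_size=3, stride=2, ceil_mode=True)  # 1. all args, all by keyword
case2 = pool(3, 2, True)                               # 2. all args, all positional
case3 = pool(3)                                        # 3. defaults left unspecified
case4 = pool(ceil_mode=True, kernel_size=3, stride=2)  # 4. keyword order changed

assert case1 == case2 == case4  # variants 1, 2 and 4 must agree
```

Covering all four styles ensures the converter handles both positional and keyword argument forms when rewriting call sites.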

if self._axis is None:
    return paddle.nn.functional.softmax(x, _get_softmax_dim(x.ndim))
return paddle.nn.functional.softmax(x, self._axis)

setattr(paddle.nn.Softmax, 'forward', forward)
Collaborator

Then don't modify paddle.nn.Softmax; put these implementations directly under Softmin. Follow the principle of minimal auxiliary code: don't change anything unless it is necessary.

out, loss = asfm(input,target)
"""
)
obj.run(pytorch_code, ["out", "loss"], check_value=False)
Collaborator

Is there randomness here? Why is check_value disabled?

Contributor Author

Both of these modules have randomly initialized weights.

if self._axis is None:
    return paddle.nn.functional.softmax(x, _get_softmax_dim(x.ndim))
return paddle.nn.functional.softmax(x, self._axis)

setattr(paddle.nn.Softmax, 'forward', forward)
Collaborator

What I mean is: don't touch paddle.nn.Softmax at all. What does this have to do with paddle.nn.Softmax? Why modify other APIs at all?

Contributor Author

Fixed.

@zhwesky2010 zhwesky2010 merged commit 3c73e2e into PaddlePaddle:master Nov 27, 2024
7 checks passed