Added vector output from skew-symmetric tensor #301
base: main
Conversation
Thanks a lot! Can you add the equivariance test, please? I understood what you meant in the issue (it just confused me to see them denoted as 'forces' without further specification :) ) |
I should be able to add it today. Sorry for the delay. |
Don’t rush! Any day will be fine.
Thank you!
|
Added an equivariance test. |
Hey! Sorry for the delay. Revisiting this: your approach seems correct @shenoynikhil, however I would not proceed by adding a new model called equivariant-tensornet.

This makes me think, @RaulPPelaez, that we should get rid of the is_equivariant flag and directly modify the places where the ET is used (the yaml example files, for instance) to include EquivariantScalar, instead of Scalar, as the output model there. What do you think? I don't know how many things this change would impact, but in any case I think it is worth doing; the flag does not make much sense at this point...

As for this PR, I would say it is fine if TensorNet defaults to returning x, v instead of x, None. Even though TensorNet uses Scalar (not EquivariantScalar) for energy predictions, I have the impression that nothing would break. As I said above, v comes from A, which has already been used by learnable parameters to generate x. With Scalar, no new learnable modules would act independently on v, and therefore we should not get an error about trainable parameters being unused in the forward. What do you think guys? |
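The point that v comes directly from the skew-symmetric A can be made concrete. Below is a minimal NumPy sketch (the function name and sign convention are illustrative assumptions, not the torchmd-net implementation): a 3x3 skew-symmetric tensor has exactly three independent components, which form the axial vector the PR exposes.

```python
import numpy as np

def vector_from_skew(A: np.ndarray) -> np.ndarray:
    """Read off the vector dual to a skew-symmetric matrix.

    Illustrative helper (not library code): A has shape (..., 3, 3) with
    A = -A^T, and the returned v has shape (..., 3) such that
    A @ u == cross(v, u) for any vector u.
    """
    return np.stack((A[..., 2, 1], A[..., 0, 2], A[..., 1, 0]), axis=-1)
```

With this convention A @ u equals cross(v, u) for every u, so returning v exposes the full content of the skew-symmetric part without adding new learnable parameters.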
torchmdnet/models/tensornet.py
Outdated
@@ -120,6 +124,7 @@ class TensorNet(nn.Module): | |||
(default: :obj:`True`) | |||
check_errors (bool, optional): Whether to check for errors in the distance module. | |||
(default: :obj:`True`) | |||
vector_output (bool, optional): Whether to return |
This sentence is cut off midway, it seems.
I agree because it kind of implies that there is a non-equivariant tensornet.
I do not like this because, even if autograd eats it, it is extra unnecessary work. I believe gating the functionality behind a user-provided function or a model name is best in this case.
I agree with this, but I believe it is out of the scope of this PR. It would be great if you open an issue so we have it in our list... |
Allowing vector output from TensorNet. This will allow the use of EquivariantScalar or EquivariantVectorOutput from TensorNet. Resolves #297.
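Since the thread asks for an equivariance test, here is a hedged sketch of what such a test can look like. A NumPy toy model stands in for TensorNet (everything here is an illustrative assumption, not the actual test in the PR): rotating the input positions should leave the scalar output x invariant and rotate the vector output v.

```python
import numpy as np

def toy_model(pos):
    """Toy stand-in for a model returning (scalars, vectors) per atom."""
    diff = pos[None, :, :] - pos[:, None, :]      # (N, N, 3) relative positions
    dist = np.linalg.norm(diff, axis=-1)          # (N, N) pairwise distances
    x = dist.sum(axis=1)                          # (N,)   rotation-invariant scalars
    v = (diff * dist[..., None]).sum(axis=1)      # (N, 3) rotation-equivariant vectors
    return x, v

def random_rotation(rng):
    """Random proper rotation via QR decomposition, det fixed to +1."""
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]
    return q

rng = np.random.default_rng(0)
pos = rng.standard_normal((5, 3))
R = random_rotation(rng)

x, v = toy_model(pos)
x_rot, v_rot = toy_model(pos @ R.T)               # rotate every position by R

assert np.allclose(x, x_rot)                      # scalars are invariant
assert np.allclose(v @ R.T, v_rot)                # vectors rotate with the input
```

The same two assertions, run against TensorNet's (x, v) outputs instead of the toy model, are the essence of the equivariance test requested above.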