
[Feature Request] Vector output from TensorNet #297

Open
shenoynikhil opened this issue Feb 29, 2024 · 2 comments · May be fixed by #301

shenoynikhil commented Feb 29, 2024

Feature Request

Currently the vector output is set to None. It could easily be computed from the skew-symmetric part of the tensor features, following the paper. Some papers suggest predicting forces with equivariant heads, and a vector output would be required for that if TensorNet is used as a representation module.

We can add the following function in order to do it,

import torch

def skew_tensor_to_vector(tensor):
    """Extract the axial vector from a skew-symmetric tensor.

    Based on Equation (3) in the paper. Transforms a tensor of shape
    (num_atoms, hidden_channels, 3, 3) into (num_atoms, 3, hidden_channels).
    """
    return torch.stack(
        (tensor[:, :, 1, 2], tensor[:, :, 2, 0], tensor[:, :, 0, 1]), dim=-1
    ).transpose(1, 2)
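As a sanity check, building a skew-symmetric tensor from channel-wise vectors and extracting them back should round-trip. A minimal sketch (`vector_to_skew_tensor` here is a throwaway helper written for illustration, not TensorNet code):

```python
import torch

def skew_tensor_to_vector(tensor):
    """(num_atoms, hidden_channels, 3, 3) skew-symmetric -> (num_atoms, 3, hidden_channels)."""
    return torch.stack(
        (tensor[:, :, 1, 2], tensor[:, :, 2, 0], tensor[:, :, 0, 1]), dim=-1
    ).transpose(1, 2)

def vector_to_skew_tensor(v):
    """Illustrative inverse: (num_atoms, 3, hidden_channels) -> (num_atoms, hidden_channels, 3, 3)."""
    vx, vy, vz = v[:, 0], v[:, 1], v[:, 2]   # each (num_atoms, hidden_channels)
    zero = torch.zeros_like(vx)
    rows = [
        torch.stack((zero, vz, -vy), dim=-1),
        torch.stack((-vz, zero, vx), dim=-1),
        torch.stack((vy, -vx, zero), dim=-1),
    ]
    return torch.stack(rows, dim=-2)

v = torch.randn(5, 3, 16)
A = vector_to_skew_tensor(v)
assert torch.allclose(A, -A.transpose(-1, -2))   # A is skew-symmetric
assert torch.allclose(skew_tensor_to_vector(A), v)  # round trip recovers v
```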

And in the line

return x, None, z, pos, batch

do,

v = skew_tensor_to_vector(A) # (num_atoms, 3, hidden_channels)
return x, v, z, pos, batch

It might be useful if someone wants to use an EquivariantScalar or EquivariantVectorOutput module on top of TensorNet.

I tested for equivariance with the EquivariantVectorOutput module and it works out.

[Screenshot: equivariance check passing]

Would this be right? If yes, I can also add a test.

@shenoynikhil shenoynikhil changed the title Vector output from TensorNet [Feature Request] Vector output from TensorNet Feb 29, 2024
guillemsimeon (Collaborator) commented Mar 1, 2024

Hi! Thanks a lot for your interest, I can see that you have gone through the TensorNet paper in depth!

Regarding the new feature, you got it absolutely right. In fact, we think it is an enhancement that will make TensorNet's vector features easier to use for users who are less familiar with the formalism. We therefore suggest you open a PR. A few points:

  1. I am currently not sure whether vector features need to be [..., 3, hidden_channels] or the other way around at that point in the TorchMD_Net full model (the output of the representation model). I imagine you checked, and that both EquivariantVectorOutput and EquivariantScalar expect vectors to have that shape; otherwise it would not work.

  2. In any case, I would remove the transpose from the skewtensor_to_vector function, and I would apply it after getting v. That is:

   v = skewtensor_to_vector(A)
   v = v.transpose(-1,-2)

I think this is more consistent with the way we perform vector_to_skewtensor: [..., whatever, 3] to [..., whatever, 3, 3].

  3. Take this into account:

    output_prefix = "Equivariant" if is_equivariant else ""

    TensorNet is set to is_equivariant = False to avoid that prefix and use the Scalar output, not the EquivariantScalar one. This means that if you want to build a full model with create_model, you have to specify the full name 'EquivariantScalar' as the output model. (I should also mention that I have not run any performance tests with EquivariantScalar.)

  4. I am not sure I completely understood your equivariance test. EquivariantVectorOutput returns just one vector per atom. Do you rename these vectors as 'forces', meaning that compute_forces = True is a direct prediction of forces (without autograd)?
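The transpose refactor suggested above can be sketched as follows (a minimal illustration with hypothetical shapes, not the actual TensorNet code):

```python
import torch

def skewtensor_to_vector(tensor):
    """(..., 3, 3) skew-symmetric -> (..., 3); no transpose, mirroring vector_to_skewtensor."""
    return torch.stack(
        (tensor[..., 1, 2], tensor[..., 2, 0], tensor[..., 0, 1]), dim=-1
    )

# Hypothetical skew-symmetric features: (num_atoms, hidden_channels, 3, 3)
A = torch.randn(5, 16, 3, 3)
A = A - A.transpose(-1, -2)

v = skewtensor_to_vector(A)   # (num_atoms, hidden_channels, 3)
v = v.transpose(-1, -2)       # (num_atoms, 3, hidden_channels)
assert v.shape == (5, 3, 16)
```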

Thanks again, feel free to open the PR.

Guillem

@shenoynikhil
Copy link
Author

shenoynikhil commented Mar 1, 2024

I am not sure if I understood completely your equivariance test. EquivariantVectorOutput returns just a vector per atom. Do you rename these vectors as 'forces', meaning that compute_forces = True is direct prediction of forces (without autograd)?

Sorry for not clarifying this. I raised this issue because some papers choose to train by predicting an equivariant vector as the forces (instead of taking the autograd of the energy with respect to positions). This is computationally faster, so you can do it during pretraining and then use the autograd-based loss during fine-tuning (reference: section 4 of https://arxiv.org/pdf/2310.16802.pdf).
My equivariance test was something like,

rot = ...  # rotation matrix
energy, forces = net(atomic_numbers, positions, batch)
energy_rot, forces_rot = net(atomic_numbers, positions @ rot, batch)
assert torch.allclose(forces @ rot, forces_rot)  # forces are equivariant
assert torch.allclose(energy, energy_rot)  # energy is invariant
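The same property can also be checked on the vector extraction itself, without a trained network. A self-contained sketch (the rotation matrix is built from a QR decomposition; skew_tensor_to_vector is reproduced from the snippet above): for a skew-symmetric A and a proper rotation R, the vector extracted from R A R^T equals R applied to the vector extracted from A.

```python
import torch

torch.manual_seed(0)

def skew_tensor_to_vector(tensor):
    """(num_atoms, hidden_channels, 3, 3) skew-symmetric -> (num_atoms, 3, hidden_channels)."""
    return torch.stack(
        (tensor[:, :, 1, 2], tensor[:, :, 2, 0], tensor[:, :, 0, 1]), dim=-1
    ).transpose(1, 2)

def random_rotation():
    """Proper rotation (det = +1) from the QR decomposition of a random matrix."""
    q, r = torch.linalg.qr(torch.randn(3, 3, dtype=torch.float64))
    q = q * torch.sign(torch.diagonal(r))  # fix column signs
    if torch.det(q) < 0:                   # ensure a proper rotation
        q[:, 0] = -q[:, 0]
    return q

rot = random_rotation()
M = torch.randn(4, 8, 3, 3, dtype=torch.float64)
A = M - M.transpose(-1, -2)        # skew-symmetric features

v = skew_tensor_to_vector(A)       # (4, 3, 8)
A_rot = rot @ A @ rot.T            # rotate the tensor features
v_rot = skew_tensor_to_vector(A_rot)

# axial vectors are equivariant: v(R A R^T) = R v(A)
assert torch.allclose(v_rot, torch.einsum('ij,njc->nic', rot, v), atol=1e-10)
```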

Let me start a PR.

@shenoynikhil shenoynikhil linked a pull request Mar 2, 2024 that will close this issue