Dear VeriNet Team,

I hope this message finds you well. I am currently using VeriNet to analyze an ONNX model converted from a PyTorch model. The model is a simple neural network whose forward method returns a tuple of two tensors, action and value.

When I run the model through VeriNet, it appears to interpret the model's output as a list of None values rather than a tuple of tensors. This causes failures when VeriNet tries to create an Objective instance and perform robustness analysis, since it expects the model's output size to be non-zero.
Incidentally, I trained the model using Proximal Policy Optimization (PPO). The model expects two separate inputs (lidar and state), and the input dimension is 340.
Here is the relevant code from my model's forward method:
```python
def forward(self, x):
    policy = self.policy_net(x)
    value = self.value_net(x)
    action = self.action_layer(policy)
    value = self.value_layer(value)
    return action, value
```
And here is the output I'm seeing when I run the model through VeriNet:
```
Outputs: [None, None, None, None, None, None, None, None, None, None]
Outputs are None or not tensors
The output size of the model is: 0
Cannot create Objective instance because output_size is 0
Cannot perform robustness analysis because output_size is 0
Cannot check robustness because output_size is 0
```
I have verified that the model's forward method is working correctly and returning a tuple of tensors when run outside of VeriNet. I have also checked the model's state and the inputs it's receiving, and everything seems to be in order.
I would appreciate any guidance you could provide on this issue. Is there a specific way I should be structuring the output of my model to ensure compatibility with VeriNet? Or could this be an issue with how VeriNet is handling the output of ONNX models?
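One workaround I am considering, in case VeriNet expects a single output tensor, is wrapping the model so the tuple is concatenated into one tensor before export. This is only a sketch; SingleOutputWrapper is my own name for the idea, not a VeriNet API:

```python
import torch
import torch.nn as nn

class SingleOutputWrapper(nn.Module):
    """Wraps a model whose forward returns (action, value) and
    concatenates the two tensors along dim 1, so tools that expect
    a single output tensor can consume it."""
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        action, value = self.model(x)
        return torch.cat([action, value], dim=1)
```

Exporting the wrapped model instead of the original would then produce an ONNX graph with a single output whose size is the sum of the action and value widths.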
Thank you for your time and assistance.
Best regards,