
Runtime error in data loader due to variable size of tensors #2

Open
praj441 opened this issue Nov 24, 2021 · 7 comments


@praj441

praj441 commented Nov 24, 2021

When trying to run train.py on my custom dataset, I am getting the following error:

"RuntimeError: stack expects each tensor to be equal size, but got [4, 1000, 3] at entry 0 and [2, 1000, 3] at entry 1"

The data loader defined in data_utils.py outputs variable-size tensors, because points_to_voxel_second() returns a variable number of voxels per point cloud.

Can you suggest a solution for this?
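A minimal workaround sketch for the stacking error, assuming each dataset item is a single tensor of shape [num_voxels, points_per_voxel, 3] and a standard PyTorch DataLoader; the name voxel_collate_fn is hypothetical, not something from this repo:

```python
import torch
from torch.utils.data import DataLoader

def voxel_collate_fn(batch):
    # The first dimension (number of voxels) differs between samples, so the
    # default collate's torch.stack() fails. Zero-pad every sample to the
    # largest voxel count in the batch before stacking.
    max_voxels = max(sample.shape[0] for sample in batch)
    padded = []
    for sample in batch:
        pad_rows = max_voxels - sample.shape[0]
        if pad_rows > 0:
            pad = sample.new_zeros((pad_rows, *sample.shape[1:]))
            sample = torch.cat([sample, pad], dim=0)
        padded.append(sample)
    return torch.stack(padded, dim=0)

# loader = DataLoader(my_dataset, batch_size=2, collate_fn=voxel_collate_fn)
```

Whether padding (versus keeping only the overlapping voxels, as discussed below) is appropriate depends on how the downstream Jacobian computation consumes the voxels.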

@Jarrome

Jarrome commented Nov 24, 2021

In the 3DMatch data preprocessing of their experiment (data_utils.py), voxelization is applied to well-aligned point clouds, which means the two clouds are guaranteed to have the same number of voxels after overlap detection.

Please check data_utils.py: lines 93-97 voxelize the well-aligned point clouds, and lines 121-125 transform p1. In other words, the registration result is known before registering.

So apparently in your case voxelization is not applied to well-aligned point clouds, which produces this problem.

This might be a bug; I also emailed the author and am waiting for a reply.
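A rough sketch of the preprocessing flow described above, assuming p0 and p1 are already registered, x is a 4x4 ground-truth pose, and voxelize_aligned_pair stands in for the repo's points_to_voxel_second-based code (these names are hypothetical):

```python
import numpy as np

def preprocess_pair(p0_aligned, p1_aligned, x, voxelize_aligned_pair):
    # Voxelize while the two clouds are still registered, so overlap
    # detection yields matching voxels (and matching voxel counts) for both.
    voxels0, voxels1 = voxelize_aligned_pair(p0_aligned, p1_aligned)

    # Only afterwards is p1 moved away with the ground-truth pose x,
    # turning the pair back into a registration problem for the network.
    p1_h = np.hstack([p1_aligned, np.ones((p1_aligned.shape[0], 1))])
    p1_unaligned = (x @ p1_h.T).T[:, :3]
    return voxels0, voxels1, p1_unaligned
```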

@praj441
Author

praj441 commented Nov 25, 2021

Thanks, @Jarrome, for your insights. What I understood is that the 3DMatch point cloud data is already well-aligned, while in my case it might not be. Can you please clarify what "well-aligned point cloud data" means?

@Jarrome

Jarrome commented Nov 25, 2021

Hi, @praj441. I mean they are registered before voxelization.

From Xueqian Li's email reply, if I understand correctly, you may need another global registration step before voxelization. Please check whether it works when combined with some RANSAC-based registration.

But I keep asking, because in the 3DMatch test the cloud is transformed back with the ground-truth pose "x", so it is not a refinement setting.
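A minimal sketch of such a RANSAC-based global pre-alignment using Open3D (not part of this repo), assuming Open3D >= 0.12; the voxel size and thresholds are assumptions to be tuned for your data:

```python
import open3d as o3d

def global_ransac_registration(source, target, voxel_size=0.05):
    # Downsample, estimate normals, and compute FPFH features.
    def preprocess(pcd):
        down = pcd.voxel_down_sample(voxel_size)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 2, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 5, max_nn=100))
        return down, fpfh

    src_down, src_fpfh = preprocess(source)
    tgt_down, tgt_fpfh = preprocess(target)
    dist = voxel_size * 1.5
    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src_down, tgt_down, src_fpfh, tgt_fpfh, True, dist,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnEdgeLength(0.9),
         o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(dist)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    return result.transformation  # 4x4 coarse alignment to apply before voxelization
```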

@Lilac-Lee
Owner

Lilac-Lee commented Nov 25, 2021

Hey @praj441, you can try: 1. finding the overlapping area between the point clouds, since we only need to compute features for the overlapping areas (see the sketch below); 2. after finding the overlapping area, you can output the voxel index from the points_to_voxel_second() function and try to find the corresponding voxels. Another reminder: you should not use voxelization during training.

And thanks for @Jarrome's reply. Please bear in mind that our method is still a local registration method that needs some overlap between the source and the target point clouds, so depending on your application you may want to find a global estimation first.
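A minimal sketch of step 1, assuming the two clouds are already coarsely aligned; the radius threshold and the find_overlap name are illustrative, not part of this repo:

```python
import numpy as np
from scipy.spatial import cKDTree

def find_overlap(p0, p1, radius=0.1):
    # Keep only points that have a neighbor in the other cloud within `radius`;
    # features (and voxels) are then computed on these overlapping subsets only.
    tree0, tree1 = cKDTree(p0), cKDTree(p1)
    d0, _ = tree1.query(p0, k=1)
    d1, _ = tree0.query(p1, k=1)
    return p0[d0 < radius], p1[d1 < radius]
```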

@praj441
Author

praj441 commented Nov 26, 2021

Thanks, @Lilac-Lee and @Jarrome, for the helpful comments. I will try to implement it accordingly.

@Jarrome

Jarrome commented Nov 26, 2021

For the synthetic data there is no problem, because the zero-mean step uses the point cloud's own mean, which only needs the point cloud itself. But in the 3DMatch test, the zero-mean step uses the voxel mean, and this voxel mean is computed from the two aligned point clouds. This is the difference, and that's why I say the bug is only in the voxelization.

Regarding "then the p1 is transformed back with a pose "x" for registration": for coarse-to-fine registration the flow is P ---T1---> P^{1} ---T2---> P^{2}.
But in the 3DMatch test it is P ---T1---> P^{1} ---x---> P ---PointNetLK_Revisited---> P^{2}.
T1 is essentially a global registration result and P^{1} is a well-aligned point cloud; then with "x" you move it back to the non-aligned case P. This happens in the data preprocessing part.

I also checked: with voxel_zero_mean=True and voxel=1 on the 3DMatch test, it looks like PointNetLK_Revisited merely solves for rotation during the registration. Please check!
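A tiny sketch of the contrast being described, treating the voxelized clouds as simple [N, 3] arrays of voxel points (hypothetical helpers, not the repo's code):

```python
import numpy as np

def zero_mean_per_cloud(points):
    # Synthetic setting: each cloud is centered with its own mean,
    # which needs nothing but the cloud itself.
    return points - points.mean(axis=0, keepdims=True)

def zero_mean_shared(voxels_p0, voxels_p1):
    # 3DMatch test setting as described above: the mean is computed from the
    # voxels of the two *aligned* clouds together, so it implicitly carries
    # information about the ground-truth alignment.
    shared_mean = np.concatenate([voxels_p0, voxels_p1], axis=0).mean(axis=0, keepdims=True)
    return voxels_p0 - shared_mean, voxels_p1 - shared_mean
```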

@Lilac-Lee
Owner

Hi @Jarrome, zero_mean is required by our method. The voxelization part is analogous to the synthetic part; voxel_mean is required when computing the global Jacobian.
Our method solves for both rotation and translation, and the estimated translation compensates for the subtracted mean.

Also, you can open a new issue, or we can continue discussing this through email. Thanks.
