Question: Inference alters range & rotation #1287

Open
KTong821 opened this issue Mar 3, 2022 · 19 comments

@KTong821

KTong821 commented Mar 3, 2022

The demo code for LiDAR-only inference (demo/pcd_demo.py, run with the command flags suggested in the README demo section) yields an output that appears to rotate the original PCD and keeps only half the points (those in front of/behind the car). Wondering if there's a way to get the bounding boxes on the original data, for the full point cloud.

Many thanks.

@ZCMax
Collaborator

ZCMax commented Mar 4, 2022

I don't think the PCD is rotated during inference in pcd_demo.py. If you want to keep the full point cloud, you can adjust `point_cloud_range` in the config.
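
For example, a minimal sketch of that adjustment (values are illustrative, not a tuned recommendation; the default range in the KITTI 3-class SECOND config is [0, -40, -3, 70.4, 40, 1], which only covers points in front of the car):

```python
# In the model config, e.g. configs/second/hv_second_secfpn_6x8_80e_kitti-3d-3class.py.
# Widen the x-range so points behind the sensor are kept as well.
point_cloud_range = [-70.4, -40, -3, 70.4, 40, 1]
```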

ZCMax closed this as completed Mar 4, 2022
ZCMax reopened this Mar 4, 2022
@KTong821
Author

KTong821 commented Mar 4, 2022

Thanks @ZCMax. I resolved the PCD rotation issue by skipping the LiDAR --> Depth coordinate mode change in `show_result_meshlab` (I use open3d). The range was successfully adjusted using `point_cloud_range`. After this, however, the bounding boxes appear in free space (no overlap with any points). Are there transformations applied to the bounding boxes prior to `show_result_meshlab`? I use the `result` and `data` obtained directly from `inference_detector`. Thank you.
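
For what it's worth, a hedged sketch of drawing the LiDAR-frame predictions directly with open3d, skipping the LiDAR --> Depth conversion. The (x, y, z, dx, dy, dz, yaw) layout and bottom-center origin assumed here follow mmdet3d's LiDARInstance3DBoxes convention; verify against your version:

```python
import numpy as np
import open3d as o3d

def show_lidar_result(points, boxes):
    """points: (N, >=3) array; boxes: (M, 7) array in LiDAR coordinates."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points[:, :3])
    geoms = [pcd]
    for x, y, z, dx, dy, dz, yaw in boxes:
        rot = o3d.geometry.get_rotation_matrix_from_axis_angle(
            np.array([0.0, 0.0, yaw]))
        # z is the box bottom in mmdet3d's convention, so lift by dz / 2.
        obb = o3d.geometry.OrientedBoundingBox(
            center=[x, y, z + dz / 2], R=rot, extent=[dx, dy, dz])
        geoms.append(obb)
    o3d.visualization.draw_geometries(geoms)
```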

@KTong821
Author

KTong821 commented Mar 5, 2022

For example, running the demo from getting_started.md but changing the `point_cloud_range` in configs/second/hv_second_secfpn_6x8_80e_kitti-3d-3class.py to [-20, -40, -3, 70.4, 40, 1] is sufficient to see a shift in the prediction boxes. It seems this adjustment is not reflected in the predictions. Is there a way to adjust this in the config or elsewhere in the code, or does post-prediction processing need to be done? Thanks.
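
One possibility worth checking (an assumption, not confirmed in this thread): in the SECOND KITTI configs the detection range is also hard-coded into the voxel layer and the anchor generator, so editing `point_cloud_range` alone can leave them inconsistent. A sketch of the keys that would need to stay in sync (values are the defaults as I recall them; verify against your checkout):

```python
# Range-dependent settings in configs/second/hv_second_secfpn_6x8_80e_kitti-3d-3class.py
point_cloud_range = [0, -40, -3, 70.4, 40, 1]   # used by the data pipeline and voxel layer
voxel_size = [0.05, 0.05, 0.1]                  # the grid must tile the range evenly
# The anchor generator repeats the range per class (z varies by class):
anchor_ranges = [
    [0, -40.0, -0.6, 70.4, 40.0, -0.6],    # pedestrian (assumed)
    [0, -40.0, -0.6, 70.4, 40.0, -0.6],    # cyclist (assumed)
    [0, -40.0, -1.78, 70.4, 40.0, -1.78],  # car (assumed)
]
```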

@fabianwindbacher

I encountered an issue that I think may be related to this.

I ran the demo script as described on the GETTING STARTED page, on the included example KITTI cloud.

See the reference cam image:

[image: reference camera frame kitti_000008]

Now, without any changes, I get the following results:
[image: unmodified demo detection results]

Note how the cars are captured very well - but the orientation is consistently off.

When I swap the x-dimension and y-dimension (i.e. indices 3 and 4) of the prediction bounding boxes (`pred_bboxes`) before the depth-cam conversion, I get the expected results:

[image: detection results after swapping the dimensions]
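
A one-line sketch of that swap, assuming `pred_bboxes` is an (N, 7) numpy array laid out as (x, y, z, dx, dy, dz, yaw):

```python
# Swap the x and y extents (indices 3 and 4) before the depth-cam conversion.
pred_bboxes[:, [3, 4]] = pred_bboxes[:, [4, 3]]
```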

My execution command:
python demo/pcd_demo.py demo/data/kitti/kitti_000008.bin configs/second/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class.py checkpoints/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class_20200925_110059-05f67bdf.pth --out-dir .. --show

It does not seem to be a visualization issue, debugging shows that the bounding boxes are already misspecified before visualizing them.

Is there something that I misunderstand or misuse? Or is this a bug?

@hengjiUSTC

Having the same issue. Is this because we run the demo with the newest master while the model was trained on an older version?

[image: demo output]

@hengjiUSTC

hengjiUSTC commented Mar 22, 2022

@fabianwindbacher Where did you swap the x-dimension and y-dimension?

@fabianwindbacher

I did it here.

@Tai-Wang
Member

Do you use the latest master or v1.0.0rc0? We have not finished the model update after refactoring the coordinate systems; you can train models by yourself to test it again. We are preparing all the updated models and will update them ASAP. Sorry for the inconvenience caused.

@hengjiUSTC

hengjiUSTC commented Mar 23, 2022

> Do you use the latest master or v1.0.0rc0? We have not finished the model update after refactoring the coordinate systems; you can train models by yourself to test it again. We are preparing all the updated models and will update them ASAP. Sorry for the inconvenience caused.

We use the latest master, thanks for updating.

@ghost

ghost commented Apr 7, 2022

> Having the same issue. Is this because we run the demo with the newest master while the model was trained on an older version?
>
> [image: demo output]

Hey, have you solved the issue? I'm facing the same issue as well.

@Tai-Wang
Member

Some pretrained models have been updated. Please check them in #1369 and try to reproduce the demo with the updated models. Looking forward to your feedback.

@Zhangyongtao123

Zhangyongtao123 commented Apr 10, 2022

> > Having the same issue. Is this because we run the demo with the newest master while the model was trained on an older version?
> > [image: demo output]
>
> Hey, have you solved the issue? I'm facing the same issue as well.

Have you solved the issue?
I found there may be a bug when converting `pred_bbox` from the LiDAR coordinate system to the Depth coordinate system. Just modify this line:
`yaw = yaw + np.pi / 2`
to
`yaw = -yaw + np.pi / 2`
and it will fix the bug.
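
A sketch of the suggested change in context; my assumption is that this conversion lives in the LiDAR-to-Depth branch of `Box3DMode.convert` in mmdet3d (the comment above links to the specific line):

```python
import numpy as np

def lidar_yaw_to_depth_yaw(yaw):
    # Original line (reported buggy): yaw + np.pi / 2
    # Suggested fix: negate the yaw first, presumably because the axis
    # swap between the LiDAR and Depth frames mirrors the rotation direction.
    return -yaw + np.pi / 2
```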

@reynoldscem

Also appears to affect the groupfree3d model.

@SakuraRiven

SakuraRiven commented Jul 14, 2022

> > > Having the same issue. Is this because we run the demo with the newest master while the model was trained on an older version?
> > > [image: demo output]
> >
> > Hey, have you solved the issue? I'm facing the same issue as well.
>
> Have you solved the issue? I found there may be a bug when converting `pred_bbox` from the LiDAR coordinate system to the Depth coordinate system. Just modify this line: `yaw = yaw + np.pi / 2` to `yaw = -yaw + np.pi / 2` and it will fix the bug.

@Zhangyongtao123 Hi, I also ran into the same situation. Would you mind explaining the reason for changing the yaw? The predictions are supposed to be in the same coordinates as the input points, i.e., LiDAR mode. So why do we have to perform the extra mode conversion?

@Tai-Wang Hi, are we planning to address this yaw bug?

@shanmo
Contributor

shanmo commented Jul 18, 2022

As of Sep 17, I am using https://download.openmmlab.com/mmdetection3d/v0.1.0_models/second/hv_second_secfpn_6x8_80e_kitti-3d-car/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238-393f000c.pth, and for me `yaw = -yaw + np.pi / 2` does not work:

[image: result with yaw = -yaw + np.pi / 2]

I need to update this line to `yaw = -yaw` instead:

[image: result with yaw = -yaw]
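
Putting this together with the earlier suggestion, a hedged sketch of what appears to be a checkpoint-dependent correction; which branch applies presumably depends on the coordinate convention the model was trained with, and neither is confirmed as an official fix:

```python
import numpy as np

def correct_yaw(yaw, pre_refactor_checkpoint=False):
    """Hypothetical helper combining the two workarounds in this thread."""
    if pre_refactor_checkpoint:
        # Reported above to work for the v0.1.0 pretrained SECOND model.
        return -yaw
    # Reported earlier in the thread for newer checkpoints.
    return -yaw + np.pi / 2
```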

@Tai-Wang
Member

Tai-Wang commented Aug 3, 2022

Sorry for the late reply. @ZCMax, please have a check and fix the bug if necessary.

@SniperZhao

The bug is still here.

@jafekb

jafekb commented Mar 20, 2023

Bumping this as I'm seeing the same behavior.

@VeeranjaneyuluToka

VeeranjaneyuluToka commented Jul 18, 2024

Wondering if there is any conclusion on this; I am also getting roughly the same behavior when I run inference. Is it a problem in the visualization, or do the predictions themselves have the wrong orientation? I tried both of the above solutions; they did not help.
