centerpoint_pillar pretrained model error #1592

Open

iloveai8086 opened this issue Jun 28, 2022 · 11 comments

@iloveai8086

Hello,
When I run the following command:

python demo/pcd_demo.py --pcd demo/data/nuscenes/1.bin --config configs/centerpoint/centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus.py --checkpoint checkpoints/centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus_20210816_064624-0f3299c0.pth --show

The model was downloaded from the model link in configs/centerpoint.

Running the above command produces the following result:
load checkpoint from local path: checkpoints/centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus_20210816_064624-0f3299c0.pth
The model and loaded state dict do not match exactly
size mismatch for pts_voxel_encoder.pfn_layers.0.linear.weight: copying a param with shape torch.Size([64, 10]) from checkpoint, the shape in current model is torch.Size([64, 11])

My environment:
mmcls 0.23.1
mmcv-full 1.5.3
mmdeploy 0.5.0
mmdet 2.25.0
mmdet3d 1.0.0rc3
mmsegmentation 0.25.0

Is this a bug, or has the CenterPoint pillar model simply not been updated?
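
For reference, the mismatch can be confirmed directly from the checkpoint file with plain PyTorch. A minimal sketch (only torch is needed; the path is the one from the command above):

```python
# Inspect the shapes stored in the checkpoint to confirm the reported
# mismatch; no mmdet3d import is required.
import torch

ckpt = torch.load(
    'checkpoints/centerpoint_02pillar_second_secfpn_circlenms_4x8_cyclic_20e_nus_20210816_064624-0f3299c0.pth',
    map_location='cpu')
state_dict = ckpt.get('state_dict', ckpt)

# The PFN linear layer's input width equals the number of decorated
# per-point features, so [64, 10] vs. [64, 11] means the current config
# decorates each point with one more feature than this checkpoint was
# trained with.
weight = state_dict['pts_voxel_encoder.pfn_layers.0.linear.weight']
print(weight.shape)  # torch.Size([64, 10]) for this checkpoint
```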

@Tai-Wang (Member) commented Jul 3, 2022

@ZCMax Please check this problem. It may be another issue related to the previous KITTI performance upgrade of PointPillars.

@hlhzau commented Jul 21, 2022

I also encountered this problem. What caused it? Has it been solved?

@rkotimi commented Jul 25, 2022

@ZCMax Hello! When will this problem be solved? I need the pretrained model.

@ZCMax (Collaborator) commented Jul 25, 2022

> @ZCMax Hello! When will this problem be solved? I need the pretrained model.

This checkpoint will be provided in one or two days, and the other CenterPoint checkpoints will be provided this week.

@ZCMax (Collaborator) commented Jul 25, 2022

> @ZCMax Hello! When will this problem be solved? I need the pretrained model.

Sorry for the inconvenience caused by the model checkpoint problem.

@rkotimi commented Jul 25, 2022

> > @ZCMax Hello! When will this problem be solved? I need the pretrained model.
>
> Sorry for the inconvenience caused by the model checkpoint problem.

Thank you for your reply. I will wait for the new checkpoint.

@rkotimi commented Jul 29, 2022

> > @ZCMax Hello! When will this problem be solved? I need the pretrained model.
>
> This checkpoint will be provided in one or two days, and the other CenterPoint checkpoints will be provided this week.

@ZCMax Hello! I wonder if the new checkpoints are available. It seems the checkpoint links have not been updated on this page.

@hlhzau commented Jul 31, 2022

@ZCMax I wonder if the new checkpoints are available yet.

@ZCMax (Collaborator) commented Aug 1, 2022

I've prepared a pretrained model of centerpoint_02pillar_second_secfpn_4x8_cyclic_20e_nus at a temporary link:

link: https://pan.baidu.com/s/1u1dS6XPbzhvrMNfquuoeuA?pwd=g5qi
password: g5qi
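
If you download it, a quick way to verify it matches the pillar config is to build the model and load the file with strict=True, so any remaining shape mismatch raises an error instead of only printing a warning. A minimal sketch, assuming mmdet3d 1.0.0rc3 / mmcv 1.x as in the reporter's environment (the local filename below is hypothetical):

```python
# Build the model from the pillar config and strictly load the downloaded
# checkpoint; a clean run means all parameter shapes match.
from mmcv import Config
from mmcv.runner import load_checkpoint
from mmdet3d.models import build_model

cfg = Config.fromfile(
    'configs/centerpoint/centerpoint_02pillar_second_secfpn_4x8_cyclic_20e_nus.py')
model = build_model(cfg.model)

# Hypothetical local filename for the file fetched from the Baidu link.
load_checkpoint(model, 'checkpoints/centerpoint_02pillar_baidu.pth',
                map_location='cpu', strict=True)
print('Checkpoint matches the config.')
```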

@rahuja123

Is there a non-Baidu link?

@jaan1729 (Contributor) commented Oct 1, 2023

I tried to use the pretrained model from the documentation for centerpoint_pillar02_second_secfpn_8xb4-cyclic-20e_nus-3d.py to test CenterPoint with the following command:

python tools/test.py --task lidar_det configs/centerpoint/centerpoint_pillar02_second_secfpn_8xb4-cyclic-20e_nus-3d.py checkpoints/centerpoint/centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus_20220810_030004-9061688e.pth --show --show-dir ./data/centerpoint/show_results

I'm facing a weight-mismatch issue with the following log:

Loads checkpoint by local backend from path: checkpoints/centerpoint/centerpoint_01voxel_second_secfpn_circlenms_4x8_cyclic_20e_nus_20220810_030004-9061688e.pth

The model and loaded state dict do not match exactly

size mismatch for pts_backbone.blocks.0.0.weight: copying a param with shape torch.Size([128, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for pts_backbone.blocks.0.1.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for pts_backbone.blocks.0.1.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for pts_backbone.blocks.0.1.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for pts_backbone.blocks.0.1.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for pts_backbone.blocks.0.3.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for pts_backbone.blocks.0.4.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for pts_backbone.blocks.0.4.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for pts_backbone.blocks.0.4.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for pts_backbone.blocks.0.4.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for pts_backbone.blocks.0.6.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for pts_backbone.blocks.0.7.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for pts_backbone.blocks.0.7.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for pts_backbone.blocks.0.7.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for pts_backbone.blocks.0.7.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for pts_backbone.blocks.0.9.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for pts_backbone.blocks.0.10.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for pts_backbone.blocks.0.10.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for pts_backbone.blocks.0.10.running_mean: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for pts_backbone.blocks.0.10.running_var: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for pts_backbone.blocks.1.0.weight: copying a param with shape torch.Size([256, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 64, 3, 3]).
size mismatch for pts_backbone.blocks.1.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_backbone.blocks.1.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_backbone.blocks.1.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_backbone.blocks.1.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_backbone.blocks.1.3.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for pts_backbone.blocks.1.4.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_backbone.blocks.1.4.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_backbone.blocks.1.4.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_backbone.blocks.1.4.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_backbone.blocks.1.6.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for pts_backbone.blocks.1.7.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_backbone.blocks.1.7.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_backbone.blocks.1.7.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_backbone.blocks.1.7.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_backbone.blocks.1.9.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for pts_backbone.blocks.1.10.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_backbone.blocks.1.10.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_backbone.blocks.1.10.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_backbone.blocks.1.10.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_backbone.blocks.1.12.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for pts_backbone.blocks.1.13.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_backbone.blocks.1.13.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_backbone.blocks.1.13.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_backbone.blocks.1.13.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_backbone.blocks.1.15.weight: copying a param with shape torch.Size([256, 256, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for pts_backbone.blocks.1.16.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_backbone.blocks.1.16.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_backbone.blocks.1.16.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_backbone.blocks.1.16.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_neck.deblocks.0.0.weight: copying a param with shape torch.Size([256, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 64, 2, 2]).
size mismatch for pts_neck.deblocks.0.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_neck.deblocks.0.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_neck.deblocks.0.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_neck.deblocks.0.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_neck.deblocks.1.0.weight: copying a param with shape torch.Size([256, 256, 2, 2]) from checkpoint, the shape in current model is torch.Size([128, 128, 1, 1]).
size mismatch for pts_neck.deblocks.1.1.weight: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_neck.deblocks.1.1.bias: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_neck.deblocks.1.1.running_mean: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_neck.deblocks.1.1.running_var: copying a param with shape torch.Size([256]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for pts_bbox_head.shared_conv.conv.weight: copying a param with shape torch.Size([64, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 384, 3, 3]).
unexpected key in source state_dict: pts_middle_encoder.conv_input.0.weight, pts_middle_encoder.conv_input.1.weight, pts_middle_encoder.conv_input.1.bias, pts_middle_encoder.conv_input.1.running_mean, pts_middle_encoder.conv_input.1.running_var, pts_middle_encoder.conv_input.1.num_batches_tracked, pts_middle_encoder.encoder_layers.encoder_layer1.0.conv1.weight, pts_middle_encoder.encoder_layers.encoder_layer1.0.bn1.weight, pts_middle_encoder.encoder_layers.encoder_layer1.0.bn1.bias, pts_middle_encoder.encoder_layers.encoder_layer1.0.bn1.running_mean, pts_middle_encoder.encoder_layers.encoder_layer1.0.bn1.running_var, pts_middle_encoder.encoder_layers.encoder_layer1.0.bn1.num_batches_tracked, pts_middle_encoder.encoder_layers.encoder_layer1.0.conv2.weight, pts_middle_encoder.encoder_layers.encoder_layer1.0.bn2.weight, pts_middle_encoder.encoder_layers.encoder_layer1.0.bn2.bias, pts_middle_encoder.encoder_layers.encoder_layer1.0.bn2.running_mean, pts_middle_encoder.encoder_layers.encoder_layer1.0.bn2.running_var, pts_middle_encoder.encoder_layers.encoder_layer1.0.bn2.num_batches_tracked, pts_middle_encoder.encoder_layers.encoder_layer1.1.conv1.weight, pts_middle_encoder.encoder_layers.encoder_layer1.1.bn1.weight, pts_middle_encoder.encoder_layers.encoder_layer1.1.bn1.bias, pts_middle_encoder.encoder_layers.encoder_layer1.1.bn1.running_mean, pts_middle_encoder.encoder_layers.encoder_layer1.1.bn1.running_var, pts_middle_encoder.encoder_layers.encoder_layer1.1.bn1.num_batches_tracked, pts_middle_encoder.encoder_layers.encoder_layer1.1.conv2.weight, pts_middle_encoder.encoder_layers.encoder_layer1.1.bn2.weight, pts_middle_encoder.encoder_layers.encoder_layer1.1.bn2.bias, pts_middle_encoder.encoder_layers.encoder_layer1.1.bn2.running_mean, pts_middle_encoder.encoder_layers.encoder_layer1.1.bn2.running_var, pts_middle_encoder.encoder_layers.encoder_layer1.1.bn2.num_batches_tracked, pts_middle_encoder.encoder_layers.encoder_layer1.2.0.weight, pts_middle_encoder.encoder_layers.encoder_layer1.2.1.weight, pts_middle_encoder.encoder_layers.encoder_layer1.2.1.bias, pts_middle_encoder.encoder_layers.encoder_layer1.2.1.running_mean, pts_middle_encoder.encoder_layers.encoder_layer1.2.1.running_var, pts_middle_encoder.encoder_layers.encoder_layer1.2.1.num_batches_tracked, pts_middle_encoder.encoder_layers.encoder_layer2.0.conv1.weight, pts_middle_encoder.encoder_layers.encoder_layer2.0.bn1.weight, pts_middle_encoder.encoder_layers.encoder_layer2.0.bn1.bias, pts_middle_encoder.encoder_layers.encoder_layer2.0.bn1.running_mean, pts_middle_encoder.encoder_layers.encoder_layer2.0.bn1.running_var, pts_middle_encoder.encoder_layers.encoder_layer2.0.bn1.num_batches_tracked, pts_middle_encoder.encoder_layers.encoder_layer2.0.conv2.weight, pts_middle_encoder.encoder_layers.encoder_layer2.0.bn2.weight, pts_middle_encoder.encoder_layers.encoder_layer2.0.bn2.bias, pts_middle_encoder.encoder_layers.encoder_layer2.0.bn2.running_mean, pts_middle_encoder.encoder_layers.encoder_layer2.0.bn2.running_var, pts_middle_encoder.encoder_layers.encoder_layer2.0.bn2.num_batches_tracked, pts_middle_encoder.encoder_layers.encoder_layer2.1.conv1.weight, pts_middle_encoder.encoder_layers.encoder_layer2.1.bn1.weight, pts_middle_encoder.encoder_layers.encoder_layer2.1.bn1.bias, pts_middle_encoder.encoder_layers.encoder_layer2.1.bn1.running_mean, pts_middle_encoder.encoder_layers.encoder_layer2.1.bn1.running_var, pts_middle_encoder.encoder_layers.encoder_layer2.1.bn1.num_batches_tracked, 
pts_middle_encoder.encoder_layers.encoder_layer2.1.conv2.weight, pts_middle_encoder.encoder_layers.encoder_layer2.1.bn2.weight, pts_middle_encoder.encoder_layers.encoder_layer2.1.bn2.bias, pts_middle_encoder.encoder_layers.encoder_layer2.1.bn2.running_mean, pts_middle_encoder.encoder_layers.encoder_layer2.1.bn2.running_var, pts_middle_encoder.encoder_layers.encoder_layer2.1.bn2.num_batches_tracked, pts_middle_encoder.encoder_layers.encoder_layer2.2.0.weight, pts_middle_encoder.encoder_layers.encoder_layer2.2.1.weight, pts_middle_encoder.encoder_layers.encoder_layer2.2.1.bias, pts_middle_encoder.encoder_layers.encoder_layer2.2.1.running_mean, pts_middle_encoder.encoder_layers.encoder_layer2.2.1.running_var, pts_middle_encoder.encoder_layers.encoder_layer2.2.1.num_batches_tracked, pts_middle_encoder.encoder_layers.encoder_layer3.0.conv1.weight, pts_middle_encoder.encoder_layers.encoder_layer3.0.bn1.weight, pts_middle_encoder.encoder_layers.encoder_layer3.0.bn1.bias, pts_middle_encoder.encoder_layers.encoder_layer3.0.bn1.running_mean, pts_middle_encoder.encoder_layers.encoder_layer3.0.bn1.running_var, pts_middle_encoder.encoder_layers.encoder_layer3.0.bn1.num_batches_tracked, pts_middle_encoder.encoder_layers.encoder_layer3.0.conv2.weight, pts_middle_encoder.encoder_layers.encoder_layer3.0.bn2.weight, pts_middle_encoder.encoder_layers.encoder_layer3.0.bn2.bias, pts_middle_encoder.encoder_layers.encoder_layer3.0.bn2.running_mean, pts_middle_encoder.encoder_layers.encoder_layer3.0.bn2.running_var, pts_middle_encoder.encoder_layers.encoder_layer3.0.bn2.num_batches_tracked, pts_middle_encoder.encoder_layers.encoder_layer3.1.conv1.weight, pts_middle_encoder.encoder_layers.encoder_layer3.1.bn1.weight, pts_middle_encoder.encoder_layers.encoder_layer3.1.bn1.bias, pts_middle_encoder.encoder_layers.encoder_layer3.1.bn1.running_mean, pts_middle_encoder.encoder_layers.encoder_layer3.1.bn1.running_var, pts_middle_encoder.encoder_layers.encoder_layer3.1.bn1.num_batches_tracked, pts_middle_encoder.encoder_layers.encoder_layer3.1.conv2.weight, pts_middle_encoder.encoder_layers.encoder_layer3.1.bn2.weight, pts_middle_encoder.encoder_layers.encoder_layer3.1.bn2.bias, pts_middle_encoder.encoder_layers.encoder_layer3.1.bn2.running_mean, pts_middle_encoder.encoder_layers.encoder_layer3.1.bn2.running_var, pts_middle_encoder.encoder_layers.encoder_layer3.1.bn2.num_batches_tracked, pts_middle_encoder.encoder_layers.encoder_layer3.2.0.weight, pts_middle_encoder.encoder_layers.encoder_layer3.2.1.weight, pts_middle_encoder.encoder_layers.encoder_layer3.2.1.bias, pts_middle_encoder.encoder_layers.encoder_layer3.2.1.running_mean, pts_middle_encoder.encoder_layers.encoder_layer3.2.1.running_var, pts_middle_encoder.encoder_layers.encoder_layer3.2.1.num_batches_tracked, pts_middle_encoder.encoder_layers.encoder_layer4.0.conv1.weight, pts_middle_encoder.encoder_layers.encoder_layer4.0.bn1.weight, pts_middle_encoder.encoder_layers.encoder_layer4.0.bn1.bias, pts_middle_encoder.encoder_layers.encoder_layer4.0.bn1.running_mean, pts_middle_encoder.encoder_layers.encoder_layer4.0.bn1.running_var, pts_middle_encoder.encoder_layers.encoder_layer4.0.bn1.num_batches_tracked, pts_middle_encoder.encoder_layers.encoder_layer4.0.conv2.weight, pts_middle_encoder.encoder_layers.encoder_layer4.0.bn2.weight, pts_middle_encoder.encoder_layers.encoder_layer4.0.bn2.bias, pts_middle_encoder.encoder_layers.encoder_layer4.0.bn2.running_mean, pts_middle_encoder.encoder_layers.encoder_layer4.0.bn2.running_var, 
pts_middle_encoder.encoder_layers.encoder_layer4.0.bn2.num_batches_tracked, pts_middle_encoder.encoder_layers.encoder_layer4.1.conv1.weight, pts_middle_encoder.encoder_layers.encoder_layer4.1.bn1.weight, pts_middle_encoder.encoder_layers.encoder_layer4.1.bn1.bias, pts_middle_encoder.encoder_layers.encoder_layer4.1.bn1.running_mean, pts_middle_encoder.encoder_layers.encoder_layer4.1.bn1.running_var, pts_middle_encoder.encoder_layers.encoder_layer4.1.bn1.num_batches_tracked, pts_middle_encoder.encoder_layers.encoder_layer4.1.conv2.weight, pts_middle_encoder.encoder_layers.encoder_layer4.1.bn2.weight, pts_middle_encoder.encoder_layers.encoder_layer4.1.bn2.bias, pts_middle_encoder.encoder_layers.encoder_layer4.1.bn2.running_mean, pts_middle_encoder.encoder_layers.encoder_layer4.1.bn2.running_var, pts_middle_encoder.encoder_layers.encoder_layer4.1.bn2.num_batches_tracked, pts_middle_encoder.conv_out.0.weight, pts_middle_encoder.conv_out.1.weight, pts_middle_encoder.conv_out.1.bias, pts_middle_encoder.conv_out.1.running_mean, pts_middle_encoder.conv_out.1.running_var, pts_middle_encoder.conv_out.1.num_batches_tracked, pts_backbone.blocks.0.12.weight, pts_backbone.blocks.0.13.weight, pts_backbone.blocks.0.13.bias, pts_backbone.blocks.0.13.running_mean, pts_backbone.blocks.0.13.running_var, pts_backbone.blocks.0.13.num_batches_tracked, pts_backbone.blocks.0.15.weight, pts_backbone.blocks.0.16.weight, pts_backbone.blocks.0.16.bias, pts_backbone.blocks.0.16.running_mean, pts_backbone.blocks.0.16.running_var, pts_backbone.blocks.0.16.num_batches_tracked

missing keys in source state_dict: pts_voxel_encoder.pfn_layers.0.norm.weight, pts_voxel_encoder.pfn_layers.0.norm.bias, pts_voxel_encoder.pfn_layers.0.norm.running_mean, pts_voxel_encoder.pfn_layers.0.norm.running_var, pts_voxel_encoder.pfn_layers.0.linear.weight, pts_backbone.blocks.2.0.weight, pts_backbone.blocks.2.1.weight, pts_backbone.blocks.2.1.bias, pts_backbone.blocks.2.1.running_mean, pts_backbone.blocks.2.1.running_var, pts_backbone.blocks.2.3.weight, pts_backbone.blocks.2.4.weight, pts_backbone.blocks.2.4.bias, pts_backbone.blocks.2.4.running_mean, pts_backbone.blocks.2.4.running_var, pts_backbone.blocks.2.6.weight, pts_backbone.blocks.2.7.weight, pts_backbone.blocks.2.7.bias, pts_backbone.blocks.2.7.running_mean, pts_backbone.blocks.2.7.running_var, pts_backbone.blocks.2.9.weight, pts_backbone.blocks.2.10.weight, pts_backbone.blocks.2.10.bias, pts_backbone.blocks.2.10.running_mean, pts_backbone.blocks.2.10.running_var, pts_backbone.blocks.2.12.weight, pts_backbone.blocks.2.13.weight, pts_backbone.blocks.2.13.bias, pts_backbone.blocks.2.13.running_mean, pts_backbone.blocks.2.13.running_var, pts_backbone.blocks.2.15.weight, pts_backbone.blocks.2.16.weight, pts_backbone.blocks.2.16.bias, pts_backbone.blocks.2.16.running_mean, pts_backbone.blocks.2.16.running_var, pts_neck.deblocks.2.0.weight, pts_neck.deblocks.2.1.weight, pts_neck.deblocks.2.1.bias, pts_neck.deblocks.2.1.running_mean, pts_neck.deblocks.2.1.running_var

@ZCMax, is there any reason for not making the model available through an official link rather than only on Baidu?
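
As a side note, the log itself hints at the cause: the unexpected pts_middle_encoder.* keys belong to the sparse-conv middle encoder of the voxel-based model, while the pillar model's scatter middle encoder has no learnable parameters. The mismatches are therefore consistent with the 01voxel checkpoint being paired with the pillar02 config. A minimal sketch (plain PyTorch) for checking which variant a checkpoint file actually contains:

```python
# Distinguish the voxel and pillar CenterPoint variants by looking for the
# sparse middle encoder, whose parameters only exist in the voxel model.
import torch

path = ('checkpoints/centerpoint/centerpoint_01voxel_second_secfpn_circlenms_'
        '4x8_cyclic_20e_nus_20220810_030004-9061688e.pth')
ckpt = torch.load(path, map_location='cpu')
state_dict = ckpt.get('state_dict', ckpt)

middle_keys = [k for k in state_dict if k.startswith('pts_middle_encoder.')]
variant = 'voxel' if middle_keys else 'pillar'
print(f'{len(middle_keys)} pts_middle_encoder.* keys -> {variant} variant')
```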
