Found a model mismatch in the MVX-Net test sample #2774
Unanswered · deyang2000 asked this question in Q&A · Replies: 0 comments
Of course, the mistake could be on my side. The problem I encountered is this:
```
(openmmlab) liyf@l526-System-Product-Name:~/mmdetection3d$ python demo/multi_modality_demo.py demo/data/kitti/000008.bin demo/data/kitti/000008.png demo/data/kitti/000008.pkl configs/mvxnet/mvxnet_fpn_dv_second_secfpn_8xb2-80e_kitti-3d-3class.py "/home/liyf/mmdetection3d/mvxnet_fpn_dv_second_secfpn_8xb2-80e_kitti-3d-3class-8963258a.pth" --cam-type CAM2 --show
/home/liyf/mmdetection3d/mmdet3d/models/dense_heads/anchor3d_head.py:94: UserWarning: dir_offset and dir_limit_offset will be depressed and be incorporated into box coder in the future
  warnings.warn(
Loads checkpoint by local backend from path: /home/liyf/mmdetection3d/mvxnet_fpn_dv_second_secfpn_8xb2-80e_kitti-3d-3class-8963258a.pth
The model and loaded state dict do not match exactly

size mismatch for pts_middle_encoder.conv_input.0.weight: copying a param with shape torch.Size([16, 3, 3, 3, 128]) from checkpoint, the shape in current model is torch.Size([3, 3, 3, 128, 16]).
size mismatch for pts_middle_encoder.encoder_layers.encoder_layer1.0.0.weight: copying a param with shape torch.Size([16, 3, 3, 3, 16]) from checkpoint, the shape in current model is torch.Size([3, 3, 3, 16, 16]).
size mismatch for pts_middle_encoder.encoder_layers.encoder_layer2.0.0.weight: copying a param with shape torch.Size([32, 3, 3, 3, 16]) from checkpoint, the shape in current model is torch.Size([3, 3, 3, 16, 32]).
size mismatch for pts_middle_encoder.encoder_layers.encoder_layer2.1.0.weight: copying a param with shape torch.Size([32, 3, 3, 3, 32]) from checkpoint, the shape in current model is torch.Size([3, 3, 3, 32, 32]).
size mismatch for pts_middle_encoder.encoder_layers.encoder_layer2.2.0.weight: copying a param with shape torch.Size([32, 3, 3, 3, 32]) from checkpoint, the shape in current model is torch.Size([3, 3, 3, 32, 32]).
size mismatch for pts_middle_encoder.encoder_layers.encoder_layer3.0.0.weight: copying a param with shape torch.Size([64, 3, 3, 3, 32]) from checkpoint, the shape in current model is torch.Size([3, 3, 3, 32, 64]).
size mismatch for pts_middle_encoder.encoder_layers.encoder_layer3.1.0.weight: copying a param with shape torch.Size([64, 3, 3, 3, 64]) from checkpoint, the shape in current model is torch.Size([3, 3, 3, 64, 64]).
size mismatch for pts_middle_encoder.encoder_layers.encoder_layer3.2.0.weight: copying a param with shape torch.Size([64, 3, 3, 3, 64]) from checkpoint, the shape in current model is torch.Size([3, 3, 3, 64, 64]).
size mismatch for pts_middle_encoder.encoder_layers.encoder_layer4.0.0.weight: copying a param with shape torch.Size([64, 3, 3, 3, 64]) from checkpoint, the shape in current model is torch.Size([3, 3, 3, 64, 64]).
size mismatch for pts_middle_encoder.encoder_layers.encoder_layer4.1.0.weight: copying a param with shape torch.Size([64, 3, 3, 3, 64]) from checkpoint, the shape in current model is torch.Size([3, 3, 3, 64, 64]).
size mismatch for pts_middle_encoder.encoder_layers.encoder_layer4.2.0.weight: copying a param with shape torch.Size([64, 3, 3, 3, 64]) from checkpoint, the shape in current model is torch.Size([3, 3, 3, 64, 64]).
size mismatch for pts_middle_encoder.conv_out.0.weight: copying a param with shape torch.Size([128, 3, 1, 1, 64]) from checkpoint, the shape in current model is torch.Size([3, 1, 1, 64, 128]).
/home/liyf/anaconda3/envs/openmmlab/lib/python3.8/site-packages/mmengine/visualization/visualizer.py:196: UserWarning: Failed to add <class 'mmengine.visualization.vis_backend.LocalVisBackend'>, please provide the `save_dir` argument.
  warnings.warn(f'Failed to add {vis_backend.__class__}, '
/home/liyf/mmdetection3d/mmdet3d/models/layers/fusion_layers/coord_transform.py:40: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  torch.tensor(img_meta['pcd_rotation'], dtype=dtype, device=device)
/home/liyf/anaconda3/envs/openmmlab/lib/python3.8/site-packages/torch/functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2894.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
```
The model and the loaded checkpoint do not match. To rule out a stale config on my side, I replaced my mvxnet_fpn_dv_second_secfpn_8xb2-80e_kitti-3d-3class.py with the latest version of the file, but the problem persists.
How should I fix this? The checkpoint comes from https://mmdetection3d.readthedocs.io/zh_CN/latest/user_guides/inference.html
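For what it's worth, every mismatch in the log differs only in dimension order: the checkpoint stores each 5-D sparse-conv weight as (out, D, H, W, in), while the current model expects (D, H, W, in, out). This pattern matches the known weight-layout change between spconv 1.x and 2.x, so one possible workaround (an assumption on my part, not an official mmdetection3d fix; the helper name `permute_sparse_conv_weights` is mine) is to permute the affected `pts_middle_encoder` tensors before loading:

```python
# Sketch of a layout-conversion workaround, assuming the mismatch is purely
# a dimension-order difference: (out, D, H, W, in) -> (D, H, W, in, out).
import torch


def permute_sparse_conv_weights(state_dict):
    """Permute 5-D pts_middle_encoder conv weights to the layout the model expects."""
    fixed = {}
    for key, value in state_dict.items():
        if (key.startswith('pts_middle_encoder')
                and key.endswith('.weight')
                and value.dim() == 5):
            # Move the output-channel axis from position 0 to the end.
            fixed[key] = value.permute(1, 2, 3, 4, 0).contiguous()
        else:
            fixed[key] = value
    return fixed


# Demonstration with a dummy tensor shaped like the first mismatch in the log.
dummy = {'pts_middle_encoder.conv_input.0.weight': torch.zeros(16, 3, 3, 3, 128)}
fixed = permute_sparse_conv_weights(dummy)
print(fixed['pts_middle_encoder.conv_input.0.weight'].shape)  # torch.Size([3, 3, 3, 128, 16])
```

In practice one would load the .pth with `torch.load`, apply this to its `state_dict` entry, and save the converted checkpoint; whether the resulting weights are numerically correct depends on the layouts actually being transposes of each other, which I have not verified.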