Convert DAB-deformable-DETR to ONNX #57

Open
sazani opened this issue Dec 27, 2022 · 1 comment

Comments

@sazani

sazani commented Dec 27, 2022

I am trying to convert both the model I trained and your pretrained model to ONNX, but I get the following error:

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0, and cpu! (when checking argument for argument index in method wrapper__index_select)

I have tried both static and dynamic input shapes, using the code below:

import argparse

import torch
import torch.onnx

from models import build_dab_deformable_detr
from util.slconfig import SLConfig

device = torch.device('cuda:0')

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument('--model_checkpoint_path', help="change the path of the model checkpoint.",
                        default="./Checkpoints/checkpoint.pt")
    parser.add_argument('--model_config_path', help="change the path of the model config file",
                        default="./Checkpoints/config.json")
    args = parser.parse_args()

    model_config_path = args.model_config_path
    model_checkpoint_path = args.model_checkpoint_path

    # Build the model from the saved config and load the trained weights.
    args_config = SLConfig.fromfile(model_config_path)
    model, criterion, post_processors = build_dab_deformable_detr(args_config)
    checkpoint = torch.load(model_checkpoint_path, map_location=device)
    model.load_state_dict(checkpoint['model'])
    model = model.to(device)

    # Dummy input for tracing: (N, C, H, W).
    img_size = [1080, 1920]
    dummy_input = torch.zeros(1, 3, *img_size).to(device)

    model.eval()
    results = model(dummy_input)

    torch.onnx.export(
        model,
        dummy_input,
        "test.onnx",
        input_names=["input"],
        output_names=["output"],
        export_params=True,
        opset_version=11,  # I have also tried versions 12, 13, 14, 15
        # dynamic_axes={'images': {0: 'batch', 2: 'height', 3: 'width'},  # shape (1, 3, 640, 640)
        #               'output': {0: 'batch', 1: 'anchors'}},  # shape (1, 25200, 85)
        # if dynamic, else None
        dynamic_axes=None,
    )
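
A possible workaround (untested here, not confirmed by the repo authors) for this kind of device mismatch during export is to run the whole export on CPU, so that any tensors the model creates internally during tracing end up on the same device as the traced input. A minimal sketch of that variant, reusing the same build_dab_deformable_detr / SLConfig helpers and checkpoint paths as above:

# Untested sketch: perform the entire ONNX export on CPU so the model,
# its weights, and the dummy input are all on one device during tracing.
import torch
import torch.onnx

from models import build_dab_deformable_detr
from util.slconfig import SLConfig

cpu = torch.device('cpu')

cfg = SLConfig.fromfile("./Checkpoints/config.json")
model, criterion, post_processors = build_dab_deformable_detr(cfg)
checkpoint = torch.load("./Checkpoints/checkpoint.pt", map_location=cpu)
model.load_state_dict(checkpoint['model'])
model = model.to(cpu).eval()

dummy_input = torch.zeros(1, 3, 1080, 1920, device=cpu)

torch.onnx.export(
    model,
    dummy_input,
    "test_cpu.onnx",
    input_names=["input"],
    output_names=["output"],
    export_params=True,
    opset_version=11,
    dynamic_axes=None,
)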

@Zalways

Zalways commented Aug 9, 2023

Have you solved this problem? I tried to export the model to TorchScript (on CPU) and ran into some problems: I finally got a model through the trace method, but it doesn't work at runtime:

RuntimeError: The size of tensor a (32) must match the size of tensor b (237) at non-singleton dimension 1

I need your help! Have you seen an error like this? I'd appreciate any advice. Thanks!
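
For reference, one common cause of this kind of size mismatch (an assumption here, not a confirmed diagnosis) is that torch.jit.trace bakes the spatial shapes seen during tracing into the graph, so the traced model only reliably accepts inputs with the same height/width as the example tensor. A minimal sketch of the tracing path under that assumption, reusing the same helpers and checkpoint paths as the export script above:

import torch

from models import build_dab_deformable_detr
from util.slconfig import SLConfig

cfg = SLConfig.fromfile("./Checkpoints/config.json")
model, _, _ = build_dab_deformable_detr(cfg)
state = torch.load("./Checkpoints/checkpoint.pt", map_location="cpu")
model.load_state_dict(state['model'])
model = model.eval()

# torch.jit.trace records the shapes seen here; strict=False allows the
# dict outputs that DETR-style models return.
example = torch.zeros(1, 3, 800, 800)
traced = torch.jit.trace(model, example, strict=False)
traced.save("dab_deformable_detr_traced.pt")

# The traced module is only guaranteed to work for inputs with the same
# height/width as `example`; feeding a different resolution can trigger
# shape mismatches like the one reported above.
loaded = torch.jit.load("dab_deformable_detr_traced.pt")
out = loaded(torch.zeros(1, 3, 800, 800))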
