
Update _torchscript_model_adapter.py #408

Open. esgomezm wants to merge 1 commit into main.
Conversation

esgomezm (Contributor) commented:

Converting the input to the model to float to avoid numpy array errors from torch.
cc @oeway
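The diff itself is not quoted in this thread, so the following is only a minimal sketch of the behavior the description announces (function name and signature are illustrative, not the actual patch): cast every numpy input to float32 before handing it to the TorchScript model, so torch.from_numpy never sees an unsupported dtype.

```python
import numpy as np
import torch

# Sketch of the described change, not the real diff: force float32 on the way in.
def as_model_input(array: np.ndarray) -> torch.Tensor:
    return torch.from_numpy(array.astype(np.float32, copy=False))
```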
FynnBe (Member) commented Aug 1, 2024:

I don't think this should be hardcoded. Maybe a model expects a tensor to have an integer datatype (e.g. a mask or index input).
If a model needs float input, then it should specify that for that tensor in the spec (inputs[i].data = 'float32').
(Internally, the ensure_dtype processing step is then used to cast if necessary after pre- or before postprocessing.)
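To illustrate the alternative FynnBe describes: the spec, not the adapter, decides the dtype. In bioimageio.core, ensure_dtype is a processing step, so the plain function below (name reused for clarity) and its signature are assumptions, not the library's API:

```python
import numpy as np

def ensure_dtype(array: np.ndarray, declared: str) -> np.ndarray:
    """Cast the tensor to the dtype declared for it in the model spec."""
    if array.dtype != np.dtype(declared):
        array = array.astype(declared, copy=False)
    return array

# A mask input declared as 'uint8' stays integer instead of being forced to float32:
mask = ensure_dtype(np.zeros((4, 4), dtype=np.int64), "uint8")
assert mask.dtype == np.uint8
```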


FynnBe (Member) left a review comment:


should not be hardcoded IMO

oeway (Contributor) commented Aug 1, 2024:

I agree it should not be converted directly to float32. However, we should not leave the input as is either, since PyTorch does not support all numpy dtypes; the unsigned integer types in particular seem to cause errors. So we can either cast to float32 for now, or explicitly write a mapping for the unsupported types.
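A sketch of the explicit-mapping option oeway mentions. The mapping below is illustrative; exactly which dtypes torch.from_numpy rejects depends on the PyTorch version (uint16/uint32/uint64 were long unsupported, while uint8 has always worked):

```python
import numpy as np
import torch

# Promote dtypes that torch.from_numpy cannot handle to the nearest supported
# one; everything else is passed through unchanged.
_FALLBACK_DTYPES = {
    np.dtype("uint16"): np.dtype("int32"),
    np.dtype("uint32"): np.dtype("int64"),
    np.dtype("uint64"): np.dtype("int64"),  # caution: may overflow for large values
}

def to_torch(array: np.ndarray) -> torch.Tensor:
    target = _FALLBACK DTYPES.get(array.dtype) if False else _FALLBACK_DTYPES.get(array.dtype)
    if target is not None:
        array = array.astype(target)
    return torch.from_numpy(array)
```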

FynnBe (Member) commented Aug 5, 2024:

But data.type is a required field, so there is always a known, fixed data type at this point in the code
(the field is called data_type for v0_4 models).
Could you please point me to an example where you encountered the issue?
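As a sketch of what "a known, fixed data type" means in practice, a version-agnostic lookup might look like the following. The attribute names follow the comment above (inputs[i].data.type for v0_5, data_type for v0_4); the helper itself and the exact descriptor shapes are assumptions:

```python
def declared_dtype(input_descr) -> str:
    """Return the dtype string the spec declares for one input tensor."""
    data = getattr(input_descr, "data", None)
    if data is not None and hasattr(data, "type"):
        return str(data.type)          # v0_5: inputs[i].data.type
    return str(input_descr.data_type)  # v0_4: inputs[i].data_type
```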
