CLIP2vec encode_image error #97

Open

ilia10000 opened this issue Apr 25, 2022 · 0 comments

When I try the listed example for getting image vectors from CLIP:

from vectorhub.bi_encoders.text_image.torch import Clip2Vec
model = Clip2Vec()
model.encode_image('https://getvectorai.com/assets/hub-logo-with-text.png')

I get the following traceback:

/home/is2961/anaconda3/lib/python3.9/site-packages/vectorhub/base.py:62: UserWarning: Unable to encode. Filling in with dummy vector.
  warnings.warn("Unable to encode. Filling in with dummy vector.")
Traceback (most recent call last):
  File "/home/is2961/anaconda3/lib/python3.9/site-packages/vectorhub/base.py", line 42, in catch_vector
    return func(*args, **kwargs)
  File "/home/is2961/anaconda3/lib/python3.9/site-packages/vectorhub/bi_encoders/text_image/torch/clip.py", line 101, in encode_image
    return self.model.encode_image(image).detach().numpy().tolist()[0]
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
  File "code/__torch__/multimodal/model/multimodal_transformer/___torch_mangle_9591.py", line 19, in encode_image
    _0 = self.visual
    input = torch.to(image, torch.device("cuda:0"), 5, False, False, None)
    return (_0).forward(input, )
            ~~~~~~~~~~~ <--- HERE
  def encode_text(self: __torch__.multimodal.model.multimodal_transformer.___torch_mangle_9591.Multimodal,
    input: Tensor) -> Tensor:
  File "code/__torch__/multimodal/model/multimodal_transformer.py", line 20, in forward
    _4 = self.positional_embedding
    _5 = self.class_embedding
    _6 = (self.conv1).forward(input, )
          ~~~~~~~~~~~~~~~~~~~ <--- HERE
    _7 = ops.prim.NumToTensor(torch.size(_6, 0))
    _8 = int(_7)
  File "code/__torch__/torch/nn/modules/conv/___torch_mangle_9366.py", line 8, in forward
  def forward(self: __torch__.torch.nn.modules.conv.___torch_mangle_9366.Conv2d,
    input: Tensor) -> Tensor:
    x = torch._convolution(input, self.weight, None, [32, 32], [0, 0], [1, 1], False, [0, 0], 1, False, False, True, True)
        ~~~~~~~~~~~~~~~~~~ <--- HERE
    return x
  def forward1(self: __torch__.torch.nn.modules.conv.___torch_mangle_9366.Conv2d,

Traceback of TorchScript, original code (most recent call last):
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/conv.py(420): _conv_forward
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/conv.py(423): forward
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py(709): _slow_forward
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py(725): _call_impl
/root/workspace/multimodal-pytorch/multimodal/model/multimodal_transformer.py(85): forward
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py(709): _slow_forward
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py(725): _call_impl
/root/workspace/multimodal-pytorch/multimodal/model/multimodal_transformer.py(221): visual_forward
/opt/conda/lib/python3.7/site-packages/torch/jit/_trace.py(940): trace_module
<ipython-input-1-40b054242c5d>(36): export_torchscript_models
<ipython-input-2-808c11c4d1cf>(3): <module>
/opt/conda/lib/python3.7/site-packages/IPython/core/interactiveshell.py(3418): run_code
/opt/conda/lib/python3.7/site-packages/IPython/core/interactiveshell.py(3338): run_ast_nodes
/opt/conda/lib/python3.7/site-packages/IPython/core/interactiveshell.py(3147): run_cell_async
/opt/conda/lib/python3.7/site-packages/IPython/core/async_helpers.py(68): _pseudo_sync_runner
/opt/conda/lib/python3.7/site-packages/IPython/core/interactiveshell.py(2923): _run_cell
/opt/conda/lib/python3.7/site-packages/IPython/core/interactiveshell.py(2878): run_cell
/opt/conda/lib/python3.7/site-packages/IPython/terminal/interactiveshell.py(555): interact
/opt/conda/lib/python3.7/site-packages/IPython/terminal/interactiveshell.py(564): mainloop
/opt/conda/lib/python3.7/site-packages/IPython/terminal/ipapp.py(356): start
/opt/conda/lib/python3.7/site-packages/traitlets/config/application.py(845): launch_instance
/opt/conda/lib/python3.7/site-packages/IPython/__init__.py(126): start_ipython
/opt/conda/bin/ipython(8): <module>
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [768, 3, 32, 32], but got 5-dimensional input of size [1, 1, 3, 224, 224] instead

Is there an easy fix for this?
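
For context, the error message suggests the image tensor reaching conv1 has an extra leading dimension ([1, 1, 3, 224, 224] instead of the expected 4-dimensional [1, 3, 224, 224]). While this is open, a possible workaround might be to bypass the vectorhub wrapper and encode the image with the openai/CLIP package directly, so the input shape is controlled explicitly. This is only a sketch under assumptions: it requires clip (openai/CLIP), torch, Pillow, and requests to be installed, and "ViT-B/32" is an assumed checkpoint, not necessarily the one Clip2Vec downloads.

# Workaround sketch (assumptions noted above): encode the image with
# openai/CLIP directly instead of vectorhub's Clip2Vec.encode_image.
from io import BytesIO

import clip
import requests
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)  # assumed checkpoint

url = "https://getvectorai.com/assets/hub-logo-with-text.png"
image = Image.open(BytesIO(requests.get(url).content)).convert("RGB")

# preprocess() returns a [3, 224, 224] tensor; unsqueeze once to get the
# 4-dimensional [1, 3, 224, 224] batch that the conv1 layer expects.
image_input = preprocess(image).unsqueeze(0).to(device)

with torch.no_grad():
    vector = model.encode_image(image_input).cpu().numpy().tolist()[0]

print(len(vector))  # embedding dimension, e.g. 512 for ViT-B/32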
