
ValueError: __len__() should return >= 0 #949

Open
My12123 opened this issue Dec 31, 2024 · 0 comments
My12123 commented Dec 31, 2024

(venv) F:\FAST_Anime_VSR>call python main.py
We are going to process single videos located at ./i.mp4
Current supported input resolution for Super-Resolution is  defaultdict(<class 'list'>, {})
No such orginal resolution (1280X720) weight supported in current folder!
We are going to generate the weight now!!!
Size after crop is  (720, 1280, 3)
TensorRT weight Generator will process the image with height 720 and width 1280
Use float16 mode in TensorRT
Generating the TensorRT weight ........
[12/31/2024-15:40:50] [TRT] [E] 3: unet1.conv_bottom:0:DECONVOLUTION:GPU:kernel weights has count 3072 but 65536 was expected
[12/31/2024-15:40:50] [TRT] [E] 4: unet1.conv_bottom:0:DECONVOLUTION:GPU: count of 3072 weights in kernel, but kernel dimensions (4,4) with 64 input channels, 64 output channels and 1 groups were specified. Expected Weights count is 64 * 4*4 * 64 / 1 = 65536
Traceback (most recent call last):
  File "F:\FAST_Anime_VSR\main.py", line 107, in <module>
    main()
  File "F:\FAST_Anime_VSR\main.py", line 97, in main
    parallel_process(input_path, output_path, parallel_num=configuration.process_num)
  File "F:\FAST_Anime_VSR\process\single_video.py", line 212, in parallel_process
    model_full_name, model_partition_name = weight_justify(configuration, input_path)
  File "F:\FAST_Anime_VSR\process\single_video.py", line 92, in weight_justify
    generate_weight(h, w)               # It will automatically read model name we need
  File "F:\FAST_Anime_VSR\tensorrt_weight_generator\weight_generator.py", line 257, in generate_weight
    tensorrt_transform_execute(lr_h, lr_width)
  File "F:\FAST_Anime_VSR\tensorrt_weight_generator\weight_generator.py", line 206, in tensorrt_transform_execute
    ins.run()
  File "F:\FAST_Anime_VSR\tensorrt_weight_generator\weight_generator.py", line 160, in run
    self.weight_generate()
  File "F:\FAST_Anime_VSR\tensorrt_weight_generator\weight_generator.py", line 145, in weight_generate
    self.model_weight_transform(self.sample_input)
  File "F:\FAST_Anime_VSR\tensorrt_weight_generator\weight_generator.py", line 126, in model_weight_transform
    model_trt_model = torch2trt(generator, [input], fp16_mode=True)
  File "F:\FAST_Anime_VSR\venv\lib\site-packages\torch2trt-0.5.0-py3.10.egg\torch2trt\torch2trt.py", line 643, in torch2trt
    outputs = module(*inputs)
  File "F:\FAST_Anime_VSR\venv\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "F:\FAST_Anime_VSR\venv\lib\site-packages\torch\nn\modules\module.py", line 1844, in _call_impl
    return inner()
  File "F:\FAST_Anime_VSR\venv\lib\site-packages\torch\nn\modules\module.py", line 1790, in inner
    result = forward_call(*args, **kwargs)
  File "F:\FAST_Anime_VSR\Real_CuGAN\cunet.py", line 20, in forward
    x2 = self.unet2(x1)
  File "F:\FAST_Anime_VSR\venv\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "F:\FAST_Anime_VSR\venv\lib\site-packages\torch\nn\modules\module.py", line 1844, in _call_impl
    return inner()
  File "F:\FAST_Anime_VSR\venv\lib\site-packages\torch\nn\modules\module.py", line 1790, in inner
    result = forward_call(*args, **kwargs)
  File "F:\FAST_Anime_VSR\Real_CuGAN\cunet.py", line 170, in forward
    x2 = self.conv2(x2)
  File "F:\FAST_Anime_VSR\venv\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "F:\FAST_Anime_VSR\venv\lib\site-packages\torch\nn\modules\module.py", line 1844, in _call_impl
    return inner()
  File "F:\FAST_Anime_VSR\venv\lib\site-packages\torch\nn\modules\module.py", line 1790, in inner
    result = forward_call(*args, **kwargs)
  File "F:\FAST_Anime_VSR\Real_CuGAN\cunet.py", line 73, in forward
    z = self.seblock(z)
  File "F:\FAST_Anime_VSR\venv\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "F:\FAST_Anime_VSR\venv\lib\site-packages\torch\nn\modules\module.py", line 1844, in _call_impl
    return inner()
  File "F:\FAST_Anime_VSR\venv\lib\site-packages\torch\nn\modules\module.py", line 1790, in inner
    result = forward_call(*args, **kwargs)
  File "F:\FAST_Anime_VSR\Real_CuGAN\cunet.py", line 44, in forward
    x = torch.mul(x, x0)
  File "F:\FAST_Anime_VSR\venv\lib\site-packages\torch2trt-0.5.0-py3.10.egg\torch2trt\torch2trt.py", line 262, in wrapper
    converter["converter"](ctx)
  File "F:\FAST_Anime_VSR\venv\lib\site-packages\torch2trt-0.5.0-py3.10.egg\torch2trt\converters\native_converters.py", line 1496, in convert_mul
    input_a_trt, input_b_trt = broadcast_trt_tensors(ctx.network, [input_a_trt, input_b_trt], len(output.shape))
  File "F:\FAST_Anime_VSR\venv\lib\site-packages\torch2trt-0.5.0-py3.10.egg\torch2trt\torch2trt.py", line 146, in broadcast_trt_tensors
    if len(t.shape) < broadcast_ndim:
ValueError: __len__() should return >= 0
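
For context on the final exception: this message is raised by Python itself whenever `len()` is called on an object whose `__len__` returns a negative number. In `torch2trt`, `broadcast_trt_tensors` calls `len(t.shape)` on a TensorRT tensor; a plausible reading of this log (an assumption, not confirmed in the traceback) is that the earlier `DECONVOLUTION` weight-count errors left that tensor with an invalid shape reporting a negative dimension count, which then trips this ValueError. A minimal sketch of the Python-level behaviour only, unrelated to the repository's code:

```python
# Hypothetical illustration: len() rejects a negative __len__ result with
# exactly the error seen in the traceback above.
class BadShape:
    def __len__(self):
        return -1  # invalid: lengths must be >= 0


try:
    len(BadShape())
except ValueError as e:
    print(e)  # prints: __len__() should return >= 0
```

If that reading is right, the root cause to chase would be the two TensorRT errors about `unet1.conv_bottom` (kernel weights count 3072 vs. expected 65536), not the ValueError itself.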