When reproducing the REDS results, I reduced num_input_frames to 20 to fit into GPU memory, so training runs fine: about 8 GB is used on each 16 GB GPU.


However, during evaluation the GPU memory consumed by the training iterations does not seem to be released, so the evaluation hits an OOM error:
```
RuntimeError: CUDA out of memory. Tried to allocate 436.00 MiB (GPU 1; 15.90 GiB total capacity; 10.98 GiB already allocated; 189.81 MiB free; 14.78 GiB reserved in total by PyTorch)
```
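For reference, the "14.78 GiB reserved in total by PyTorch" suggests that much of the memory is held by PyTorch's caching allocator rather than by live tensors. Below is a minimal sketch of how one might confirm this and release the cache before evaluating; it only uses standard `torch.cuda` calls, and the surrounding model/loader loop is a placeholder, not MMEditing's actual runner:

```python
import torch


def report_gpu_memory(tag, device=0):
    # memory_allocated: memory occupied by live tensors.
    # memory_reserved: memory held by the caching allocator (reusable, but not
    # visible as "free" to other processes or to nvidia-smi).
    alloc = torch.cuda.memory_allocated(device) / 1024 ** 3
    reserved = torch.cuda.memory_reserved(device) / 1024 ** 3
    print(f'[{tag}] allocated={alloc:.2f} GiB, reserved={reserved:.2f} GiB')


def evaluate(model, data_loader, device=0):
    report_gpu_memory('before eval', device)
    # Return cached blocks to the driver so evaluation starts from a clean slate.
    torch.cuda.empty_cache()
    model.eval()
    with torch.no_grad():  # no autograd graph, so no activations are kept
        for batch in data_loader:
            model(batch.to(f'cuda:{device}'))
    report_gpu_memory('after eval', device)
```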
The data settings in the config file are as follows:
```python
data = dict(
    workers_per_gpu=4,
    train_dataloader=dict(samples_per_gpu=1, drop_last=True),
    test_dataloader=dict(samples_per_gpu=1, workers_per_gpu=1),
)
```
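One possible workaround, sketched below, is a small custom hook that empties the CUDA cache right before validation. The hook name here is made up for illustration, and whether it actually fires depends on how evaluation is wired into the runner (iteration-based evaluation may not go through `before_val_epoch`); the `HOOKS` registry, the `Hook` base class, and `torch.cuda.empty_cache()` are existing mmcv/PyTorch APIs.

```python
# Hypothetical workaround: release cached CUDA memory before validation.
# 'EmptyCacheBeforeValHook' is a name invented for this sketch.
import torch
from mmcv.runner import HOOKS, Hook


@HOOKS.register_module()
class EmptyCacheBeforeValHook(Hook):

    def before_val_epoch(self, runner):
        # Return cached allocator blocks so evaluation starts with more free memory.
        torch.cuda.empty_cache()
```

If the training entry point registers custom hooks from the config, it could then be enabled with something like `custom_hooks = [dict(type='EmptyCacheBeforeValHook')]`; mmcv also ships a built-in `EmptyCacheHook` that serves a similar purpose.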
BTW, a similar problem was reported in mmdet, but no feasible solution was found there.