Fine-tuning EfficientFormerV2-L #56

Open
ytring opened this issue Apr 17, 2023 · 1 comment

Comments

ytring commented Apr 17, 2023

Hi,

First of all, thank you for the great work.

I've run into an issue with fine-tuning. I tried fine-tuning EfficientFormerV2-L with the --resume argument, but when I launch training I get the following message: Failed to find state_dict_ema, starting from loaded model weights.

Can you please help me to resolve this error? Thank you in advance.
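For reference, whether the checkpoint being resumed actually contains an EMA copy can be checked directly. A minimal sketch, assuming the checkpoint is a standard torch.save dictionary (the file name here is illustrative):

import torch

# Load the checkpoint on the CPU and list its top-level entries.
ckpt = torch.load("efficientformerv2_l.pth", map_location="cpu")
print(ckpt.keys())

# If there is no 'state_dict_ema' entry, the message above is expected:
# the EMA copy is simply initialized from the loaded model weights.
if "state_dict_ema" not in ckpt:
    print("No EMA weights found; EMA starts from the model weights.")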

ytring commented Apr 18, 2023

I've tried using the --finetune parameter instead:

python -m torch.distributed.launch --nproc_per_node=$nGPUs --use_env main.py --model $MODEL --data-path /data/path --output_dir efficientformerv2_l_out --batch-size 32 --finetune $CKPT --distillation-type none

Now I am getting the following error:

Traceback (most recent call last):
  File "/data/ben/EfficientFormer/main.py", line 423, in <module>
    main(args)
  File "/data/ben/EfficientFormer/main.py", line 372, in main
    train_stats = train_one_epoch(
  File "/data/ben/EfficientFormer/util/engine.py", line 42, in train_one_epoch
    outputs = model(samples)
  File "/home/ben/anaconda3/envs/efficient_former/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/ben/anaconda3/envs/efficient_former/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 994, in forward
    if torch.is_grad_enabled() and self.reducer._rebuild_buckets():
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by 
making sure all `forward` function outputs participate in calculating loss. 
If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
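As the error message suggests, one way around this is to enable unused-parameter detection when the model is wrapped for distributed training. A minimal sketch of that change, not the exact code in main.py (the variable names model and args.gpu are assumptions):

from torch.nn.parallel import DistributedDataParallel as DDP

# Wrap the model with unused-parameter detection enabled so DDP tolerates
# parameters that receive no gradient, e.g. a head that is skipped when
# distillation is disabled.
model = DDP(model, device_ids=[args.gpu], find_unused_parameters=True)

Note that find_unused_parameters=True adds some per-iteration overhead; the cleaner long-term fix is to make sure every module output participates in the loss.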
