
Encountering errors when running llara on multiple GPUs #7

erjiaxiao opened this issue Sep 1, 2024 · 2 comments

@erjiaxiao
Hello @LostXine, when running llara on multiple GPUs, I encountered the following error:

Exception has occurred: RuntimeError
CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
  File "/home/lsf_storage/homes/ch/claude/LLaRA/train-llava/llava/model/language_model/llava_llama.py", line 92, in forward
    return super().forward(
  File "/home/ch/claude/LLaRA/eval/llara_adv_attack.py", line 235, in model_generation
    outputs = model(**inputs)
  File "/home/ch/claude/LLaRA/eval/llara_adv_attack.py", line 526, in query_bc
    ans, _ , i = model_generation(tokenizer, model, image_processor, image_list, prepared_prompt)
  File "/home/ch/claude/LLaRA/eval/llara_adv_attack.py", line 436, in eval_episode
    paresed_action, prepared_prompt, ans, image = gen_action(tokenizer, model, image_processor,
  File "/home/ch/claude/LLaRA/eval/llara_adv_attack.py", line 567, in <module>
    eval_episode(args, query_bc, parse_bc)
RuntimeError: CUDA error: device-side assert triggered

However, everything works fine when I run llara on a single GPU. Are there any specific configurations required for multiple GPU usage?
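
For reference, this is roughly how I am enabling the debugging options the error message suggests (a minimal sketch: model and inputs are the objects passed around in model_generation above, and check_inputs is a hypothetical helper I wrote for debugging, not code from the repo):

    # Must be set before torch initializes CUDA, i.e. at the very top of the eval script.
    import os
    os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # synchronous launches, so the stack trace points at the failing call
    # os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # uncomment to confirm the single-GPU baseline still works

    import torch

    def check_inputs(model, inputs):
        # Device-side asserts are often out-of-range indices, e.g. a token id
        # that exceeds the embedding table, so sanity-check input_ids first.
        vocab_size = model.get_input_embeddings().num_embeddings
        ids = inputs.get("input_ids")
        if ids is not None:
            assert ids.min().item() >= 0 and ids.max().item() < vocab_size, \
                (ids.min().item(), ids.max().item(), vocab_size)
        # Also make sure every input tensor is on the same device as the model weights.
        model_device = next(model.parameters()).device
        for name, value in inputs.items():
            if torch.is_tensor(value) and value.device != model_device:
                print(f"device mismatch: {name} on {value.device}, model on {model_device}")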

@LostXine (Owner) commented Sep 1, 2024

Hi @erjiaxiao ,

I have not tried running llara on multiple GPUs for inference. The error log hints at a compatibility or hardware configuration issue, but I'm not 100% sure. I would like to test multi-GPU inference as well, but unfortunately I'm traveling right now, so I will try to get back to you before next weekend. Thank you for your understanding.

In the meantime, could you confirm that you are using the same versions of the important packages (e.g., torch, CUDA, ...) as llava? Something like the quick check below should be enough.
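
(A rough sketch of what I mean, just for illustration; run it in both environments and diff the output.)

    import torch

    print("torch:", torch.__version__)
    print("CUDA (torch build):", torch.version.cuda)
    print("cuDNN:", torch.backends.cudnn.version())
    print("GPUs:", torch.cuda.device_count())
    for i in range(torch.cuda.device_count()):
        # Mixed GPU models or compute capabilities on one machine can also
        # surface as multi-GPU-only failures, so list every device.
        print(f"  cuda:{i}", torch.cuda.get_device_name(i), torch.cuda.get_device_capability(i))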

Best,

@erjiaxiao (Author)

OK, thank you! I will take a look at the problem. Have a good trip!
