
Seeking Guidance: Addressing Performance-Related Warning Messages to Optimize Execution Speed #329

Open
eanzero opened this issue Sep 25, 2024 · 3 comments

Comments


eanzero commented Sep 25, 2024

Thank you for taking the time to review my question.

Before I proceed, I would like to mention that I am a beginner, and I would appreciate your consideration of this fact.

I am seeking help resolving the warnings below in order to improve execution speed. The model runs and I do obtain results, but I receive the warning messages listed here, and from my research I understand that they can affect execution speed. I have been unable to find a solution, hence my question.

C:\Users\USER\ddd\segment-anything-2\sam2\modeling\backbones\hieradet.py:68: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:555.)
x = F.scaled_dot_product_attention(
C:\Users\USER\ddd\segment-anything-2\sam2\modeling\sam\transformer.py:270: UserWarning: Memory efficient kernel not used because: (Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:723.)
out = F.scaled_dot_product_attention(q, k, v, dropout_p=dropout_p)
C:\Users\USER\ddd\segment-anything-2\sam2\modeling\sam\transformer.py:270: UserWarning: Memory Efficient attention has been runtime disabled. (Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen/native/transformers/sdp_utils_cpp.h:495.)
out = F.scaled_dot_product_attention(q, k, v, dropout_p=dropout_p)
C:\Users\USER\ddd\segment-anything-2\sam2\modeling\sam\transformer.py:270: UserWarning: Flash attention kernel not used because: (Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:725.)
out = F.scaled_dot_product_attention(q, k, v, dropout_p=dropout_p)
C:\Users\USER\ddd\segment-anything-2\sam2\modeling\sam\transformer.py:270: UserWarning: CuDNN attention kernel not used because: (Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:727.)
out = F.scaled_dot_product_attention(q, k, v, dropout_p=dropout_p)
C:\Users\USER\ddd\segment-anything-2\sam2\modeling\sam\transformer.py:270: UserWarning: The CuDNN backend needs to be enabled by setting the enviornment variableTORCH_CUDNN_SDPA_ENABLED=1 (Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:497.)
out = F.scaled_dot_product_attention(q, k, v, dropout_p=dropout_p)
C:\Users\USER\anaconda3\envs\ddd\Lib\site-packages\torch\nn\modules\module.py:1562: UserWarning: Flash Attention kernel failed due to: No available kernel. Aborting execution.
Falling back to all available kernels for scaled_dot_product_attention (which may have a slower speed).
return forward_call(*args, **kwargs)

My execution environment is as follows:

  • Docker
  • PyTorch 2.4.0
  • CUDA 12.4
  • GPU: RTX 3070 (Memory: 8.0G)

The CUDA environment on the host machine is:
Cuda compilation tools, release 12.5, V12.5.82 Build cuda_12.5.r12.5/compiler.34385749_0

I would greatly appreciate any guidance on how to address these warnings. Thank you in advance for your help.
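
As a side note, which scaled_dot_product_attention backends are actually usable can be probed directly; the sketch below is a minimal check, assuming PyTorch ≥ 2.3 with a CUDA build (the tensor shapes are placeholders, not SAM 2 code):

```python
import torch
import torch.nn.functional as F
from torch.nn.attention import sdpa_kernel, SDPBackend

# Dummy q/k/v just to probe the kernels; shapes are placeholders.
q = k = v = torch.randn(1, 8, 256, 64, device="cuda", dtype=torch.float16)

# Which fused SDPA backends does this PyTorch build consider enabled?
print("flash enabled:  ", torch.backends.cuda.flash_sdp_enabled())
print("mem_eff enabled:", torch.backends.cuda.mem_efficient_sdp_enabled())
print("math enabled:   ", torch.backends.cuda.math_sdp_enabled())

# Force each backend in turn to see which ones actually run on this GPU/build.
for backend in (SDPBackend.FLASH_ATTENTION, SDPBackend.EFFICIENT_ATTENTION, SDPBackend.MATH):
    try:
        with sdpa_kernel(backend):
            F.scaled_dot_product_attention(q, k, v)
        print(backend, "OK")
    except RuntimeError as err:
        print(backend, "failed:", err)
```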

@dario-spagnolo

I am getting the same warnings.

My environment:

  • Python 3.10.12
  • PyTorch 2.4.1
  • CUDA 12.4
  • GPU: NVIDIA L4 (24 GB)


renhaa commented Oct 2, 2024

Same for me on:

  • Python 3.11
  • PyTorch 2.4.0
  • CUDA 12.4.1
  • GPU: NVIDIA A40 (48 GB)

@ronghanghu
Contributor

Hi @eanzero @dario-spagnolo @renhaa, you can turn off these warnings by changing the line

OLD_GPU, USE_FLASH_ATTN, MATH_KERNEL_ON = get_sdpa_settings()
to

OLD_GPU, USE_FLASH_ATTN, MATH_KERNEL_ON = True, True, True

This would directly try out all the available kernels (instead of trying Flash Attention first and then falling back to other kernels upon errors).
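
Roughly speaking, those three flags gate which SDPA kernels are allowed around the attention call, so setting them all to True lets PyTorch pick any available kernel; a minimal sketch of the effect (an illustration of the pattern, not the exact SAM 2 code):

```python
import torch
import torch.nn.functional as F

# Overriding get_sdpa_settings() so that every backend is allowed.
OLD_GPU, USE_FLASH_ATTN, MATH_KERNEL_ON = True, True, True

q = k = v = torch.randn(1, 8, 256, 64, device="cuda", dtype=torch.float16)

# With all three flags True, the flash, memory-efficient, and math kernels are
# all enabled, so PyTorch silently falls back to whatever is available instead
# of warning when its first choice is unsupported.
# (torch.backends.cuda.sdp_kernel is deprecated in newer PyTorch in favor of
# torch.nn.attention.sdpa_kernel, but it is still present in 2.4.)
with torch.backends.cuda.sdp_kernel(
    enable_flash=USE_FLASH_ATTN,
    enable_math=MATH_KERNEL_ON,
    enable_mem_efficient=OLD_GPU,
):
    out = F.scaled_dot_product_attention(q, k, v, dropout_p=0.0)
```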


@eanzero The error message above shows that the Flash Attention kernel failed:

C:\Users\USER\ddd\segment-anything-2\sam2\modeling\sam\transformer.py:270: UserWarning: Flash attention kernel not used because: (Triggered internally at C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:725.)

but PyTorch didn't print a further line explaining why it failed. Meanwhile, the GPU you're using (RTX 3070) has a CUDA compute capability of 8.6 according to https://developer.nvidia.com/cuda-gpus, so it should support Flash Attention in principle.

A possible cause is that there could be some mismatch between your CUDA driver, CUDA runtime, and PyTorch versions, causing Flash Attention kernels to fail, especially given that you're using Windows. Previously people have reported issues with Flash Attention on Windows (e.g. in pytorch/pytorch#108175 and Dao-AILab/flash-attention#553), and it could be the same issue in your case. To avoid these issues, it's recommended to use Windows Subsystem for Linux if you're running on Windows.
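
As a quick sanity check, the compute capability and the CUDA runtime that the PyTorch build targets can be printed from inside the same environment (plain PyTorch calls, independent of SAM 2):

```python
import torch

print("PyTorch:", torch.__version__)                  # e.g. 2.4.0
print("CUDA runtime (build):", torch.version.cuda)    # e.g. 12.4
print("GPU:", torch.cuda.get_device_name(0))          # e.g. NVIDIA GeForce RTX 3070
print("Compute capability:", torch.cuda.get_device_capability(0))  # (8, 6) for RTX 3070
print("Flash SDPA enabled in this build:", torch.backends.cuda.flash_sdp_enabled())
```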
