Sparse Attention: Fix Triton errors (deepspeedai#1608)
pwstegman authored Dec 2, 2021
1 parent 4b854a3 commit cda7c71
Showing 2 changed files with 2 additions and 2 deletions.
2 changes: 1 addition & 1 deletion docs/_tutorials/sparse-attention.md
@@ -4,7 +4,7 @@ title: "DeepSpeed Sparse Attention"

In this tutorial we describe how to use DeepSpeed Sparse Attention (SA) and its building-block kernels. The easiest way to use SA is through DeepSpeed launcher. We will describe this through an example in [How to use sparse attention with DeepSpeed launcher](#how-to-use-sparse-attention-with-deepspeed-launcher) section. But before that, we introduce modules provided by DeepSpeed SA in the [next](#sparse-attention-modules) section.

-**Note:** Currently DeepSpeed Sparse Attention can be used only on NVIDIA V100 GPU using Torch >= 1.5 and Cuda 10.1 or 10.2.
+**Note:** Currently, DeepSpeed Sparse Attention can be used only on NVIDIA V100 or A100 GPUs using Torch >= 1.6 and CUDA 10.1, 10.2, 11.0, or 11.1.
{: .notice--warning}

## Sparse attention modules
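The updated note tightens the supported environment to Torch >= 1.6 with CUDA 10.1, 10.2, 11.0, or 11.1. As a rough illustration (not part of DeepSpeed), those version constraints could be checked with a small helper; the function name and GPU-check omission are assumptions for the sketch:

```python
def sparse_attn_supported(torch_version, cuda_version):
    """Hypothetical check of the version constraints stated in the updated
    note: Torch >= 1.6 and CUDA in {10.1, 10.2, 11.0, 11.1}.
    The GPU model check (V100/A100) is omitted in this sketch."""
    # Compare only major.minor so versions like "1.10.2" are handled correctly
    major, minor = (int(x) for x in torch_version.split(".")[:2])
    return (major, minor) >= (1, 6) and cuda_version in {"10.1", "10.2", "11.0", "11.1"}

print(sparse_attn_supported("1.6", "11.0"))   # True
print(sparse_attn_supported("1.5", "10.1"))   # False
print(sparse_attn_supported("1.10", "10.2"))  # True
```

Note that comparing version tuples rather than strings avoids the classic pitfall where "1.10" sorts before "1.6" lexicographically.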
2 changes: 1 addition & 1 deletion requirements/requirements-sparse_attn.txt
@@ -1 +1 @@
-triton
+triton==1.0.0
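The requirements change replaces the unpinned `triton` entry with an exact pin, `triton==1.0.0`, so installs resolve to the one version the kernels were tested against. A minimal sketch of how such a pinned requirement line splits into name and version (the helper is hypothetical, not a pip API):

```python
def parse_pin(line):
    """Hypothetical helper: split a 'name==version' requirement line into
    (name, version); version is None when the requirement is unpinned."""
    name, sep, version = line.strip().partition("==")
    return (name, version if sep else None)

print(parse_pin("triton==1.0.0"))  # ('triton', '1.0.0')
print(parse_pin("triton"))         # ('triton', None)
```

Pinning to an exact version trades automatic upgrades for reproducibility, which is the point of this fix: newer Triton releases changed APIs that the sparse attention kernels relied on.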
