Actions: NVIDIA/TransformerEngine

Documentation

3,534 workflow runs

Enabling FP8 all-gather for TE Float8Tensor when using Torch FSDP2
Documentation #5861: Pull request #1358 synchronize by youngeunkwon0405
December 16, 2024 23:33 1m 0s youngeunkwon0405:fsdp2

[common/PyTorch] Add FusedAttention support for SWA (left, right)
Documentation #5860: Pull request #1369 synchronize by pre-commit-ci bot
December 16, 2024 13:13 1m 6s cyanguwa:swa_padding_brcm

[common/PyTorch] Add FusedAttention support for SWA (left, right)
Documentation #5858: Pull request #1369 synchronize by pre-commit-ci bot
December 16, 2024 13:04 Action required cyanguwa:swa_padding_brcm

[common/PyTorch] Add FusedAttention support for SWA (left, right)
Documentation #5854: Pull request #1369 synchronize by pre-commit-ci bot
December 16, 2024 09:26 Action required cyanguwa:swa_padding_brcm

[common/PyTorch] Add FusedAttention support for SWA (left, right)
Documentation #5853: Pull request #1369 synchronize by cyanguwa
December 16, 2024 09:25 1m 2s cyanguwa:swa_padding_brcm

[JAX] Fused attention unit tests fixes and refinements
Documentation #5852: Pull request #1352 synchronize by zlsh80826
December 16, 2024 05:53 58s zlsh80826:rewang/fa-refactor

[JAX] Bug Fix: Softmax FFIs with correct Encapsulates
Documentation #5851: Pull request #1375 synchronize by phu0ngng
December 14, 2024 17:07 1m 2s phu0ngng:bugfix_ffi_softmax

[PyTorch] Fix get_swa_mask() for padding masks
Documentation #5848: Pull request #1281 synchronize by cyanguwa
December 14, 2024 09:24 55s cyanguwa:fix_swa_mask

[PyTorch] Fix get_swa_mask() for padding masks
Documentation #5846: Pull request #1281 synchronize by cyanguwa
December 14, 2024 09:23 56s cyanguwa:fix_swa_mask

[PyTorch] Fix get_swa_mask() for padding masks
Documentation #5845: Pull request #1281 synchronize by cyanguwa
December 14, 2024 00:25 1m 4s cyanguwa:fix_swa_mask

[JAX] Bug Fix: Softmax FFIs with correct Encapsulates
Documentation #5844: Pull request #1375 synchronize by phu0ngng
December 13, 2024 15:54 58s phu0ngng:bugfix_ffi_softmax

[MoE][PyTorch] Add mask-based MoE permutation
Documentation #5841: Pull request #1373 synchronize by hxbai
December 13, 2024 06:05 1m 3s hxbai:permute_fusion

[MoE][PyTorch] Add mask-based MoE permutation
Documentation #5840: Pull request #1373 synchronize by hxbai
December 13, 2024 05:53 1m 4s hxbai:permute_fusion