README.md (8 changes: 4 additions & 4 deletions)

@@ -2,7 +2,7 @@

[OCP MX Formats Specification](https://www.opencompute.org/documents/ocp-microscaling-formats-mx-v1-0-spec-final-pdf)

- This library provides the capability to emulate MX-compatble formats
+ This library provides the capability to emulate MX-compatible formats
and bfloat quantization in pytorch, enabling data science exploration
for DNNs with different MX formats.
The underlying computations are done in float32/bfloat16/fp16 but with values restricted to
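
As a rough illustration of this emulation idea (not the library's actual API), the sketch below restricts float32 values to the bfloat16 grid with a down-and-up cast, so downstream compute still runs in float32 but only on quantized values; the helper name is a hypothetical placeholder.

```python
import torch

def fake_quantize_bfloat(x: torch.Tensor) -> torch.Tensor:
    """Restrict float32 values to the bfloat16 grid while keeping float32 storage.

    Hypothetical helper for illustration only; the library implements its own,
    more general MX/bfloat quantization routines.
    """
    return x.to(torch.bfloat16).to(torch.float32)

x = torch.randn(4, 8)
w = torch.randn(8, 16)
# The matmul still executes in float32, but its inputs carry only
# bfloat16-representable values, emulating reduced-precision behavior.
y = fake_quantize_bfloat(x) @ fake_quantize_bfloat(w)
```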
@@ -272,9 +272,9 @@ MX library functions. These are automatically JIT-compiled via
custom\_extensions.py

The following are some references for creating custom extensions for PyTorch:
- * Custom C++ and CUDA Extension: https://pytorch.org/tutorials/advanced/cpp\_extension.html
- * Tensor class: https://pytorch.org/cppdocs/api/classat\_1\_1\_tensor.html
- * Tensor creation API: https://pytorch.org/cppdocs/notes/tensor\_creation.html
+ * Custom C++ and CUDA Extension: https://pytorch.org/tutorials/advanced/cpp_extension.html
+ * Tensor class: https://pytorch.org/cppdocs/api/classat_1_1_tensor.html
+ * Tensor creation API: https://pytorch.org/cppdocs/notes/tensor_creation.html
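
For context, a custom C++/CUDA extension is typically JIT-compiled along the lines of the sketch below; the source file names here are placeholders, not the library's actual sources, which are compiled by custom_extensions.py.

```python
import torch
from torch.utils.cpp_extension import load

# Placeholder source files for illustration; the library's real sources are
# JIT-compiled by custom_extensions.py.
ext = load(
    name="my_mx_ext",
    sources=["funcs.cpp", "funcs.cu"],
    verbose=True,
)
# The compiled module exposes whatever functions are bound in funcs.cpp,
# e.g. ext.some_kernel(tensor, ...).
```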

In the CUDA files, we substitute the following MX terminology,
as "block_size" already has a different meaning in CUDA: