Use torch.Tensor with unsigned types directly instead of TensorWrapper #5715

Open · wants to merge 1 commit into main
Conversation

anmyachev (Contributor) commented:

PyTorch 2.5 has limited support for these types: https://github.com/pytorch/pytorch/blob/release/2.5/docs/source/tensors.rst. Considering that Triton mainly uses PyTorch tensors as a container for transferring data to the kernel, this might be enough.
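
As a rough illustration of the idea, a minimal sketch assuming Triton's test helpers previously stored unsigned data in a signed tensor and reinterpreted the bits (the `TensorWrapper`/`reinterpret` names come from Triton's runtime helpers; exact import paths vary by version):

```python
import torch

# Old approach (sketch): keep the bits in a signed tensor, then reinterpret
# the element type on the Triton side before launching the kernel.
#   from triton.runtime.jit import reinterpret  # location may vary by version
#   x = reinterpret(torch.empty(16, dtype=torch.int32), tl.uint32)

# New approach: PyTorch >= 2.3 exposes unsigned dtypes directly (with
# limited operator support), which is enough for a plain data container.
x = torch.empty(16, dtype=torch.uint32)
```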

```diff
@@ -442,7 +442,7 @@ def do_test(x, y, kernel_fn):
     # triton result
     x_tri = x if x_is_scalar else to_triton(x, device=device, dst_type=dtype_x)
     y_tri = y if y_is_scalar else to_triton(y, device=device, dst_type=dtype_y)
-    z_tri = to_triton(np.empty(SIZE, dtype=z_ref.dtype), device=device)
+    z_tri = to_triton(np.empty(SIZE, dtype=str(z_ref.dtype)), device=device)
```
anmyachev (Contributor, Author) commented on the change:

To avoid the issue: `TypeError: can't convert np.ndarray of type numpy.ulonglong. The only supported types are: float64, float32, float16, complex64, complex128, int64, int32, int16, int8, uint64, uint32, uint16, uint8, and bool.` The string name of the `numpy.ulonglong` dtype is `uint64`, so round-tripping the dtype through `str()` normalizes it to the canonical `uint64` that torch accepts.
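
A minimal repro of the normalization, assuming a platform where `numpy.ulonglong` maps to a non-canonical 64-bit unsigned dtype:

```python
import numpy as np
import torch

a = np.empty(4, dtype=np.ulonglong)
# torch.from_numpy(a) can raise the TypeError above: ulonglong has the same
# width as uint64 but is not the canonical uint64 dtype on some platforms.

# The dtype's string name is the canonical spelling:
assert str(a.dtype) == "uint64"
b = np.empty(4, dtype=str(a.dtype))
t = torch.from_numpy(b)  # OK on PyTorch versions with uint64 support
```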

anmyachev marked this pull request as ready for review on January 27, 2025 at 20:18
anmyachev requested a review from ptillet as a code owner on January 27, 2025 at 20:18
lezcano (Contributor) commented on Jan 27, 2025:

This looks like an NFC, but that would force us to depend on a rather new PyTorch. What's the upside of this change?

anmyachev (Contributor, Author) replied:

> This looks like an NFC, but that would force us to depend on a rather new PyTorch. What's the upside of this change?

I did this to remove code that is no longer relevant and to simplify maintenance. I discovered it myself while debugging, when I saw that a tensor with a different type than expected was being passed to the kernel, which was a bit surprising.

BTW: it looks like these types have been supported since version 2.3: https://github.com/pytorch/pytorch/blob/orig/release/2.3/docs/source/tensors.rst. Is depending on that still a problem for internal testing? You may be testing against older versions of PyTorch that I'm simply not aware of.
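
If older PyTorch versions do need to stay supported, a hypothetical version gate (the `HAS_NATIVE_UINT` name and the int32 fallback are illustrative assumptions, not part of this PR) could look like:

```python
import torch
from packaging.version import Version

# Native unsigned dtypes ship with limited support since PyTorch 2.3.
HAS_NATIVE_UINT = Version(torch.__version__.split("+")[0]) >= Version("2.3")

def make_uint32(n: int, device: str = "cpu") -> torch.Tensor:
    if HAS_NATIVE_UINT:
        return torch.empty(n, dtype=torch.uint32, device=device)
    # Fallback sketch: keep the bits in int32 and reinterpret on the Triton side.
    return torch.empty(n, dtype=torch.int32, device=device)
```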
