Type promotion #110

Closed
lezcano opened this issue Apr 10, 2023 · 6 comments
@lezcano
Collaborator

lezcano commented Apr 10, 2023

Ralf has insisted over and over that we implement NEP 50. I am not sure where our type casting currently stands with respect to NEP 50, though. Having a look at NEP 50, I see that in principle we should be mostly fine, as the main desiderata in NEP 50 are:

This NEP proposes to refactor the behaviour around two guiding principles:

  1. Values must never influence result type.
  2. NumPy scalars and 0-D arrays should behave consistently with their N-D counterparts.

neither of which we currently do. That being said, we should have a look at the examples in NEP 50 and see how well we're doing at the moment.
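For reference, two of the NEP 50 examples (they reappear in the test run below) illustrate both principles; the legacy vs. NEP 50 results stated here follow the NEP's table of changed behaviour, to the best of my reading:

    import numpy as np

    a = np.array([1], dtype=np.uint8)
    # Principle 2: the int64 scalar keeps its dtype instead of being demoted by value.
    (a + np.int64(1)).dtype   # legacy: uint8             NEP 50: int64

    # Principle 1: the value 3e100 no longer upcasts the float32 operand.
    np.float32(1) + 3e100     # legacy: float64(3e100)    NEP 50: float32(inf) + RuntimeWarning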

lezcano changed the title from "Typecasting" to "Type promotion" on Apr 10, 2023
@ev-br
Collaborator

ev-br commented Apr 24, 2023

Took a look at the NEP 50 examples; see #125.

Basically, array/array operations are OK, but mixing arrays and python scalars breaks down:

torch_np/tests/test_nep50_examples.py::test_nep50_exceptions[uint8(1) + 2] FAILED                                    [  6%]
torch_np/tests/test_nep50_examples.py::test_nep50_exceptions[array([1], uint8) + int64(1)] PASSED                    [ 13%]
torch_np/tests/test_nep50_examples.py::test_nep50_exceptions[array([1], uint8) + array(1, int64)] PASSED             [ 20%]
torch_np/tests/test_nep50_examples.py::test_nep50_exceptions[array([1.], float32) + float64(1.)] PASSED              [ 26%]
torch_np/tests/test_nep50_examples.py::test_nep50_exceptions[array([1.], float32) + array(1., float64)] PASSED       [ 33%]
torch_np/tests/test_nep50_examples.py::test_nep50_exceptions[array([1], uint8) + 1] FAILED                           [ 40%]
torch_np/tests/test_nep50_examples.py::test_nep50_exceptions[array([1], uint8) + 200] FAILED                         [ 46%]
torch_np/tests/test_nep50_examples.py::test_nep50_exceptions[array([100], uint8) + 200] FAILED                       [ 53%]
torch_np/tests/test_nep50_examples.py::test_nep50_exceptions[array([1], uint8) + 300] FAILED                         [ 60%]
torch_np/tests/test_nep50_examples.py::test_nep50_exceptions[uint8(1) + 300] FAILED                                  [ 66%]
torch_np/tests/test_nep50_examples.py::test_nep50_exceptions[uint8(100) + 200] FAILED                                [ 73%]
torch_np/tests/test_nep50_examples.py::test_nep50_exceptions[float32(1) + 3e100] FAILED                              [ 80%]
torch_np/tests/test_nep50_examples.py::test_nep50_exceptions[array([0.1], float32) == float64(0.1)] PASSED           [ 86%]
torch_np/tests/test_nep50_examples.py::test_nep50_exceptions[array([1.], float32) + 3] FAILED                        [ 93%]
torch_np/tests/test_nep50_examples.py::test_nep50_exceptions[array([1.], float32) + int64(3)] PASSED                 [100%]

This is not entirely unexpected, because we just wrap Python scalars into 0-D tensors and only deal with tensors.
So, properly implementing NEP 50 would involve introducing Union[Tensor, Number] in the wrapper. AFAIU, dealing with tensors exclusively is better for TorchDynamo. Thoughts, @lezcano @rgommers?
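To make the failure mode concrete (illustrative only, using the dtype promotion table directly): NEP 50 treats the Python int in array([1], uint8) + 2 as "weak", so the result stays uint8; but once 2 has been wrapped into a default-dtype 0-D tensor and the result type is computed dtype-to-dtype, the table gives int64:

    import torch

    wrapped = torch.as_tensor(2)                      # 0-D tensor, dtype torch.int64
    torch.promote_types(torch.uint8, wrapped.dtype)   # torch.int64, not the uint8 NEP 50 expects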

@lezcano
Collaborator Author

lezcano commented Apr 24, 2023

Note that we cannot implement this without casting to 0-D tensors one way or another, since NumPy accepts Python scalars almost everywhere, while PyTorch only accepts scalars in dunder methods. As such, it's impossible to implement np.cosh(2.4) on PyTorch without casting the argument to a 0-D tensor.
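Concretely (a small illustration of the point above, assuming current PyTorch behaviour):

    import numpy as np
    import torch

    np.cosh(2.4)                       # fine: returns a float64 scalar
    # torch.cosh(2.4)                  # TypeError: the input must be a Tensor
    torch.cosh(torch.as_tensor(2.4))   # fine once the scalar is wrapped into a 0-D tensor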

I think we can deal with this at the decorator level, though. The idea here would be to process scalars after all the array-likes have been wrapped, and then compute the result_type from them. Now, it's not clear to me whether this is exactly the algorithm that's described in NEP 50. For example, if I give a list as an input, is it treated as a scalar or an array? I'd suspect it's an array, but I'm not sure. Also, what if all inputs are scalars? Is this just for binary ufuncs, or for all the functions in PyTorch?
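A rough sketch of what such a decorator-level helper could look like (illustrative only: result_type_weak is a made-up name, and bool-tensor and overflow/warning handling are left out):

    import torch

    def result_type_weak(tensors, scalars):
        # Tensors (including 0-D ones) promote among themselves by dtype alone.
        dtype = tensors[0].dtype
        for t in tensors[1:]:
            dtype = torch.promote_types(dtype, t.dtype)
        # Python scalars are "weak": they only bump the result to the default dtype
        # of their own category when the tensor dtype is of a lower category.
        for s in scalars:
            if isinstance(s, complex) and not dtype.is_complex:
                dtype = torch.promote_types(dtype, torch.complex64)
            elif isinstance(s, float) and not (dtype.is_floating_point or dtype.is_complex):
                dtype = torch.get_default_dtype()
        return dtype

    a = torch.ones(3, dtype=torch.uint8)
    result_type_weak([a], [2])     # torch.uint8   (a Python int does not upcast)
    result_type_weak([a], [2.5])   # torch.float32 (torch's default float dtype)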

@rgommers where was this behaviour implemented in NumPy? We can probably adapt the logic from that PR to our wrapper.

@rgommers
Member

It's hard to point at a single place. This was the main PR: numpy/numpy#21626.

I think we're in pretty decent shape though: some of these details may still change for NumPy 2.0, and it's still behind a switch. The key thing was to not emulate NumPy's current set of casting rules. If the above are the only exceptions, we can probably punt on those "scalar + below-default-precision-array" cases for now.

@lezcano
Collaborator Author

lezcano commented Apr 24, 2023

Let's just add a point to the list of things that are not quite the same as in NumPy, then.

@ev-br
Collaborator

ev-br commented Apr 25, 2023

Added a note to #73; also gh-125 adds a test with xfails.

@ev-br
Collaborator

ev-br commented May 19, 2023

Done in gh-140.

ev-br closed this as completed on May 19, 2023