Unsupported NumPy features and other differences w.r.t. NumPy #73
This is something that …

Thanks, edited! Have to admit, I'm not up to date on the various …

Is there any test case like: … That requires …

I'm not quite sure what the desired behavior in your snippet above is if … On a tangentially related note, is there a way to control how NumPy arrays and our wrapper arrays mix?

The exact same: you either get a …

There is, e.g. through implementing …

Re your example: is the expected behavior Python/NumPy/PyTorch version dependent? (I sincerely hope it shouldn't be!) Here's what I get locally with … In case it matters, …

No, I just edited the wrong line during our call. Fixed now by swapping …

The current behavior is that the wrapper ndarray wins in both … (see the sketch after this thread). Here I think what happens is that …
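A minimal sketch of the mixing question above. The import name `torch_np` is an assumption (adjust to wherever the wrapper module lives); the expected output follows the comment above that the wrapper ndarray wins in both operand orders:

```python
import numpy as np
import torch_np as tnp  # assumed import name for the wrapper

a = np.arange(3)    # plain NumPy ndarray
b = tnp.arange(3)   # wrapper ndarray

# Per the discussion: the wrapper ndarray "wins" regardless of operand order.
print(type(a + b))  # expected: the wrapper's ndarray type
print(type(b + a))  # expected: the wrapper's ndarray type
```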
Linked commit ("…tem__"): "In this case, we copy, but this is part of the set of divergences described in Quansight-Labs/numpy_pytorch_interop#73. This does not work with dynamic shapes, but it's not clear to me what would be the best fix." Pull Request resolved: #107688. Approved by: https://github.com/ezyang. ghstack dependencies: #107687.
- We only aim to support numeric dtypes, which are understood by PyTorch. This rules out `np.longdouble` and `np.clongdouble` (a.k.a. `np.float128` and `np.complex256`, respectively).
- `ndarray` subclasses are out of scope.
- Masked arrays are out of scope.
- NumPy polynomials are out of scope, both `np.poly1d` and `np.polynomial`.
- The `__array_function__` protocol is out of scope. This way, non-default `like=...` arguments raise.
- `__array_interface__` is out of scope.
- The `ndarray.ctypes` attribute is not supported.
- Negative strides: `tnp.flip` and slicing with a negative step return a copy (see the sketch right after this list).
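A minimal sketch of the negative-strides divergence. The NumPy half is standard behavior; the `torch_np` import name is an assumption, and the wrapper half follows from the item above:

```python
import numpy as np
import torch_np as tnp  # assumed import name for the wrapper

a = np.arange(5)
f = np.flip(a)      # NumPy: a view with negative strides
f[0] = 100
print(a[-1])        # 100 -- writing through the view mutates the original

b = tnp.arange(5)
g = tnp.flip(b)     # wrapper: a copy, since PyTorch tensors have no negative strides
g[0] = 100
print(b[-1])        # 4 -- the original is untouched
```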
These differences exist currently, but might be fixable if desired:

- We do not distinguish between 0D arrays and scalars. That is, `tnp.float32(3)` creates a zero-dim array. In our implementation, `np.int32(2)` behaves identically to `np.asarray(2)` (see the snippet below).
- We do not implement value-based casting. This will be deprecated in NumPy 2.0 as per NEP 50.
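A short illustration of the scalars-vs-0D-arrays point; the NumPy half is standard, the wrapper half restates the item above, and the `torch_np` import name is again an assumption:

```python
import numpy as np
import torch_np as tnp  # assumed import name for the wrapper

s = np.float32(3)                  # NumPy: a scalar object, not an ndarray
print(isinstance(s, np.ndarray))   # False

z = tnp.float32(3)                 # wrapper: a zero-dim array
print(z.ndim, z.shape)             # 0 ()

# In the wrapper, dtype "constructors" are array-coercion calls:
# tnp.int32(2) behaves identically to tnp.asarray(2).
```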
- The `__array_wrap__` protocol is currently not implemented.
- gufunc machinery is not implemented, e.g. `axes=[(n,k),(k,m)->(n,m)]` arguments of ufunc objects.
- ufunc methods (`np.add.reduce` etc.) are not implemented (a plain NumPy example follows this item).
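For reference, what the missing ufunc methods do in plain NumPy; in the wrapper one would reach for the equivalent free function (`tnp.sum`, `tnp.cumsum`, ...) instead:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)

# ufunc methods: none of these exist on the wrapper's ufunc objects.
print(np.add.reduce(a, axis=0))        # [3 5 7] -- same as a.sum(axis=0)
print(np.add.accumulate([1, 2, 3]))    # [1 3 6] -- same as np.cumsum
print(np.add.outer([1, 2], [10, 20]))  # [[11 21] [12 22]]
```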
- Fortran-ordered arrays in general, and `order="CFKA"` in various creation functions, are not implemented.
- `numpy.linalg` handles zero-size arrays (sort of) uniformly, and PyTorch doesn't handle these at all. We do not currently implement it.
- Various estimators for the `np.histogram` bin selection are not implemented.
- For `nout=2` ufuncs, `out1=..., out2=...` positional arguments do not work (`out=tuple` kwargs work; see the snippet below).
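A sketch of the `nout=2` point using `np.modf`, a two-output ufunc; the NumPy calls are standard, and the claim about the wrapper restates the item above:

```python
import numpy as np

x = np.array([1.5, 2.25, -3.75])
frac = np.empty_like(x)
integ = np.empty_like(x)

# The out=tuple spelling works in both NumPy and the wrapper:
np.modf(x, out=(frac, integ))
print(frac, integ)   # [ 0.5   0.25 -0.75] [ 1.  2. -3.]

# The positional spelling, np.modf(x, frac, integ), works in NumPy
# but not in the wrapper, per the item above.
```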
- Sorting/ordering of complex data: NumPy defines some ordering for complex values, PyTorch errors out; we follow PyTorch. The relevant functions are `min/max`, `argmin/argmax`, `sort` and `searchsorted`. Cf. min/max for complex inputs #67 for discussion. A small comparison follows this list.
- `tril_indices_from`/`triu_indices_from` return tensors rather than a list of tuples, to avoid a graph break.
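A quick comparison of the complex-ordering divergence; the exact PyTorch error message may vary by version:

```python
import numpy as np
import torch

c = [1 + 1j, 1 - 1j, 0 + 2j]

# NumPy orders complex values lexicographically (real part first, then imaginary):
print(np.sort(np.array(c)))   # [0.+2.j 1.-1.j 1.+1.j]

# PyTorch refuses to order complex values; the wrapper follows PyTorch:
try:
    torch.sort(torch.tensor(c))
except RuntimeError as e:
    print("torch errors out:", e)
```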