Assorted fixes and simplifications #121
Conversation
lezcano
commented
Apr 20, 2023
- Give up trying to imitate the NumPy exceptions altogether
- Remove duplicated round, real, imag
- Remove the implementation of nanmean, as it had a number of issues. If we implement masked reductions, we should probably do them all in one go, reusing the aux functions we used for reductions.
- Minor cosmetic changes
Looks good modulo the `tri{l,u}_indices_from` semantic break. That one might be OK, too, just noting it to make sure we want to do it. ISTM the main usage of these is to actually index something with the result, so if we return a tensor, the next stage for a user is to swear and throw an explicit `tuple()` call. So we're back to the same graph break and only earn a swear from a user?
```diff
-    result = torch.tril_indices(arr.shape[0], arr.shape[1], offset=k)
-    return tuple(result)
+    # Return a tensor rather than a tuple to avoid a graph break
+    return torch.tril_indices(arr.shape[0], arr.shape[1], offset=k)
```
This is a break w.r.t. numpy, so it should be added to the list in #73, I guess.
Also, this should fail on CI, so those tests need to be adjusted.
Semantics-wise, should we not be worried about

```python
In [40]: a = np.arange(6).reshape(3, 2)

In [41]: idx = np.tril_indices_from(a)

In [42]: a[idx]
Out[42]: array([0, 2, 3, 4, 5])

In [43]: a[np.asarray(idx)]
Out[43]:
array([[[0, 1],
        [2, 3],
        [2, 3],
        [4, 5],
        [4, 5]],

       [[0, 1],
        [0, 1],
        [2, 3],
        [0, 1],
        [2, 3]]])
```
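The session above can be reproduced as a short script. This is a sketch (assuming NumPy is available) showing that indexing with the tuple selects individual elements, indexing with the stacked array indexes axis 0 only, and wrapping the stacked array back in a tuple recovers the element-wise selection:

```python
import numpy as np

a = np.arange(6).reshape(3, 2)
idx = np.tril_indices_from(a)  # a tuple of two 1-D index arrays

# Tuple indexing selects the lower-triangle elements.
tri = a[idx]

# A single stacked (2, 5) array instead fancy-indexes axis 0 only,
# producing a (2, 5, 2) array -- a silent semantic change.
stacked = a[np.asarray(idx)]

# Re-wrapping the rows in a tuple recovers the tuple semantics.
recovered = a[tuple(np.asarray(idx))]
assert (tri == recovered).all()
```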
Added to that issue.
> ISTM the main usage of these is to actually index something with the result, so if we return a tensor, the next stage for a user is to swear and throw an explicit `tuple()` call. So we're back to the same graph break and only earn a swear from a user

As discussed in the example above, we should be fine, as it is also possible to index with a tensor while preserving the semantics.
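As a sketch of that indexing pattern (assuming PyTorch), the two rows of the returned index tensor can be used directly as advanced indices, matching NumPy's tuple semantics without materializing a Python tuple:

```python
import torch

a = torch.arange(6).reshape(3, 2)

# tril_indices returns a single (2, n) tensor rather than a tuple.
idx = torch.tril_indices(a.shape[0], a.shape[1], offset=0)

# Indexing with the unpacked rows restores NumPy's element-wise
# (tuple-style) selection of the lower triangle.
vals = a[idx[0], idx[1]]
```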
```python
    out.copy_(result)
    return result


def nanmean():
    raise NotImplementedError
```
Yeah, these all should go. This weirdness was there to be able to port the nanXXX tests, so it should probably be redone in one go, in this PR or a separate one, whichever works.
Addressed the reviews.

LGTM