add torch_np.linalg module #100
Conversation
torch_np/linalg.py (outdated)

```python
def inv(a: ArrayLike):
    a = _atleast_float_1(a)
    try:
        result = torch.linalg.inv(a)
    except torch._C._LinAlgError as e:
        raise LinAlgError(*e.args)
    return result
```
If we want to do this, let's either do it in its own wrapper (separate from `normalizer`, so as not to clutter it) and wrap all the functions with it, or let's just never do it. FWIW, all the linalg functions may throw this error if something goes wrong during the computation.
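A minimal sketch of what such a dedicated wrapper could look like. The exception classes here are stand-ins (the real ones are `torch._C._LinAlgError` and torch_np's `LinAlgError`), and `inv` below is a toy stand-in for `torch.linalg.inv`, used only so the example is self-contained:

```python
import functools

# Stand-in for torch._C._LinAlgError (hypothetical; the real class
# comes from torch).
class _TorchLinAlgError(Exception):
    pass

# Stand-in for the NumPy-compatible LinAlgError exposed by torch_np.
class LinAlgError(Exception):
    pass

def linalg_errors(func):
    """Decorator converting the backend's linalg error into the
    NumPy-style LinAlgError, keeping the per-function bodies clean."""
    @functools.wraps(func)
    def wrapped(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except _TorchLinAlgError as e:
            raise LinAlgError(*e.args) from e
    return wrapped

@linalg_errors
def inv(a):
    # Toy placeholder: raises like the backend would on a singular input.
    if a == 0:
        raise _TorchLinAlgError("singular matrix")
    return 1.0 / a
```

With this shape, each linalg function only needs the `@linalg_errors` decorator instead of its own try/except block.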
If we go down this route, it'll also be a great way to implement the _atleast... behaviour generically, by looking at the inputs of the function that happen to be tensors.
We could also add support for 0-dim inputs generically this way.
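The generic input promotion could be sketched roughly as below. This is a pure-Python illustration with a hypothetical `atleast_1d_args` decorator and a toy `norm` function, not the actual torch_np helpers; the real version would operate on tensors and inspect dtypes:

```python
import functools
import math

def atleast_1d_args(func):
    """Hypothetical sketch: normalize each positional argument so a
    scalar (the 0-dim case) is promoted to a one-element list of
    floats, and any sequence is converted to a list of floats."""
    @functools.wraps(func)
    def wrapped(*args, **kwargs):
        promoted = []
        for a in args:
            if isinstance(a, (int, float)):
                # 0-dim input: wrap into a 1-d container
                promoted.append([float(a)])
            else:
                promoted.append([float(x) for x in a])
        return func(*promoted, **kwargs)
    return wrapped

@atleast_1d_args
def norm(a):
    # Toy stand-in for a linalg function that assumes >= 1-d input.
    return math.sqrt(sum(x * x for x in a))
```

The point is that the promotion logic lives in one place and applies uniformly to every wrapped function, rather than being repeated per function.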
Sorry, this looks good, but I didn't mean to approve it. Let's wait until the (minor) points are addressed and approve it then.
Cool! The 0-dim and the generic implementation can be done (or not, it's fairly low prio) in a different PR.
Added a line to the list at #73 (comment)
Implement linalg mappings. This PR does nearly all of linalg (sans einsum). So far the handling of `torch._C._LinAlgError` is not completely consistent: some functions pass it through unchanged, while others convert it to `LinAlgError`.