
add torch_np.linalg module #100

Merged: 8 commits from linalg into linalg_tests, Apr 3, 2023
Conversation

ev-br (Collaborator) commented on Apr 1, 2023:

Implement linalg mappings. This PR does nearly all of linalg (sans einsum). So far this is not completely consistent with NumPy on two points (a rough sketch of the exception handling follows the list):

  • the type of exceptions: torch raises RuntimeError, sometimes torch._C._LinAlgError; we mostly pass the former through and convert the latter
  • zero-size arrays: NumPy handles them, but torch doesn't seem to, so the corresponding tests are simply xfailed
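A rough sketch of the exception conversion described above (illustrative only; the standalone function below and the use of NumPy's LinAlgError as the exception type are assumptions for a self-contained example, not the code that was merged):

import torch
from numpy.linalg import LinAlgError  # stand-in for torch_np's NumPy-compatible exception type


def solve(a, b):
    a, b = torch.as_tensor(a), torch.as_tensor(b)
    try:
        return torch.linalg.solve(a, b)
    except torch._C._LinAlgError as e:
        # torch's dedicated linalg failure becomes the NumPy-style exception;
        # plain RuntimeErrors propagate unchanged.
        raise LinAlgError(*e.args)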

ev-br changed the title from "WIP: add torch_np.linalg module" to "add torch_np.linalg module" on Apr 2, 2023
ev-br requested a review from lezcano on April 2, 2023, 13:14
lezcano previously approved these changes Apr 3, 2023
Review comment on torch_np/linalg.py, lines 67 to 73:
def inv(a: ArrayLike):
    a = _atleast_float_1(a)
    try:
        result = torch.linalg.inv(a)
    except torch._C._LinAlgError as e:
        raise LinAlgError(*e.args)
    return result
lezcano (Collaborator) commented:
If we want to do this, let's either do it in its own wrapper (separate from normalizer, so as not to clutter it) and wrap all the functions with it, or let's just never do it. FWIW, all the linalg functions may throw this error if there was an error during the computation.
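One way the suggested wrapper could look, as a minimal sketch; the decorator name linalg_errors and the use of NumPy's LinAlgError are hypothetical here, not the PR's actual code:

import functools

import torch
from numpy.linalg import LinAlgError  # hypothetical stand-in for torch_np's exception type


def linalg_errors(func):
    # Wrap a linalg function so that torch._C._LinAlgError raised during the
    # computation is re-raised as the NumPy-compatible LinAlgError.
    @functools.wraps(func)
    def wrapped(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except torch._C._LinAlgError as e:
            raise LinAlgError(*e.args)
    return wrapped


@linalg_errors
def inv(a):
    return torch.linalg.inv(torch.as_tensor(a))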

lezcano (Collaborator) added:
If we go down this route, it'll be a great way to also implement the _atleast... behaviour generically, by looking at the number of inputs of the function that happen to be tensors.

We could also add support for 0-dim inputs generically this way.
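A sketch of how such a generic wrapper might inspect tensor arguments; the name promote_tensor_args is hypothetical and the body is only a crude stand-in for the _atleast_float_* behaviour, not the implementation that was discussed:

import functools

import torch


def promote_tensor_args(func):
    # Illustrative only: promote integer/bool tensors to the default float dtype
    # and lift 0-dim tensors to 1-dim before calling the wrapped linalg function.
    @functools.wraps(func)
    def wrapped(*args, **kwargs):
        def prep(x):
            if not isinstance(x, torch.Tensor):
                return x
            if not (x.dtype.is_floating_point or x.dtype.is_complex):
                x = x.to(torch.get_default_dtype())
            if x.ndim == 0:
                x = x[None]  # minimal 0-dim support, as mentioned above
            return x
        return func(*(prep(a) for a in args), **kwargs)
    return wrapped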

lezcano self-requested a review on April 3, 2023, 11:12
lezcano (Collaborator) left a comment:


Sorry, this looks good, but I didn't mean to approve it. Let's wait until the (minor) points are addressed and let's approve it then.

lezcano dismissed their stale review on April 3, 2023, 11:14: "didn't mean to approve"

lezcano (Collaborator) left a comment:


Cool! The 0-dim and the generic implementation can be done (or not, it's fairly low prio) in a different PR.

ev-br merged commit a29c8da into linalg_tests on Apr 3, 2023
ev-br deleted the linalg branch on April 3, 2023, 12:53
ev-br (Collaborator, Author) commented on Apr 3, 2023:

> The 0-dim and the generic implementation can be done (or not, it's fairly low prio) in a different PR.

Added a line to the list at #73 (comment)
