Sketch of dim-ed tensors #407
Conversation
Added a test
return xtensor_constant(x, **kwargs)


def as_xtensor_variable(x, name=None):
would this have dims?
This is analogous to as_tensor_variable, which doesn't allow you to pass constructor information. Right now it only accepts xarrays or things that are already xtensor variables.
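For illustration, here's a minimal runnable sketch of the dispatch being described; `XTensorVariable` and `xtensor_constant` below are simplified toy stand-ins, not the PR's actual implementations:

```python
import xarray as xr


class XTensorVariable:
    """Toy stand-in for the xtensor variable type sketched in this PR."""

    def __init__(self, values, dims, name=None):
        self.values = values
        self.dims = tuple(dims)
        self.name = name


def xtensor_constant(x: "xr.DataArray", name=None):
    # A DataArray already carries its dims, so no extra information is needed.
    return XTensorVariable(x.values, x.dims, name=name)


def as_xtensor_variable(x, name=None):
    # Pass through things that are already xtensor variables.
    if isinstance(x, XTensorVariable):
        return x
    # Accept xarray objects, which bring their own dims.
    if isinstance(x, xr.DataArray):
        return xtensor_constant(x, name=name)
    raise TypeError(f"Cannot convert {type(x).__name__} to an xtensor variable")
```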
There is an Op xtensor_from_tensor that can be used to add dims to a vanilla tensor that only has shape.
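As a sketch of how that could look: the real `xtensor_from_tensor` signature isn't shown in this thread, so the `(x, dims)` form below is an assumption, reusing the toy `XTensorVariable` from the sketch above:

```python
import numpy as np


def xtensor_from_tensor(x, dims):
    # Assumed signature: attach dim names to a plain tensor that only has shape.
    x = np.asarray(x)
    if len(dims) != x.ndim:
        raise ValueError(f"Expected {x.ndim} dim names, got {len(dims)}")
    return XTensorVariable(x, dims=dims)


# A (2, 3) array gains named dims:
xt = xtensor_from_tensor(np.zeros((2, 3)), dims=("batch", "channel"))
assert xt.dims == ("batch", "channel")
```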
would this have dims?
as_xtensor_variable([1, 2, 3], dims=("channel", ))
Yes, we can allow that.
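Allowing that could mean falling back to `xtensor_from_tensor` when `dims` is passed explicitly; again just a sketch under the same toy definitions as above, not the PR's actual code:

```python
def as_xtensor_variable(x, name=None, dims=None):
    if isinstance(x, XTensorVariable):
        return x
    if isinstance(x, xr.DataArray):
        return xtensor_constant(x, name=name)
    if dims is not None:
        # Fall back for raw lists/arrays when dims are passed explicitly.
        return xtensor_from_tensor(x, dims=dims)
    raise TypeError(f"Cannot convert {type(x).__name__} without dims")


v = as_xtensor_variable([1, 2, 3], dims=("channel",))
assert v.dims == ("channel",)
```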
Force-pushed from 592c699 to a3db755
Force-pushed from 2d908df to 6e88797
Force-pushed from 2742594 to 71d31ba
Co-authored-by: Oriol Abril-Pla <[email protected]>
Is this the same checklist as a new backend?

Slightly different, why?

Curious
@OriolAbril I moved this to a pytensor branch in #1411 so that PRs show up here instead of on my fork
Moved to #1411