
Sketch of dim-ed tensors #407


Closed · wants to merge 4 commits

Conversation

ricardoV94 (Member) commented Aug 2, 2023

Moved to #1411

@ricardoV94 ricardoV94 changed the title Sketch of named tensors Sketch of dimmed tensors Aug 3, 2023
@ricardoV94 ricardoV94 changed the title Sketch of dimmed tensors Sketch of dim-ed tensors Aug 3, 2023
ricardoV94 (Member Author):

Added a test

return xtensor_constant(x, **kwargs)


def as_xtensor_variable(x, name=None):
Contributor:

would this have dims?

ricardoV94 (Member Author) · Jul 23, 2024:

This is analogous to as_tensor_variable, which doesn't allow you to pass constructor information. Right now it only accepts xarrays or things that are already xtensor variables.
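
A minimal sketch of the conversion rules being described, assuming the as_xtensor_variable name from the diff above is importable from this PR's module (the import path is hypothetical):

```python
import numpy as np
import xarray as xr

# Hypothetical import path; the function is defined in this PR.
from pytensor.xtensor import as_xtensor_variable

# Inputs that already carry dims convert directly...
da = xr.DataArray(np.zeros((2, 3)), dims=("row", "col"))
x = as_xtensor_variable(da)  # dims inferred from the xarray: ("row", "col")
y = as_xtensor_variable(x)   # already an xtensor variable: passed through

# ...while a plain list carries no dims, so (right now) it is rejected,
# mirroring how as_tensor_variable takes no constructor information.
# as_xtensor_variable([1, 2, 3])  # would raise
```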

ricardoV94 (Member Author):

There is an Op, xtensor_from_tensor, that can be used to add dims to a vanilla tensor that only has shape.
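
A hedged sketch of the escape hatch mentioned here; the import path and the dims keyword are assumptions about this PR's API, not a documented signature:

```python
import pytensor.tensor as pt

from pytensor.xtensor import xtensor_from_tensor  # hypothetical path

x = pt.matrix("x")  # a vanilla tensor: it has a shape, but no dims
# Attach dims explicitly to lift it into an xtensor (signature assumed):
xt = xtensor_from_tensor(x, dims=("row", "col"))
```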

return xtensor_constant(x, **kwargs)


def as_xtensor_variable(x, name=None):
Contributor:

would this have dims?

as_xtensor_variable([1, 2, 3], dims=("channel", ))

ricardoV94 (Member Author):

Yes, we can allow that.
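
One way the dims kwarg could be threaded through, sketched under the assumption that the xtensor_from_tensor Op from the earlier thread handles the lift; this is a guess at the shape of the change, not the PR's code:

```python
import pytensor.tensor as pt

# Names from the diff and the earlier thread; import path hypothetical:
from pytensor.xtensor import xtensor_constant, xtensor_from_tensor


def as_xtensor_variable(x, name=None, dims=None):
    # Sketch only: inputs that already carry dims (xarray objects,
    # xtensor variables) keep the existing path from the diff, while
    # dim-less inputs require an explicit dims argument.
    if hasattr(x, "dims"):
        return xtensor_constant(x)
    if dims is None:
        raise TypeError("dims is required for inputs without their own dims")
    # Assumed helper from the earlier thread: attach dims to a plain tensor.
    return xtensor_from_tensor(pt.as_tensor_variable(x), dims=dims)
```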

@ricardoV94 ricardoV94 force-pushed the named_tensor branch 3 times, most recently from 592c699 to a3db755 Compare July 23, 2024 11:21
@ricardoV94 ricardoV94 force-pushed the named_tensor branch 2 times, most recently from 2d908df to 6e88797 Compare May 21, 2025 12:59
@ricardoV94 ricardoV94 added the "enhancement" (New feature or request) and "help wanted" (Extra attention is needed) labels May 21, 2025
@ricardoV94 ricardoV94 force-pushed the named_tensor branch 6 times, most recently from 2742594 to 71d31ba Compare May 22, 2025 12:27
Co-authored-by: Oriol Abril-Pla <[email protected]>
williambdean (Contributor):

Is this the same checklist as a new backend?

ricardoV94 (Member Author):

> Is this the same checklist as a new backend?

Slightly different, why?

williambdean (Contributor):

Curious

ricardoV94 (Member Author):

@OriolAbril I moved this to #1411, a pytensor branch, so that PRs show up here instead of on my fork.
