Support adding tensor of different orders #36

Open
fredrikbk opened this issue Mar 19, 2017 · 5 comments
fredrikbk (Contributor) commented Mar 19, 2017

What is the most reasonable semantics for a(i) = b(i) + c? Does it mean:

  1. that we add c to every non-zero component in b, or
  2. that we add c to every component in b?

This is the same question as what is the most reasonable semantics for A(i,j) = B(i,j) + c(i).

If it is the latter, we need to stop handling scalars as a special case that is ignored in the merge lattices and iteration schedules and simply multiplied in when computing. Instead, we must merge with the scalar, treating it as a dense iteration space.
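To make the two options concrete, here is a plain-Python sketch (not taco code) of a sparse vector stored as a dict of non-zero entries, with hypothetical names:

```python
# Sparse vector b of logical length 5, stored as {index: value}.
b = {0: 2.0, 1: 4.0, 3: 1.0}
n = 5
c = 1.0

# Semantics 1: add c only to the stored non-zero components of b.
sem1 = {i: v + c for i, v in b.items()}

# Semantics 2: add c to every component; the implicit zeros become c,
# so the result is dense.
sem2 = [b.get(i, 0.0) + c for i in range(n)]

print(sem1)  # {0: 3.0, 1: 5.0, 3: 2.0}
print(sem2)  # [3.0, 5.0, 1.0, 2.0, 1.0]
```

Under semantics 2 the scalar behaves like a dense operand, which is why it would have to participate in the merge rather than be special-cased.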

@fredrikbk fredrikbk added the question Indicates that an issue, pull request, or discussion needs more information label Mar 19, 2017
fredrikbk (Contributor, Author) commented Mar 19, 2017

Test case for the second semantics:

diff --git a/test/expr_storage-tests.cpp b/test/expr_storage-tests.cpp
index d7be97b..f164fac 100644
--- a/test/expr_storage-tests.cpp
+++ b/test/expr_storage-tests.cpp
@@ -185,6 +185,18 @@ INSTANTIATE_TEST_CASE_P(vector_scalar, expr,
                       }
                     },
                     {2.0, 4.0, 2.0, 2.0, 5.0}
+                    ),
+           TestData(Tensor<double>("a",{5},Format({Sparse})),
+                    {i},
+                    d5a("b",Format({Sparse}))(i) + da("c",Format())(),
+                    {
+                      {
+                        // Sparse index
+                        {0,4},
+                        {0,1,2,3,4}
+                      }
+                    },
+                    {2.0, 4.0, 2.0, 2.0, 5.0}
                     )
            )
 );

shoaibkamil (Contributor) commented
This is how NumPy does it: https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html.

MATLAB's latest release (R2016b) also seems to have adopted broadcasting: https://nickhigham.wordpress.com/2016/09/20/implicit-expansion-matlab-r2016b/
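For reference, NumPy's broadcasting behaves like semantics 2: a scalar (order-0 array) is added to every component, and lower-order operands are stretched by aligning trailing dimensions. A small illustration:

```python
import numpy as np

# A scalar is broadcast to every component of the other operand
# (semantics 2), including the zeros.
b = np.array([1.0, 3.0, 0.0, 1.0, 4.0])
v = b + 1.0                      # [2. 4. 1. 2. 5.]

# The same rule stretches a vector across the rows of a matrix. Note
# that NumPy aligns trailing dimensions, so this computes
# A(i,j) = B(i,j) + c(j).
B = np.zeros((2, 3))
c = np.array([1.0, 2.0, 3.0])
M = B + c                        # [[1. 2. 3.], [1. 2. 3.]]
```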

stephenchouca (Contributor) commented
If we adopt the first semantics, then would that not imply that the result of the computation can change depending on which storage format is used to store b, even if b is unchanged mathematically? That doesn't seem very intuitive or reasonable in my opinion.
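This point can be demonstrated with a small plain-Python sketch (hypothetical names): the same mathematical vector, stored with and without an explicit zero, yields different results under semantics 1.

```python
# Two storages of the mathematically identical vector [2.0, 0.0, 3.0]:
dense_nonzeros  = {0: 2.0, 1: 0.0, 2: 3.0}  # dense format stores the zero
sparse_nonzeros = {0: 2.0, 2: 3.0}          # sparse format drops it

def semantics1(nz, n, c):
    # Semantics 1: add c only where a value is physically stored.
    out = [0.0] * n
    for i, v in nz.items():
        out[i] = v + c
    return out

print(semantics1(dense_nonzeros, 3, 1.0))   # [3.0, 1.0, 4.0]
print(semantics1(sparse_nonzeros, 3, 1.0))  # [3.0, 0.0, 4.0] -- differs!
```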

@fredrikbk fredrikbk modified the milestone: Beta release Mar 19, 2017
fredrikbk (Contributor, Author) commented
@stephenchouca that is a very good point and I think sufficient to adopt the second semantics.

@shoaibkamil The NumPy/MATLAB documents don't seem to consider what happens when broadcasting over a sparse dimension, but it seems most natural that broadcasting is over a dimension (semantics 2) and not over the stored non-zero values (semantics 1). #6 also discusses broadcast semantics.

It seems we all agree that semantics 2 is correct.

@fredrikbk fredrikbk removed the question Indicates that an issue, pull request, or discussion needs more information label Mar 21, 2017
@fredrikbk fredrikbk self-assigned this Mar 21, 2017
@fredrikbk fredrikbk changed the title Adding tensor of different dimensions Support adding tensor of different orders Mar 21, 2017
@fredrikbk fredrikbk removed this from the Beta release milestone May 12, 2017
fredrikbk (Contributor, Author) commented
We should also support explicitly multiplying by a scalar/variable:

a(i) = 0.25 * b(i)
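Scalar multiplication, unlike scalar addition, does not raise the dense-space question: since 0.25 * 0 == 0, iterating only the stored non-zeros is exact and the result keeps the sparsity of b. A plain-Python sketch (hypothetical names, not taco code):

```python
# Sparse vector b of logical length 5, stored as {index: value}.
b = {0: 2.0, 3: 8.0}

# a(i) = 0.25 * b(i): scaling preserves sparsity because the implicit
# zeros stay zero, so the scalar can remain outside the merge.
a = {i: 0.25 * v for i, v in b.items()}
print(a)  # {0: 0.5, 3: 2.0}
```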

rohany added a commit that referenced this issue Oct 2, 2021
*: add ability to up front create partitions required for operations