
Eigen expressions #23

Open

wants to merge 4 commits into base: master
Conversation

@t4c1 (Contributor) commented May 22, 2020:

This is still a work in progress. Anyway, @bbbales2 asked me to write this, and I agree it would be beneficial to have it written down somewhere.

Pinging some people that might be interested in looking this over: @bbbales2 @andrjohns @SteveBronder

@bbbales2 (Member) left a comment:

Some comments! Thanks!

# Reference-level explanation
[reference-level-explanation]: #reference-level-explanation

Some expressions can be very expensive to compute (for example, matrix multiplication), so they should never be evaluated more than once. If an expression argument is used more than once in a function, it should be evaluated first and then the computed value should be used. On the other hand, some expressions are trivial to compute (for example, a block). It is better not to evaluate these, as evaluation would copy the underlying data. Eigen solves this dilemma with the `Eigen::Ref` class. If a trivial expression is assigned to a `Ref`, the `Ref` simply references it. If the expression involves actual computation, assigning it to a `Ref` evaluates it and stores the result in the `Ref`. In either case the `Ref` can then be used more or less the same way as a matrix.
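
To make this concrete, here is a minimal illustrative sketch (not part of the design doc) of how `Eigen::Ref` treats the two kinds of expressions; the function name is made up and plain double matrices are assumed:

#include <Eigen/Dense>

// Accepts a plain matrix, a trivial expression (referenced directly), or a
// costly expression (evaluated once into the Ref's internal storage).
double sum_twice(const Eigen::Ref<const Eigen::MatrixXd>& m) {
  // Inside the function m behaves like a plain matrix; using it twice does
  // not re-evaluate the original expression.
  return m.sum() + m.sum();
}

int main() {
  Eigen::MatrixXd a = Eigen::MatrixXd::Random(4, 4);
  sum_twice(a);                    // plain matrix: referenced, no copy
  sum_twice(a.block(0, 0, 2, 2));  // trivial expression: referenced, no copy
  sum_twice(a * a);                // product: evaluated once, then referenced
  return 0;
}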

Member:

What's the difference between an `Eigen::Ref` and an expression? Are `Eigen::Ref`s expressions?

Contributor Author @t4c1, May 22, 2020:

I don't think there is any mathematically exact definition of what an expression is. For the purposes of functions accepting expressions, `Eigen::Ref` counts as one, and so do `Eigen::Matrix` and `Eigen::Map`.

Member:

Oh, so `Eigen::Ref` will hold things like "this is the sum of two matrices, A and B".

Contributor Author:

A sum is not trivial, so `Ref` would evaluate it and hold the result.

# Unresolved questions
[unresolved-questions]: #unresolved-questions

I can't think of any right now.

Member:
Can you add an example of a function that used to take an Eigen::Matrix<T, R, C> and return an Eigen::Matrix<T, R, C> but now it takes in an expression and possibly returns one too?

Is this a simple conversion?

Member:
Should we be returning auto? Or returning everything through one of those Ref things?

Member:
Should we be using those universal reference things @SteveBronder does? Or should we be doing Eigen::MatrixBase<Derived>?

Contributor Author:
> Should we be returning auto? Or returning everything through one of those Ref things?

Right now we still need to return a matrix from functions exposed to the language. Later we will be returning `auto` from most functions (or, if you don't like `auto`, you can specify the exact type of the expression).

> Should we be using those universal reference things @SteveBronder does?

He tends to use them everywhere, including where they offer no benefit and just make the code more complicated. In most places we should not. In general this has nothing to do with expressions; if you are interested in more details, look up perfect forwarding.

> Or should we be doing Eigen::MatrixBase?

We could use that, but then we would have to call `.derived()` on every argument. It is simpler to use plain templates and restrict input types with requires.
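
For context, a rough sketch of the two signature styles being contrasted here (the function names are made up; `require_eigen_t` is assumed to be the trait from Stan Math's meta headers):

#include <Eigen/Dense>
#include <stan/math/prim/meta.hpp>

// Style 1: constrain through Eigen's CRTP base. To hand the concrete
// expression type to other templated code, we have to go through .derived().
template <typename Derived>
double total_base(const Eigen::MatrixBase<Derived>& m) {
  const Derived& concrete = m.derived();
  return concrete.sum();
}

// Style 2 (preferred above): a plain template restricted with a requires
// trait, so the argument already has its concrete expression type.
template <typename Mat, typename = stan::require_eigen_t<Mat>>
double total_plain(const Mat& m) {
  return m.sum();
}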

Contributor Author @t4c1, May 22, 2020:

I added an example. If you are interested in more examples or in more complex functions, see what has already been done under issue stan-dev/math#1470.

# Unresolved questions
[unresolved-questions]: #unresolved-questions

I can't think of any right now.

Member:
Add Rok's example about stanc3 somewhere here. Like:

matrix[N, N] A = B * C;
matrix[N, N] D = A * C + A * B;

Still works as expected in Stan as long as the compiler makes A a full type, which causes the copy to happen.

# Drawbacks
[drawbacks]: #drawbacks

Code complexity increases. Performance-affecting bugs become more likely, for example when an input expression is accidentally evaluated multiple times.

Member:
Hmm, somehow the drawback is that bugs keep popping up in unexpected places.

I guess if the code magically got more complex to handle that, that's no problem. It seems like the problem is the code doesn't magically get more complicated and we get bugs lol.

Anyway I'm not really adding anything here just thinking out loud.

# Rationale and alternatives
[rationale-and-alternatives]: #rationale-and-alternatives

- The only alternative I know of is not changing anything and still evaluating expressions everywhere.

Member:
I guess this should just be: "The alternative is to stay with non-expression types. The downside to this is the extra time spent loading/storing matrices in simple operations."

# Unresolved questions
[unresolved-questions]: #unresolved-questions

I can't think of any right now.

Member:
What meta-programs do we need to support this?

If I have an expression type T, how do I get:

  1. The non-expression type
  2. The scalar type
  3. is_var, etc.

Or maybe all the regular programs for 2, 3, etc. work and we just need a new one for 1? (or I take it plain_type_t solves 1)?

Contributor Author:
> (or I take it plain_type_t solves 1)?

Exactly.

For 2 and 3 we need nothing new. What we already have works with expressions.
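
For illustration, a small sketch of how the existing traits answer 1-3 for one concrete expression type (assuming the spellings in the math repo's meta headers: `plain_type_t`, `scalar_type_t`, `is_var`):

#include <Eigen/Dense>
#include <stan/math/prim/meta.hpp>
#include <type_traits>
#include <utility>

// The type of the expression "MatrixXd + MatrixXd".
using Expr = decltype(std::declval<Eigen::MatrixXd>() + std::declval<Eigen::MatrixXd>());

// 1. the non-expression (plain) type
static_assert(std::is_same<stan::plain_type_t<Expr>, Eigen::MatrixXd>::value, "plain type");
// 2. the scalar type
static_assert(std::is_same<stan::scalar_type_t<Expr>, double>::value, "scalar type");
// 3. is_var and friends work on the scalar as before
static_assert(!stan::is_var<stan::scalar_type_t<Expr>>::value, "is_var on the scalar");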

@andrjohns (Contributor):
It might also be worth mentioning that we have to be more aware of scoping when returning expressions. Like with [this issue](https://github.com/stan-dev/math/issues/1775), where objects falling out of scope can cause the function to silently return the wrong result.

@t4c1 (Contributor Author) commented Jun 17, 2020:

Can this be merged or should I add anything else?

@bbbales2 (Member):

@t4c1 I don't think we should close it, but I also don't want to get in the way of things. The problem with development here is it's going into the Math library in tiny pieces, and so it is hard for me to track.

I haven't been able to spend time on the stuff you're doing like I'd like to to understand it, but my main concern with this stuff is (and I know you share these concerns) it keeps breaking in unexpected ways (that thing @andrjohns provided being the last example).

So the holder pull request was a response to this (stan-dev/math#1914) and that's great and all, but there still might be something else.

In particular, the pull requests coming down the line from this: https://github.com/stan-dev/math/pull/1945/files

I think the thing that we're missing here is that we should be testing all our matrix functions with matrix expressions, and we should be testing anything that can return a matrix expression in a special way as well. The only way this isn't impossibly complicated is going through test framework stuff.

@t4c1 (Contributor Author) commented Jun 18, 2020:

> it keeps breaking in unexpected ways

I am sorry for that, but the things I am changing touch practically the entire library. I don't have a perfect understanding of all the details and there are too many to keep in mind at once. I don't have a solution here. All I can say is that I am confident I can fix whatever I break.

> I think the thing that we're missing here is that we should be testing all our matrix functions with matrix expressions, and we should be testing anything that can return a matrix expression in a special way as well. The only way this isn't impossibly complicated is going through test framework stuff.

I haven't talked about that much on the forums, but I am aware of the issue. I fully plan on testing that every function works with expressions before that is used in Stan (via functions returning expressions).

My idea so far is to add (yet another) testing framework that automatically generates tests for all functions, probably by parsing the list of function signatures in Stan (https://raw.githubusercontent.com/stan-dev/stanc3/master/test/integration/signatures/stan_math_sigs.expected). As far as I know this is fine for prim and rev instantiations, but I am not sure about fwd; I will tackle that problem when I get there. Every function will be tested to check that it can be instantiated with expressions and that the results computed from expressions match those computed from plain matrices. I think we could also test that functions evaluate each expression only once.

Anyway my plan so far is to start working on that once all functions are generalized to accept expressions.
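
As a rough illustration of the kind of check such generated tests would perform (a hand-written sketch, assuming `stan::math::add` has already been generalized to accept expression arguments):

#include <gtest/gtest.h>
#include <stan/math/prim.hpp>

TEST(ExpressionTests, add_matches_plain_result) {
  Eigen::MatrixXd a = Eigen::MatrixXd::Random(3, 3);
  Eigen::MatrixXd b = Eigen::MatrixXd::Random(3, 3);
  // reference result computed from plain matrices
  Eigen::MatrixXd plain = stan::math::add(a, b);
  // the same computation with expression arguments (blocks spanning each matrix)
  Eigen::MatrixXd expr = stan::math::add(a.block(0, 0, 3, 3), b.block(0, 0, 3, 3));
  EXPECT_TRUE(plain.isApprox(expr));
}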

@bbbales2 (Member):

> I am sorry for that, but the things I am changing touch practically the entire library

Yeah I'm sorry I can't stay focused on this long enough to keep checking it lol. It's like an organizational problem of how complex things like this get in -- not an individual thing.

> Anyway my plan so far is to start working on that once all functions are generalized to accept expressions.

That's backwards, right? Just do the tests first, and then it's easy to accept functions or not. If we do functions first, we're in a constant nail-biting situation of not knowing if things will work.

I think these tests can all exist at the math level. We just need to pass in expressions, let those expressions go out of scope, call gradients, and make sure we're getting the things we expect and no segfaults.

@t4c1 (Contributor Author) commented Jun 18, 2020:

> I think these tests can all exist at the math level.

I agree that would be preferable. What I am missing at the math level (or does it exist somewhere?) is a list of all the signatures we are supporting (especially for distributions, where that is not even obvious when looking at the code). I like the linked file because it is part of the compiler and will be updated whenever a new signature is added.

> That's backwards, right?

Yeah, it is. If I add some automatic testing now it will fail for all the functions I have not touched yet. I would really like to avoid listing all the signatures it should test by hand, as that approach is prone to mistakes. If you have a good idea of how to add it now, I am open to suggestions.

@bbbales2 (Member):

> What I am missing at the math level is a list of all signatures we are supporting

Well that's where the testing frameworks are handy -- I think you'd just use them, and then it's up to Math as a repository to make sure everything is tested. Like here: https://github.com/stan-dev/math/blob/develop/test/unit/math/mix/fun/determinant_test.cpp#L40

The input arguments are matrices, and so certain things happen. One of those things would now be providing the input arguments as expressions (maybe add and subtract the identity matrix or something).

The difficult thing is that right now the test framework for distributions is in test/prob. We could add some expression tests there, or add basic expect_ad-style distribution tests in test/math/mix for all the prob functions. The second is probably harder.

@t4c1 (Contributor Author) commented Jun 18, 2020:

AFAIK `expect_ad` is not used in the tests of all functions. The framework for distributions only tests distributions. Here I would need to test all functions in Stan Math that can accept expressions.

@bbbales2 (Member):

Right, but expressions only come up with matrix/vector/row_vector types. All functions that use matrices should be tested with expect_ad.

> AFAIK expect_ad is not used in the tests of all functions.

I think at this point, if they don't, they're either one of the specially licensed non-high-order-autodiff functions (algebra_solver, integrate_ode_*, integrate_1d), or it's a bug.

@t4c1 (Contributor Author) commented Jun 18, 2020:

I made a Python script that searches the mix tests of functions:

import os

fun_folder = r"F:\Users\tadej\Programming\bstatcomp\stan-math\stan\math\prim\fun"
funs = os.listdir(fun_folder)
test_folder = r"F:\Users\tadej\Programming\bstatcomp\stan-math\test\unit\math\mix\fun"
tests = os.listdir(test_folder)

all_mat = []
no_test = []
no_expect_ad=[]
for fun in funs:
    fun_code = open(fun_folder + "\\" + fun).read()
    if "Eigen" not in fun_code:
        continue
    all_mat.append(fun)
    for test in tests:
        if fun[:-4] == test[:-9]:
            test_code = open(test_folder + "\\" + test).read()
            if "expect_ad" not in test_code and \
                    "stan::test::expect" not in test_code:
                no_expect_ad.append(fun)
            break
    else:
        no_test.append(fun)

print("fine", len(all_mat)-len(no_test)-len(no_expect_ad))
print("no_test", len(no_test),  no_test)
print("no_expect_ad", len(no_expect_ad), no_expect_ad)

Here are the results:

fine 121
no_test 66 ['autocorrelation.hpp', 'autocovariance.hpp', 'chol2inv.hpp', 'cholesky_corr_free.hpp', 'cholesky_factor_constrain.hpp', 'cholesky_factor_free.hpp', 'corr_matrix_free.hpp', 'cov_matrix_constrain_lkj.hpp', 'cov_matrix_free.hpp', 'cov_matrix_free_lkj.hpp', 'csr_extract_u.hpp', 'csr_extract_v.hpp', 'csr_extract_w.hpp', 'csr_matrix_times_vector.hpp', 'csr_to_dense_matrix.hpp', 'divide_columns.hpp', 'Eigen.hpp', 'factor_cov_matrix.hpp', 'factor_U.hpp', 'get.hpp', 'gp_dot_prod_cov.hpp', 'gp_exponential_cov.hpp', 'gp_exp_quad_cov.hpp', 'gp_matern32_cov.hpp', 'identity_matrix.hpp', 'LDLT_factor.hpp', 'linspaced_array.hpp', 'linspaced_row_vector.hpp', 'linspaced_vector.hpp', 'log_mix.hpp', 'make_nu.hpp', 'MatrixExponential.h', 'matrix_exp_action_handler.hpp', 'max_size_mvt.hpp', 'num_elements.hpp', 'ones_row_vector.hpp', 'ones_vector.hpp', 'one_hot_row_vector.hpp', 'one_hot_vector.hpp', 'ordered_free.hpp', 'positive_ordered_free.hpp', 'promote_elements.hpp', 'promote_scalar.hpp', 'read_corr_L.hpp', 'read_corr_matrix.hpp', 'read_cov_L.hpp', 'read_cov_matrix.hpp', 'simplex_free.hpp', 'size_mvt.hpp', 'sort_asc.hpp', 'sort_desc.hpp', 'sort_indices_asc.hpp', 'sort_indices_desc.hpp', 'to_array_1d.hpp', 'to_array_2d.hpp', 'to_matrix.hpp', 'to_ref.hpp', 'to_row_vector.hpp', 'to_vector.hpp', 'typedefs.hpp', 'uniform_simplex.hpp', 'unit_vector_free.hpp', 'welford_covar_estimator.hpp', 'welford_var_estimator.hpp', 'zeros_row_vector.hpp', 'zeros_vector.hpp']
no_expect_ad 18 ['accumulator.hpp', 'append_array.hpp', 'assign.hpp', 'cols.hpp', 'cov_exp_quad.hpp', 'dims.hpp', 'fill.hpp', 'get_base1.hpp', 'get_base1_lhs.hpp', 'gp_periodic_cov.hpp', 'initialize.hpp', 'resize.hpp', 'rows.hpp', 'size.hpp', 'sort_indices.hpp', 'stan_print.hpp', 'value_of.hpp', 'value_of_rec.hpp']

Many of these functions should be tested with expressions, so we cannot rely on `expect_ad`.

@bbbales2 (Member):

Good script. You're right that there are functions here that need trickiness to handle expressions but wouldn't need high-order autodiff tested.

dim, for instance.

Well since we can't lean on that, I guess we should just be adding tests as we go? Given the pull requests are on distributions, could we add something to test/prob?

I guess also, does anything other than value_of return an expression right now?

This is probably in the design doc, but the way this is working is everything will be able to accept an expression first before functions start returning expressions? Or is there no separation?

@bob-carpenter (Collaborator):

There's really no way to test that things work with expressions in general because there are lots of different expression types. The bugs are likely to come in from returning expression templates more than accepting them as input.

It'd be great to replace the code generation framework for testing distributions with something templated that'd play nicely with our other tests.

And yes, we're still way short of having full sets of tests for every function. It'd also be great to fix that.

@bbbales2 (Member):

> There's really no way to test that things work with expressions in general because there are lots of different expression types.

Right, there's that problem too.

How would you test these things?

There's a few different things I can think of:

  1. The code compiles with an expression input
  2. The values work with an expression input
  3. Gradients work with an expression input
  4. Gradients work with an expression input when the input variables to the expression have gone out of scope.

In all of these, 'an expression' is just any single expression type. @t4c1, what does it mean to be an expression? Is there a specific interface these things have?

@t4c1 (Contributor Author) commented Jun 19, 2020:

> There's really no way to test that things work with expressions in general because there are lots of different expression types.

Actually the number is infinite. Checking that it works with just one expression type is, in my opinion, enough. If it works with one, it is very likely to work with others. As things were before I started working on this, most functions would not even compile if given an expression argument.

> It'd be great to replace the code generation framework for testing distributions with something templated that'd play nicely with our other tests.
> And yes, we're still way short of having full sets of tests for every function. It'd also be great to fix that.

While I agree it would be nice to improve testing in general, I am working on quite a number of things already.

> Well since we can't lean on that, I guess we should just be adding tests as we go? Given the pull requests are on distributions, could we add something to test/prob?

This is already touching almost every function in Stan Math. I would really prefer to avoid also touching every test. Since we can automate test generation and we have a list of signatures, why not use it? It will be much simpler and therefore also less bug-prone. Another benefit of this approach is that any function signature added in the future is automatically tested with zero developer effort.

> How would you test these things?
> There's a few different things I can think of:
> The code compiles with an expression input
> The values work with an expression input
> Gradients work with an expression input

Yeah those.

> Gradients work with an expression input when the input variables to the expression have gone out of scope.

Nope, this will never work with general expressions. We would need pretty deep changes to Eigen to make it possible. We can do this for our own functions, but ones implemented in Eigen (operators, for example) will not work this way unless we write wrappers for them that also ensure it.

> In all of these, 'an expression' is just any single expression type. @t4c1 what does it mean to be an expression? Is there a specific interface these things have?

There is a specific interface, but I don't think it is ever exactly defined. You can take a look at how `holder` is implemented for a rough idea.

@bbbales2 (Member):

> Checking that it works with just one expression type is in my opinion enough.

That seems like enough to me too.

> Nope, this will never work with general expressions. We would need pretty deep changes to Eigen to make it possible. We can do this for our own functions.

I misunderstood what holder does. I think I understand it better now (it's for returning local variables).

> Since we can automate test generation and we have a list of signatures, why not use it?

If this is the way you think we should go, let's go this route.

What about this for the set of tests:

  • The code compiles with an expression input
  • The values work with an expression input
  • Gradients work with an expression input
  • Values/Gradients of expression output work as well
  • All combinations of expressions, non-expression input
  • All combinations of double/var types
  • Previous two things together

The combinatorics on the last three things sucks. Are there arguments to simplify this? Since expressions are something we have to manually code for, I don't see how we avoid some verboseness. It's similar to how we have to test propto and double/int logic carefully.

Even if the initial function test list comes from stanc3, these tests should live with Math since it's Math functionality.

For a pull request like: stan-dev/math#1945 what a reviewer needs to be able to do is look somewhere and verify:

  • The expressionness of functions are being tested
  • The tests are passing

So I assume that's another green dot on Github. With the tests above, everything is isolated, so regardless of how you write the tests, we can turn them on manually as functions are expression'd in pull requests. Just whoever does the pull needs an easy way to double check they're on (presumably that'll be part of the diff).

@t4c1 (Contributor Author) commented Jun 19, 2020:

I don't think we need the whole combinatorics thing. What we need is to check that every argument of every overload can be an expression (this can be the same test as for the other arguments). As we don't know in the tests which signatures belong to the same overload, we need to test each signature (combinations of Eigen vector/std vector/double/int arguments). But for each of those we don't need to do all the double/var/fvar combinations -- it should be enough that each argument is tested once for each of these versions.
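
Testing one argument at a time could then look roughly like this (a hand-written sketch, assuming `stan::math::multiply` already accepts expression arguments):

#include <gtest/gtest.h>
#include <stan/math/prim.hpp>

TEST(ExpressionTests, multiply_each_argument_as_expression) {
  Eigen::MatrixXd a = Eigen::MatrixXd::Random(2, 2);
  Eigen::MatrixXd b = Eigen::MatrixXd::Random(2, 2);
  // reference result from plain matrices
  Eigen::MatrixXd plain = stan::math::multiply(a, b);
  // each argument is passed as an expression exactly once
  EXPECT_TRUE(plain.isApprox(stan::math::multiply(a.block(0, 0, 2, 2), b)));
  EXPECT_TRUE(plain.isApprox(stan::math::multiply(a, b.block(0, 0, 2, 2))));
}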

Now that we have discussed the tests a bit, I have an idea of how to do them for just a subset of the functions. Since it is so important to everyone that they are done in each PR, I will start working on the tests now.

@bbbales2 (Member):

> What we need is to check that every argument of every overload can be an expression (this can be the same test as for the other arguments).

That's probably fine. Offhand I don't see why we'd expect interaction effects.

> There is a specific interface, but I don't think it is ever exactly defined.

I guess we're about to implicitly define it again by however the test expression works. Any expression type that behaves like the test expression will be expected to work.

@bob-carpenter (Collaborator):

> How would you test these things?

Figuring out the base concept required and implementing a stub of that ourselves and testing it. Or we might be able to get away with an existing one if it tests everything.

> Checking that it works with just one expression type is in my opinion enough.

I think there are at least a couple different types, but I could be misremembering. I think the blocking and other expressions don't share a common base.

> Gradients work with an expression input when the input variables to the expression have gone out of scope.

We know that'll break, but we can't unit test that on a function f(x) because it's the responsibility of whatever created x.

> Actually the number [of expression templates] is infinite.

It's finite in that there are only finitely many expression template classes defined in Eigen, but it's infinite in that they combine into new forms by template instantiation and new ones can get added at any time.

@bbbales2 (Member):

@t4c1 I've got an argument that might be an expression or it might be a regular matrix and it might be doubles or vars. Is this how I should get a local eval-d version of the argument:

template <typename Derived>
auto func(const Eigen::MatrixBase<Derived>& x) {                                                                                                                 
  const auto& xv = x.eval();
  ...
}

I guess the explicit type is:

template <typename Derived>
auto func(const Eigen::MatrixBase<Derived>& x) {                                                                                                                 
  Eigen::Matrix<typename Eigen::MatrixBase<Derived>::Scalar,
                Eigen::MatrixBase<Derived>::RowsAtCompileTime,
                Eigen::MatrixBase<Derived>::ColsAtCompileTime> xv = x.eval();
  ...
}

Or is this a plain_type_t situation?

@t4c1 (Contributor Author) commented Jun 19, 2020:

Actually there are multiple options. One is the one you found. Another is using plain_type_t (you need a reference to avoid a copy if it is already a matrix):

const plain_type_t<Derived>& xv = x;

However, the question is why you need it evaluated. Most likely you would also be fine with any expression that allows direct access to its data. In that case use one of:

ref_type_t<Derived> x_ref = x;
const auto& x_ref = to_ref(x);
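
For example, the `to_ref` option from the last line could be used like this in a function that reads its argument twice (a rough sketch; the function name is made up, and `require_eigen_t` is assumed from Stan Math's meta headers):

#include <stan/math/prim.hpp>

// The argument is used twice, so bind it through to_ref first: a costly
// expression is evaluated exactly once, while a plain matrix or a trivial
// expression is just referenced without copying.
template <typename Mat, typename = stan::require_eigen_t<Mat>>
double coeff_range(const Mat& m) {
  const auto& m_ref = stan::math::to_ref(m);
  return m_ref.maxCoeff() - m_ref.minCoeff();
}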

@t4c1 (Contributor Author) commented Jun 29, 2020:

I am mostly finished with those tests. I figured we need to run them not only when there is some change in the Math functions, but also when support for a new signature is added to the compiler, so it would be best to put these tests in the stanc3 repo. However, stanc3 is written in OCaml and there is no googletest in that repo, so I will put the tests in the stan repo instead. I will open a PR now; we will just need to wait for @rok-cesnovar to make Stan use stanc3 before merging.

@t4c1 (Contributor Author) commented Jun 29, 2020:

The PR for testing framework is up: stan-dev/stan#2933

@bbbales2 (Member) left a comment:

Just gave this a quick review given the recent changes to expressions in Math. Having the testing is a big one that should be noted somewhere in here (How do we know we're doing this right? We're testing in this way!).

template <typename Mat1, typename Mat2,
          typename = require_all_eigen_t<Mat1, Mat2>>
inline auto add(const Mat1& m1, const Mat2& m2) {
  check_matching_dims("add", "m1", m1, "m2", m2);

Member:
This will evaluate m1 and m2 twice I believe.

Contributor Author:
Nope. check_matching_dims just checks dims - it does not evaluate.


So whenever an input argument that might be an expression is used more than once in a function, it should be assigned to a variable of type `to_ref_t<T>`, where `T` is the type of the input argument (or, alternatively, call `to_ref` on the argument). That variable should then be used everywhere in the function instead of the input argument itself.

We also have to be careful that an expression returned from a function does not reference (matrix) variables that are local to that function (function arguments that are not lvalue references cause the same issue). Matrices and similar objects are held in expressions by reference, so when the expression is later evaluated it will read memory that could already have been released or overwritten, causing wrong results or segmentation faults. A workaround for this problem has been suggested in https://github.com/stan-dev/math/issues/1775. It involves allocating such variables on the heap and returning a custom Eigen operation that releases the heap memory once it is destroyed.
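
To make the pitfall concrete, an illustrative sketch (not code from the design doc; plain double matrices are assumed, and `require_eigen_t` is assumed from Stan Math's meta headers):

#include <Eigen/Dense>
#include <stan/math/prim/meta.hpp>

// BROKEN sketch: the returned expression references the local matrix `twice`,
// which is destroyed when the function returns.
template <typename Mat, typename = stan::require_eigen_t<Mat>>
auto scale_and_shift_bad(const Mat& m) {
  Eigen::MatrixXd twice = 2 * m;
  return twice + m;  // dangling reference to `twice` after the function exits
}

// Safe variant: evaluate into the return type before the local dies. The
// holder from stan-dev/math#1914 offers a lazy alternative to this eager copy.
template <typename Mat, typename = stan::require_eigen_t<Mat>>
Eigen::MatrixXd scale_and_shift_ok(const Mat& m) {
  Eigen::MatrixXd twice = 2 * m;
  return twice + m;  // evaluated into the returned matrix while `twice` is alive
}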

Member:
This got implemented right?

Contributor Author:
Right.


Steps to implement:
- Generalize all functions to accept general Eigen expressions (already halfway complete at the time of writing this)
- Test that all functions work with general expressions (this can probably be automated with a script).

Member:
This got merged yesterday.

The design doc should link to what actual docs exist for using this: https://github.com/stan-dev/stan/wiki/Testing:-Unit-Tests#6-expression-testing-framework

Could also mention that these tests are in Stan and run when stanc3 or math change.

# Unresolved questions
[unresolved-questions]: #unresolved-questions

We might need some more trait metaprograms or helper functions to make working with generalized functions simpler.

Member:
Have these gotten worked out? It's basically to_ref so far?

Contributor Author:
Yeah, so far we have `to_ref` and `ref_type_t`. I can't say whether we will need more in the future.

@t4c1 (Contributor Author) commented Aug 5, 2020:

For some reason I can't push here. I am getting:

remote: This repository was archived so it is read-only.
fatal: unable to access 'https://github.com/bstatcomp/design-docs/': The requested URL returned error: 403

@rok-cesnovar (Member) commented Aug 5, 2020:

That is on me. I told Erik to archive our fork in a cleanup of the forks in our group. Forgot that this PR exists.

@andrjohns (Contributor):

@rok-cesnovar Since this is on a bstatcomp fork, would you be able to rename this design doc so that the number doesn't clash (I think 0007-... is free)? Then we can approve and merge, since this is already in place in Math.
