
generation finalisation #18

Closed
wants to merge 22 commits

Conversation

nopeslide
Collaborator

@nopeslide nopeslide commented May 11, 2020

  • renamed check structure to info structure
  • added info struct to operator sets
  • added info&resolver for operator stub
  • added a dryrun option (run everything but do not touch the filesystem)
  • added a Template base class for code generation
  • refactored all code generation onto this new base class
  • info struct can be used by non-generated generic check functions
  • added these generic check functions
  • removed all check generation

TODO

@alrevuelta
Owner

@nopeslide Sorry for the late reply. My 5 cents :)

I can see that you are using a doubly linked list. Is there any benefit from that? Maybe for variadic inputs? But even if it's variadic, we have the TensorProto** and n_inputs in the Node, so why use it?

I like having the Onnx__NodeProto as we discussed, but I can't see why we need to wrap the tensors into operator_info_tensor and duplicate content in operator_info.

Let's imagine we call an operator and need some input tensor for a calculation. We have the operator_context as input, which contains operator_context_input. This contains an array of operator_tensor that also holds the set of inputs Onnx__TensorProto **input; and the operator_info for that operator. How do we find the input that we want? Mind writing some pseudocode on how to access, let's say, input A?

As I previously explained in #17, I don't think we need to wrap the NodeProto; just add the TensorProto inputs/outputs pointers and that's all the information we need.

Perhaps you can hardcode the mnist model to make it work with this code so we can see it in action. I will try to implement my version presented in #17 as well.

Cheers!

@nopeslide
Collaborator Author

@nopeslide Sorry for the late reply. My 5 cents :)

I can see that you are using a doubly linked list. Is there any benefit from that? Maybe for variadic inputs? But even if it's variadic, we have the TensorProto** and n_inputs in the Node, so why use it?

I intended to use it for managing objects created by us (global inputs, global outputs, contexts).
Until now you seem to have the pointers to OnnxProto objects hardcoded in an array somewhere; this is a proposal to get rid of those arrays.

I like having the Onnx__NodeProto as we discussed, but I can't see why we need to wrap the tensors into operator_info_tensor and duplicate content in operator_info.

The operator_info structure is basically the same set of structs I introduced with the sanity checks.
I just renamed them, linked to them in the operator set, and replaced the resolve function pointer inside the context with a pointer to the operator_info struct.

Let's imagine we call an operator and need some input tensor for a calculation. We have the operator_context as input, which contains operator_context_input. This contains an array of operator_tensor that also holds the set of inputs Onnx__TensorProto **input; and the operator_info for that operator. How do we find the input that we want? Mind writing some pseudocode on how to access, let's say, input A?

Either by position, as before:

operator_status
operator_add( operator_context *ctx )
{
  // various access variants on the same ctx pointer
  Onnx__TensorProto *A;
  A = ((operator_context_add*)ctx)->input->A;          // via the operator-specific context struct
  A = ((operator_context_input_add*)(ctx->input))->A;  // via the operator-specific input struct
  A = ctx->input[0];                                   // by position
}

As I previously explained in #17, I don't think we need to wrap the NodeProto; just add the TensorProto inputs/outputs pointers and that's all the information we need.

The only 'extra' thing I did was create an ordered list for the attributes, so we can access them by position (via structs or an array) instead of by searching. That's the thing I wanted to show you :D

Perhaps you can hardcode the mnist model to make it work with this code so we can see it in action. I will try to implement my version presented in #17 as well.

Will do, just wanted to show you what I meant.

Cheers!

@nopeslide nopeslide force-pushed the onnx branch 2 times, most recently from 8cc2361 to ee7a016 Compare June 2, 2020 13:04
@nopeslide nopeslide mentioned this pull request Jun 4, 2020
@alrevuelta alrevuelta mentioned this pull request Jun 30, 2020
@nopeslide nopeslide changed the title context generation generation finalisation Jun 30, 2020
@alrevuelta
Owner

Just thought that we could also generate the files inside src/operators/implementation (i.e. operator__onnx__relu__6__T_tensor_float.c) from the Python script, so we can avoid creating them manually. Maybe all types is too much for now, so I would create only the float variants.

It would also be quite useful if someone wants to implement an operator. If the file and function are already in place (but empty), it would be easier for someone without much knowledge of the code to implement an operator. We can shield them from the complexity of running the Python script and so on.

I like keeping the PRs short, so maybe we can have a new one for this small feature instead of adding it to this one.

@nopeslide nopeslide force-pushed the onnx branch 5 times, most recently from 604c51d to 19a50ea Compare July 3, 2020 17:23
@nopeslide nopeslide marked this pull request as ready for review July 3, 2020 17:26
@nopeslide nopeslide requested a review from alrevuelta July 3, 2020 17:26
@nopeslide
Collaborator Author

Windows seems to have the same problems as Mac :/

@nopeslide nopeslide mentioned this pull request Jul 4, 2020
nopeslide added 12 commits July 8, 2020 16:18
- renamed check structure to info structure
- added info struct to operator sets
- added info&resolver for operator stub
the formatting destroys ascii drawings
- simplified generation scripts
- removed SanityChecks generation
  it is already covered by operator info and static testing functions
- check/no_check, replaced by info
- info_header/no_info_header removed, merged into header
- info_src/no_info_src renamed to info/no_info
@nopeslide
Collaborator Author

Closed in favor of #34.

@nopeslide nopeslide closed this Jul 9, 2020