[RFC] Adding an ONNX to Torch conversion #1639
Comments
We are generally receptive, and we already have an effort from onnx to mhlo and onnx to tosa. Have you considered piggybacking on these? Namely, going from mhlo or tosa to torch-mlir?
We have
We are all basing off of the green commit in llvm/torch-mlir#1178 so this should not be an issue.
This shouldn't be needed. Builtin tensors, which onnx-mlir appears to use, already have value semantics, so you should just directly convert builtin tensors to Torch-MLIR's value-semantic tensors.
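To illustrate the point about value semantics, here is a minimal MLIR sketch; the function and shapes are hypothetical, and the `!torch.vtensor` spelling is Torch-MLIR's documented value-tensor type:

```mlir
// Builtin tensors are immutable SSA values: %0 is a new value, and
// nothing can mutate %arg0 in place, so value semantics already hold.
func.func @add(%arg0: tensor<2x3xf32>, %arg1: tensor<2x3xf32>) -> tensor<2x3xf32> {
  %0 = arith.addf %arg0, %arg1 : tensor<2x3xf32>
  return %0 : tensor<2x3xf32>
}
// A tensor<2x3xf32> can therefore map directly onto Torch-MLIR's
// value-semantic tensor type !torch.vtensor<[2,3],f32>, with no need
// to model in-place mutation the way !torch.tensor does.
```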
Sounds good. We have a few customers who would like to see this unified to avoid having two separate paths. The nod.ai team can contribute to support the required passes (as we do in torch-mlir). If this sounds reasonable, we can get @qedawkins to refine the RFC with specific commits and start putting them up for review.
Right now, the MHLO path is making rapid progress, which would integrate the ONNX->TF ecosystem. If there is sufficient support on your end, I can see the benefit of having an ONNX -> Torch ecosystem as well. Can you comment on the technical merit of a direct ONNX->Torch-MLIR conversion vs ONNX->MHLO for your particular use?
So for Torch-MLIR it would technically be a no-op. The idea is that since Torch-MLIR already has these lowerings written, onnx-mlir could just use them. There are a few technical advantages to this approach, as well as a few potential downsides.
Thanks for the feedback. We are definitely encouraging the MHLO onnx-mlir developers to migrate to StableHLO, and I believe they agreed to do so once it becomes stable; I think it is almost there. I also agree that with a stronger connection to MLIR via MHLO and possibly Torch, onnx to tosa might become redundant, and since I have not seen much progress there lately (apologies if mischaracterized), that might be a better way to go about it.

Given the small teams working on these projects, it would be great to have common infrastructure (reusing when possible) between the MHLO and Torch paths. Exploring structures that enable this kind of reuse would strengthen both efforts and reduce redundancy, as AI models (ONNX, Torch, TF) go through a lot of incremental changes that all need to be maintained and upgraded over time. More reuse means less updating of redundant code.

Along similar lines, we have a unified way to do shape inference to detect compile-time shapes, and we reuse the same code (in a different context) to generate the runtime code for runtime shapes. MHLO has started introducing custom code for dynamic shapes, and we are encouraging its developers to reuse the common infrastructure shared between compile-time and runtime shape handling.
We have the exact same thing: https://github.com/llvm/torch-mlir/blob/main/docs/shape_lib.md 🤣
Good to know... it looks like we went through the same pain points, having two similar pieces of code for shape inference and later code gen, and unifying both for lower maintenance. We also have not generated guards yet, though it is also in the back of our minds.
As I understand it, the proposal here is to integrate the ONNX-to-Torch conversion into onnx-mlir.
The idea is that onnx-mlir would convert the ONNX dialect to the Torch dialect and then reuse Torch-MLIR's existing backend lowerings.
Just want to see the big picture. Could you elaborate a bit on how end users would use this pipeline? For example, how would they prepare inputs for running the compiled model and get outputs? Is your vision to use the onnx-mlir driver for that purpose, to introduce a new driver, or does the driver already exist?
Yes, sure, it makes sense.
I'm imagining that onnx-mlir would just link to Torch-MLIR and run a few of Torch-MLIR's passes as an internal implementation detail to get from ONNX -> {MHLO,TOSA,Linalg}, depending on what the user wants. It shouldn't touch things like preparing inputs/drivers/etc.
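A minimal sketch of what that reuse could look like, assuming a hypothetical ONNX-to-Torch conversion has already run; the op and type spellings follow the public torch dialect, and the names in the trailing comment are Torch-MLIR's existing conversion directories:

```mlir
// Torch dialect form that an ONNX importer could hand off to Torch-MLIR.
func.func @add(%lhs: !torch.vtensor<[2],f32>, %rhs: !torch.vtensor<[2],f32>) -> !torch.vtensor<[2],f32> {
  %int1 = torch.constant.int 1  // alpha operand of aten.add.Tensor
  %0 = torch.aten.add.Tensor %lhs, %rhs, %int1
      : !torch.vtensor<[2],f32>, !torch.vtensor<[2],f32>, !torch.int -> !torch.vtensor<[2],f32>
  return %0 : !torch.vtensor<[2],f32>
}
// From this form, Torch-MLIR's existing TorchToLinalg, TorchToTosa, and
// TorchToMhlo conversions apply unchanged, giving onnx-mlir
// ONNX -> {Linalg, TOSA, MHLO} without reimplementing those lowerings.
```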
We (nod.ai) have a couple of customers that require this, and we plan to support them somehow. That is why we added the initial RFC with an implementation in torch-mlir (which included onnx / onnx-mlir as deps), and then this RFC + implementation with torch-mlir as a dep in onnx-mlir. Basically, the customers want to avoid two paths to MHLO (which they care about) and two kinds/levels of op and shape support. The nod.ai team contributes a lot to torch-mlir to maintain the torch-dialect -> {MHLO, TOSA, Linalg} lowerings, so if we can leverage an ONNX entry point into the torch dialect and then leverage all the other lowerings, we get one set of backend lowerings with two "frontends": torch and onnx. The end product would look like a compiler/binary that can consume ONNX modules as an ONNX Runtime EP, or consume Torch in some other serving solution like Triton Inference Server. The lowering into the torch dialect may be different, but everything after that is the same, allowing backend accelerators to easily consume from both frameworks.
@Connor-XY @yaochengji Could you please chime in? Would like to hear your comments about this proposal.
I think it would be great to have the conversion from ONNX to Torch, but I also believe that it is better to have the conversion from ONNX to MHLO directly. First, we can do the conversion directly, so it isn't necessary to use Torch as a bridge. Second, if we adopt ONNX->Torch->MHLO and there is a version upgrade, we need to modify both ONNX->Torch and Torch->MHLO. The conversion would also rely on Torch (onnx-mlir development would depend more on Torch).
To add to what @Connor-XY said: thanks to the help of @tungld, @AlexandreEichenberger, and all other community members, we are currently applying onnx-mlir to our business models at ByteDance. We'll definitely try the ONNX->Torch route after it matures. Until then, we'll keep maintaining and developing the ONNX->MHLO route.
The idea of robust conversions between the ONNX and Torch front ends seems interesting. Excited to see how this RFC evolves. It seems reasonable to avoid duplicating the effort to support intermediate dialects and features like shape inference.
Do you think you could present the benefits of this direct approach at our Tuesday evening meeting, either next week or in two weeks?
We can aim for a presentation + discussion at next week's meeting if that works for you. Torch-MLIR has documentation you can reference in the meantime here: https://github.com/llvm/torch-mlir/tree/main/docs. The architecture document (https://github.com/llvm/torch-mlir/blob/main/docs/architecture.md) might be a good place to start.
@AlexandreEichenberger What is the status on this? Is tomorrow's meeting still a good time to discuss this?
Absolutely, you can have a 15 min slot at our 8-9pm EST meeting: https://github.com/onnx/onnx-mlir/wiki/Informal-meeting-agenda-and-notes
It would be great if you could explain the value-add of the onnx-torch lowering in the picture of the torch-mlir effort.
There is no value-add for Torch-MLIR (it is actually a minor cost for us to support this). It is purely ONNX-MLIR reusing work already done for Torch-MLIR. It is a win for the ecosystem to maintain fewer lowerings, in theory.
Got it. I found your https://github.com/llvm/torch-mlir/blob/main/docs/architecture.md instructive. If you could give us an idea of where you would intersect the ONNX dialect with the dialects in that doc, that would let us focus our attention a bit more on the relevant dialect. Thanks!
ONNX-MLIR would produce the Torch dialect in the form that satisfies Torch-MLIR's backend contract.
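For reference, here is a sketch of what backend-contract form looks like, based on Torch-MLIR's architecture docs (the function itself is hypothetical): every tensor is a value-semantic `!torch.vtensor` with known rank and dtype.

```mlir
// Backend-contract form: value tensors with known rank and dtype.
// Individual dimensions may still be dynamic, written `?`.
func.func @forward(%arg0: !torch.vtensor<[?,3],f32>) -> !torch.vtensor<[?,3],f32> {
  %0 = torch.aten.relu %arg0 : !torch.vtensor<[?,3],f32> -> !torch.vtensor<[?,3],f32>
  return %0 : !torch.vtensor<[?,3],f32>
}
```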
Is there a similar effort to have a "stable Torch dialect", like StableHLO?
No, our dialect is only as stable as the Torch op set (which is fairly stable, but not guaranteed).
@qedawkins would you be able to add a PDF of your presentation yesterday, for those who were not able to be present? Thanks!
Sure!
I have a summary of our Tuesday evening discussion in the wiki pages of this project. As a follow-on to this discussion, we would like feedback from the teams developing the onnx->tosa and onnx->mhlo converters, as this RFC may also be able to generate TOSA or MHLO output via the torch-mlir backends currently available. I am listing below the developers I see associated with these two projects; if you know of any others, please add references here and/or contact them directly. Thanks @Connor-XY @yaochengji @chenchongsong @BraunPhilipp @sjarus. Please provide feedback as you see fit within a week; it would be great to have it before the next meeting, where we will take a position on this RFC. You are obviously welcome to participate in the meeting too, which is now open to the whole onnx-mlir community. See the wiki page for info on how to join.
Adding @eric-k256, @stephenneuendorffer, and @ljfitz for input on ONNX-TOSA in the context of the likely presence of ONNX-Torch and the existing Torch-TOSA.
The ONNX2Torch route should be useful, especially since we are considering moving to StableHLO. The biggest concern we have is that the ONNX -> Torch -> MHLO route is more complex than ONNX -> MHLO, which introduces more potential issues. And the ONNX->MHLO route is already applied to the business models at ByteDance. The model coverage is quite high (about 50%), considering we've only worked on this for two months. Therefore, we will continue to work on the ONNX->MHLO route to increase the model coverage at ByteDance. As for the ONNX -> Torch -> MHLO route, if we later find that it is mature enough, we will switch to it.
Thanks for your feedback @yaochengji, this is very useful. Our primary concern is to have mature implementations of the dialects, so knowing that your team will continue working on MHLO is good. Hopefully you can also resolve the handling of dynamic shapes in a way that reuses the onnx-mlir infra, which would unblock some of the MHLO PRs that have been blocked for longer than I would like.
Yes, we're actively working on this. I've already discussed it with @Connor-XY offline and figured out how to use this correctly. I guess @Connor-XY could resolve this soon.
I mentioned that we have a way to annotate code/makefiles/etc. and then gather the annotations to generate a coverage report. We do this for our support of lowering to CPU and to our NNPA-based accelerator. This same mechanism can be used to generate a coverage report for a converter from ONNX to TOSA/MHLO/Torch-MLIR. PR #1475 describes the format and will let you know which file/makefile to edit. We added a link to the reports on the main README.md page. But check the current code, as we may have modified the format a bit.
Thanks. We'll take a look at this.
Any additional feedback? So far, the feedback is mostly positive. One thing we learned from the MHLO effort is that it is nice to have a switch that disables building the MHLO part. CI runs with everything on, but certain users may elect not to build parts they do not need. This would also be a good feature to have for the torch-mlir converter.
Yes, Torch-MLIR has a build flag to disable building the MHLO path.
I was asking if you would also support a flag disabling building Torch-MLIR within onnx-mlir.
Ah yes, that's a good idea.
As previously stated, I believe that unless we hear any additional concerns by today, we should go ahead with your RFC.
I believe you should go ahead and start PRs to implement your converter. Good luck.
Thank you all for the feedback and helpful discussion. I've opened PR #1731, where we can begin ironing out the details and get an initial version of this merged.
Hello everyone, in a recent RFC in torch-mlir, a conversion from the ONNX dialect to the Torch dialect in torch-mlir was proposed (llvm/torch-mlir#1255). Following the feedback there, we are looking at adding that conversion pass to onnx-mlir.
Proposal
Proof of Concept
A proof of concept for the conversion can be found at: https://github.com/nod-ai/onnx-mlir/tree/convert-to-torch
A conversion for ONNXAddOp and ONNXConstantOp is included, along with unit tests for each. The proof of concept includes five passes in the conversion pipeline.
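As a rough illustration of the kind of rewrite the proof of concept performs (the exact pipeline and output live in the linked branch; the function name and shapes here are made up), the ONNX-dialect input for the Add case looks like this:

```mlir
// ONNX dialect, as imported by onnx-mlir (generic op syntax).
func.func @test_add(%arg0: tensor<3x4xf32>, %arg1: tensor<3x4xf32>) -> tensor<3x4xf32> {
  %0 = "onnx.Add"(%arg0, %arg1) : (tensor<3x4xf32>, tensor<3x4xf32>) -> tensor<3x4xf32>
  return %0 : tensor<3x4xf32>
}
// After conversion, the body would instead compute with
// torch.aten.add.Tensor on !torch.vtensor<[3,4],f32> values.
```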
Looking forward to any and all feedback!
cc @powderluv @silvasean @sstamenova @ashay