[RFC] Support for an embedded build of Torch-MLIR #1411
The cmake gymnastics for this are not as clean as I would like, but we do have a few examples. In general, the code location doesn't matter so much as whether the project is set up to be used as an LLVM external project. I think this has been partially done for torch-mlir at some point. You found mlir-hlo as one example; iree-dialects was also set up this way: https://github.com/iree-org/iree/blob/main/llvm-external-projects/iree-dialects/CMakeLists.txt You may find that there is a core part of the project that you want to depend on in this way and that should be separated from the rest.

The other option, which some are more familiar with, is to rig it so that it can just be included as normal with add_subdirectory from an arbitrary parent. And if the parent brings its own LLVM dependency, then defer to that instead of setting up your own. We don't have a lot of standardization around any of this: if you find some patterns that work, it would be great to document and clean them up.
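For readers less familiar with the second pattern, here is a minimal sketch of what an embeddable top-level CMakeLists.txt fragment could look like, assuming the project defers to a parent's already-configured LLVM/MLIR when one exists. The target check and messages are illustrative, not Torch-MLIR's current setup.

```cmake
# Sketch of an "embeddable" top-level CMakeLists.txt fragment.
# If a parent project has already configured LLVM/MLIR (e.g. via
# add_subdirectory or as an LLVM external project), reuse its targets;
# otherwise fall back to a standalone out-of-tree configuration.
if(TARGET MLIRIR)
  message(STATUS "Embedded build: reusing parent LLVM/MLIR configuration")
else()
  # Standalone build: locate an installed LLVM/MLIR via its CMake packages.
  find_package(MLIR REQUIRED CONFIG)
  list(APPEND CMAKE_MODULE_PATH "${MLIR_CMAKE_DIR}" "${LLVM_CMAKE_DIR}")
  include(TableGen)
  include(AddLLVM)
  include(AddMLIR)
endif()
```

This mirrors the standard MLIR standalone-project pattern, with the only twist being the up-front check for an existing MLIR target so the parent's configuration wins when it is present.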
Thanks for the comments. As you said, it's not really a matter of where the code is, but rather that the options for building Torch-MLIR embedded have room to be formalized (although I am no cmake expert). For ONNX-MLIR we can just work with the out-of-tree build, but it might not be a robust solution if other projects want to use Torch-MLIR in a similar manner. There isn't any urgency here, but if there is interest in supporting something like this, it would be good to know. Also, if I come up with a pattern that works I will definitely share it, although ONNX-MLIR isn't set up like a typical MLIR project.
I think it is a good idea to pursue. There was really no reason it wasn't set up this way except for the early stage of the project. I'm happy to consult/review, but having done a couple of these, I'd prefer to spread the love/knowledge a bit vs just doing the surgery myself.
We also hit the same issue while building torch-mlir in an embedded way and have some local patches. I'll share them here for your information: https://gist.github.com/ZihengJiang/65e182f82e510b87fc4e07feb9c21648 cc @Vremold
Hi all,
With the recent RFC in ONNX-MLIR for a conversion path from ONNX-MLIR to Torch-MLIR (#1255 and then onnx/onnx-mlir#1639), there is a use case for building Torch-MLIR as part of another project (onnx/onnx-mlir#1731).
Currently we can support this in ONNX-MLIR by treating it as an out-of-tree build (see #1403), but some discussion in #1403 suggested that formal support for building Torch-MLIR embedded in another project (whether in-tree or out-of-tree), without pulling in Torch-MLIR's own external dependencies such as its llvm-project submodule, may be worth pursuing. This could be similar to the way mlir-hlo keeps the code in "two places": one for pull requests/commits and another for use as a submodule by other projects that don't want to depend on the full TensorFlow monorepo (see https://github.com/tensorflow/mlir-hlo).
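As a hypothetical illustration of the embedded use case, a consuming project such as ONNX-MLIR might pull Torch-MLIR in roughly as follows. The directory layout, target names, and source file below are assumptions made for the sake of the example, not an existing interface.

```cmake
# Hypothetical fragment of a parent project's CMakeLists.txt consuming
# Torch-MLIR as an embedded subproject. Paths and library names are
# illustrative only.

# The parent configures LLVM/MLIR once, however it normally does so.
find_package(MLIR REQUIRED CONFIG)

# Pull in Torch-MLIR without letting it fetch its own llvm-project;
# EXCLUDE_FROM_ALL keeps its targets out of the parent's default build.
add_subdirectory(third_party/torch-mlir
                 ${CMAKE_CURRENT_BINARY_DIR}/torch-mlir
                 EXCLUDE_FROM_ALL)

# Downstream code can then link against Torch-MLIR libraries directly.
add_library(MyOnnxToTorchConversion conversion.cpp)
target_link_libraries(MyOnnxToTorchConversion PRIVATE TorchMLIRTorchDialect)
```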
Comments and suggestions are appreciated!
cc @powderluv @silvasean