llvm backend (online) #8
We need to develop an LLVM backend in the same style as Simit/Halide. This backend should compile the taco IR to LLVM IR and then have LLVM compile that IR down to a callable function pointer.
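To make the shape of that pipeline concrete, here is a minimal sketch of the first half: lowering a trivial kernel to LLVM IR with IRBuilder. Everything here is illustrative, not taco code: `buildKernelModule` and the `scale` kernel are made-up names, and a real backend would emit this IR by walking the taco IR instead.

```cpp
#include "llvm/ExecutionEngine/Orc/ThreadSafeModule.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"

using namespace llvm;

// Emit LLVM IR for a toy kernel `double scale(double x)` that returns 2 * x.
// A real taco backend would drive this with a visitor over the taco IR.
orc::ThreadSafeModule buildKernelModule() {
  auto Ctx = std::make_unique<LLVMContext>();
  auto M = std::make_unique<Module>("taco_kernel", *Ctx);

  auto *DblTy = Type::getDoubleTy(*Ctx);
  auto *FnTy = FunctionType::get(DblTy, {DblTy}, /*isVarArg=*/false);
  auto *Fn =
      Function::Create(FnTy, Function::ExternalLinkage, "scale", M.get());

  IRBuilder<> B(BasicBlock::Create(*Ctx, "entry", Fn));
  Value *Result = B.CreateFMul(Fn->getArg(0), ConstantFP::get(DblTy, 2.0));
  B.CreateRet(Result);

  return orc::ThreadSafeModule(std::move(M), std::move(Ctx));
}
```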
I propose we use the ORC JIT mechanism introduced in LLVM 3.7. Compared with MCJIT (around since LLVM 2.9), it has several advantages: lazy compilation, a thinner and more reasonable API, better memory management, and better future-proofing. Potential downside: ORC has not been heavily optimized for performance yet, but that only affects compilation time.
Sounds reasonable to me. I'm not worried about the optimization; I'm sure that will come. The main selling point for me is the future-proofness.
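For a sense of how thin the API can be, here is a hedged sketch of the second half of the pipeline using LLJIT, the convenience wrapper that later grew out of the ORC layers discussed here (so the spelling differs from the LLVM 3.7 API, and the lookup return type varies across LLVM versions). `buildKernelModule` is the hypothetical lowering step sketched above.

```cpp
#include "llvm/ExecutionEngine/Orc/LLJIT.h"
#include "llvm/Support/Error.h"
#include "llvm/Support/TargetSelect.h"

using namespace llvm;
using namespace llvm::orc;

// Hypothetical lowering step (see the sketch above): taco IR -> LLVM IR.
ThreadSafeModule buildKernelModule();

int main() {
  InitializeNativeTarget();
  InitializeNativeTargetAsmPrinter();

  // Build the ORC JIT stack with defaults for the host target.
  auto JIT = cantFail(LLJITBuilder().create());

  // Adding a module is cheap; ORC materializes (compiles) it lazily,
  // on the first lookup of a symbol it defines.
  cantFail(JIT->addIRModule(buildKernelModule()));

  // Look up the kernel and cast its address to a function pointer.
  // (Recent LLVM versions return an ExecutorAddr with a toPtr<> helper.)
  auto Addr = cantFail(JIT->lookup("scale"));
  auto *Scale = Addr.toPtr<double (*)(double)>();

  return Scale(21.0) == 42.0 ? 0 : 1;
}
```

Note that the function pointer stays valid only as long as the LLJIT instance is alive, which is the ownership question a real backend would have to settle.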
Hey guys, great work! I was curious about what prevents taco from being part of Halide. After all, besides sparsity, tensor algebra computations can benefit from tiling, vectorization, parallelization, etc., which are all represented in Halide. Is the main challenge that Halide is designed for linear algebra and not tensor algebra? Or is it more of an engineering choice for now, to avoid disturbing Halide development with major changes? Or do they serve two different application domains? P.S. This question comes from someone whose only experience with either taco or Halide is that he read their papers :).
Hi @ElTantawy! Halide is a grid/stencil language with some reduction support. It should be possible to define dense tensor algebra on top of this. The main conceptual difference I see is the general sparsity that taco supports and the complexity that follows. This results in indirect accesses that may or may not work with Halide's internals. We have certainly borrowed from Halide's internal design, and even use some of their error handling code, but I think integrating the code bases would require a lot of work to generalize both. It's probably a great research project to separately create a general substrate that both could use, but engineering-wise I think it would slow down the development of both.