[Feat] Add new burn-vision crate with one initial op #2753
Conversation
Since this is a backend extension, what do you think about keeping the implementation within the vision crate instead? Similar to how you've added a default `cpu_impl`, the vision crate would look something like this:
burn-vision/src/
└── ops/
├── cpu_impl/
├── jit/
├── mod.rs
└── base.rs
It would only move your current additions in `burn-jit/src/kernel/vision` to `burn-vision/src/ops/jit` (and change the imports to use `burn_jit`). The `jit` module can be under a feature flag.
That way, the backend implementations are kept to the required stuff only and the vision trait implementations are isolated within the crate.
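A hedged sketch of what that feature gating could look like in the ops module (module names follow the tree above; the bodies are inline placeholders so the sketch compiles standalone):

```rust
// Sketch of burn-vision/src/ops/mod.rs under the proposed layout; the
// `jit` module only compiles when the `jit` feature is enabled, so the
// default build carries no backend-specific kernels.
pub mod cpu_impl {
    // Portable fallback kernels would live here; `identity` is a stand-in.
    pub fn identity(x: u32) -> u32 {
        x
    }
}

#[cfg(feature = "jit")]
pub mod jit {
    // Kernels moved from burn-jit/src/kernel/vision would live here,
    // importing from burn_jit; only compiled with `--features jit`.
}

fn main() {
    // The CPU path is always available, with or without the `jit` feature.
    println!("{}", cpu_impl::identity(42)); // prints 42
}
```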
That's how I originally had it; I wasn't sure which way was correct. I guess that's one vote for keeping it in the crate. Anyways, I'm still working on CPU support; all the fast algorithms use decision forests and they're full of
Yeah, I was debating that as well when going over the code. But I think this is too specific to live in the backend implementations; like you said, it will probably stay restricted to our own backends and not third-party backends. So it makes more sense in this state to have it as an extension in the vision crate.
Yeah, I'm sure opencv would be a pain to use here 😬 but at the same time it offers so much for image processing lol. Must be a pain to link though. A default CPU implementation is not a hard requirement for the vision crate either (that is, unless you also need it for your own application lol). This is an extension, after all.
My vision (hah) for this crate is to provide easy access to image pre- and postprocessing for burn, since the current solutions are very awkward (imageproc is pretty incomplete and doesn't support GPU acceleration; opencv is a pain to build, and transferring data across FFI is hard). Part of that is always having a fallback implementation available, so you know it at least works, and uses acceleration where possible. Ideally, if someone is missing an operation, we could relatively easily port the default opencv implementation from C++ to Rust and set it as the default; then if someone wants to add GPU support, it can be done later. Once I get GRAPHGEN up and running, it should be fairly easy to do even with decision forest algorithms. That's why I'm attempting to modify the codegen instead of manually translating the generated code.
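To make the fallback idea concrete, here is a minimal, hedged sketch (all names hypothetical, not the crate's real API) of the extension-trait pattern: a trait whose default method is a portable CPU labeling pass, which accelerated backends could override:

```rust
// Hypothetical sketch of a backend-extension trait with a default CPU
// fallback; `VisionOps` and `cpu_connected_components` are illustrative
// names, not burn-vision's actual API.

/// Simple 4-connectivity flood-fill labeling over a `w` x `h` binary image
/// (nonzero = foreground). Background pixels keep label 0.
fn cpu_connected_components(img: &[u8], w: usize, h: usize) -> Vec<u32> {
    let mut labels = vec![0u32; w * h];
    let mut next = 1u32;
    for start in 0..w * h {
        if img[start] == 0 || labels[start] != 0 {
            continue;
        }
        labels[start] = next;
        let mut stack = vec![start];
        while let Some(i) = stack.pop() {
            let (x, y) = (i % w, i / w);
            // Label-and-push any unvisited foreground neighbor.
            let mut push = |j: usize| {
                if img[j] != 0 && labels[j] == 0 {
                    labels[j] = next;
                    stack.push(j);
                }
            };
            if x > 0 { push(i - 1); }
            if x + 1 < w { push(i + 1); }
            if y > 0 { push(i - w); }
            if y + 1 < h { push(i + w); }
        }
        next += 1;
    }
    labels
}

trait VisionOps {
    // Default path: every backend "at least works", even without a kernel.
    fn connected_components(&self, img: &[u8], w: usize, h: usize) -> Vec<u32> {
        cpu_connected_components(img, w, h)
    }
}

struct CpuBackend;
impl VisionOps for CpuBackend {} // inherits the CPU fallback

fn main() {
    // 3x3 image with two foreground components (left column pair, right column).
    let img = [1, 0, 1, 1, 0, 1, 0, 0, 1];
    let labels = CpuBackend.connected_components(&img, 3, 3);
    println!("{labels:?}"); // [1, 0, 2, 1, 0, 2, 0, 0, 2]
}
```

An accelerated backend would simply `impl VisionOps` and override the method with its GPU kernel, while every other backend keeps the working default.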
Not gonna lie, that would be pretty awesome 👀
I'm not familiar with these CV algorithms so I can't really comment on that, but otherwise the vision crate as an extension looks good!
The codegen is not pretty but glad to see that you managed to get it working 😅
Some small stuff below.
@nathanielsimard
Another point for keeping all the kernels in `burn-vision`: compiling kernels is quite expensive, and if they are all within the same crate, they can't be parallelized by Cargo. I plan to move the `fusion` portion of `burn-jit` into its own crate for that reason as well, so less stuff in `burn-jit` is better.
crates/burn-vision/src/backends/jit/connected_components/hardware_accelerated.rs
#[cfg(feature = "export-tests")]
#[allow(missing_docs)]
I don't think we need to export the tests, since they run in this crate.
Integration tests count as running in a separate crate, and we need to use integration tests because of annoying Rust limitations around `macro_export` (which is generated by `burn-tensor-testgen`).
I don't think we're using `burn-tensor-testgen` in `burn-core`, and yeah, we're probably not running the tests as integration tests for that reason.
We might want to parameterize some of the tests at some point, so I think using testgen right away is better. I just didn't add a parameterized test yet because connected components are almost never used with anything other than u32. I believe opencv also supports u16, but I don't think it's worth adding parameterized tests until there are more ops. It's good to have the infrastructure for it regardless.
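As a hedged illustration of why a testgen-style macro pays off here (the macro name below is made up; `burn-tensor-testgen`'s real generated macros differ), one invocation per element type can stamp out the same body for u32 and u16:

```rust
// Illustrative only: a testgen-like macro parameterizing one "test" body
// over an element type. `testgen_labels` is a hypothetical stand-in for
// what burn-tensor-testgen generates.
macro_rules! testgen_labels {
    ($name:ident, $ty:ty) => {
        // A real generated test would run connected_components with $ty
        // labels on a given backend; this stand-in just produces a value
        // we can check.
        fn $name() -> Vec<$ty> {
            vec![0 as $ty, 1 as $ty, 1 as $ty]
        }
    };
}

testgen_labels!(labels_u32, u32); // one invocation per element type
testgen_labels!(labels_u16, u16);

fn main() {
    assert_eq!(labels_u32(), vec![0u32, 1, 1]);
    assert_eq!(labels_u16(), vec![0u16, 1, 1]);
    println!("ok");
}
```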
Pull Request Template
Checklist
The `run-checks all` script has been executed.
Changes
Adds a new `burn-vision` crate and an initial implementation of `connected_components` and `connected_components_with_stats` (as in opencv). The algorithms are WIP and rely on tracel-ai/cubecl#446, but I'd like some early feedback on the structure of the new crate and the backend implementations.
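For reviewers unfamiliar with the opencv naming, here is a rough sketch of what the `_with_stats` variant adds on top of a plain label image: per-component area and bounding box. The struct and function below are illustrative, not this PR's actual signatures.

```rust
use std::collections::BTreeMap;

/// Illustrative per-component statistics, mirroring the kind of data
/// opencv's connectedComponentsWithStats reports (area + bounding box).
/// Not this PR's actual types.
#[derive(Debug)]
struct Stats {
    area: u32,
    left: usize,
    top: usize,
    right: usize,
    bottom: usize,
}

/// Derive stats from an already-computed label image (label 0 = background).
fn stats_from_labels(labels: &[u32], w: usize) -> BTreeMap<u32, Stats> {
    let mut out: BTreeMap<u32, Stats> = BTreeMap::new();
    for (i, &l) in labels.iter().enumerate() {
        if l == 0 {
            continue;
        }
        let (x, y) = (i % w, i / w);
        let s = out
            .entry(l)
            .or_insert(Stats { area: 0, left: x, top: y, right: x, bottom: y });
        s.area += 1;
        s.left = s.left.min(x);
        s.top = s.top.min(y);
        s.right = s.right.max(x);
        s.bottom = s.bottom.max(y);
    }
    out
}

fn main() {
    // 3x3 label image with two components: label 1 (area 2), label 2 (area 3).
    let labels = [1, 0, 2, 1, 0, 2, 0, 0, 2];
    let stats = stats_from_labels(&labels, 3);
    assert_eq!(stats[&1].area, 2);
    assert_eq!(stats[&2].area, 3);
    println!("{stats:?}");
}
```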
Testing
Adds new connected components tests.