
Finding low overhead webcam processing pipeline #79

Open
abitrolly opened this issue Feb 17, 2022 · 0 comments

I need to add a background filter with custom code using v4l2loopback or a similar driver. The driver creates a virtual camera device to which an image can be written, and that image can then be read as a webcam video stream. That already works.

The problem is to create this pipeline with the lowest possible overhead.

[real camera] -> [video memory] --[filter]-> [v4l2loopback] -> [vlc, jitsi, whatever]
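
For reference, a minimal sketch of this pipeline in Python, assuming OpenCV and pyvirtualcam are installed and that v4l2loopback exposes the output device as `/dev/video10` (both device paths are assumptions, and this version copies frames rather than sharing memory):

```python
import cv2
import pyvirtualcam

SRC_DEVICE = 0               # real camera, assumed to be /dev/video0
OUT_DEVICE = "/dev/video10"  # assumed v4l2loopback device

cap = cv2.VideoCapture(SRC_DEVICE)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

with pyvirtualcam.Camera(width=width, height=height, fps=30, device=OUT_DEVICE) as cam:
    while True:
        ok, frame_bgr = cap.read()  # OpenCV delivers BGR frames
        if not ok:
            break
        # Placeholder for the background filter; identity for now.
        filtered = frame_bgr
        # pyvirtualcam expects RGB, so this conversion is exactly the kind
        # of per-frame overhead the issue is trying to minimize.
        cam.send(cv2.cvtColor(filtered, cv2.COLOR_BGR2RGB))
        cam.sleep_until_next_frame()
```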

Ideally, if no filter is applied, the same memory should be used for reading from the real camera and for feeding it to vlc. This is to make sure the memory usage is predictable. I need to know if it is possible to use the same region for both reading and writing. If the reader (vlc) is slower than the writer (a high speed camera), it would be interesting to explore different locking mechanisms to pause the writer until the reader finishes processing. The same would be useful for a filter that operates on the same memory region.
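
One possible locking scheme, as a sketch only: a single shared frame slot guarded by a condition variable, where the writer blocks until the reader has consumed the previous frame. This uses Python threading primitives rather than V4L2 buffer sharing, so it only illustrates the idea:

```python
import threading

class SingleSlot:
    """One shared frame buffer; the writer waits until the reader is done."""

    def __init__(self):
        self._cond = threading.Condition()
        self._frame = None
        self._consumed = True

    def write(self, frame):
        with self._cond:
            # Pause the (fast) writer until the reader has taken the last frame.
            self._cond.wait_for(lambda: self._consumed)
            self._frame = frame
            self._consumed = False
            self._cond.notify_all()

    def read(self):
        with self._cond:
            self._cond.wait_for(lambda: not self._consumed)
            frame = self._frame
            self._consumed = True
            self._cond.notify_all()
            return frame
```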

Different pixel formats coming from the camera will need conversion. Conversion adds overhead, which needs to be explored, especially for high speed camera processing. For example, OpenCV works with BGR, while AI/NN solutions probably prefer RGB.
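
A rough way to measure that conversion cost, assuming OpenCV and a synthetic 1080p BGR frame (the resolution and iteration count are arbitrary):

```python
import time
import cv2
import numpy as np

frame_bgr = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)

start = time.perf_counter()
for _ in range(100):
    # BGR (OpenCV's native order) -> RGB (common for NN inference)
    frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
elapsed = (time.perf_counter() - start) / 100
print(f"BGR->RGB per frame: {elapsed * 1000:.2f} ms")
```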
