Optimize what the RealSense cameras are doing #11
Comments
Do we need to get the IR feeds? We definitely want depth. It might be worth keeping just the left IR camera; I don't think we need both. But I can't tell whether we can get depth without having them on.
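For what it's worth, on the D400 series depth is computed on the camera's ASIC from the stereo pair, so the IR images shouldn't need to be streamed over USB just to get depth. A minimal sketch of a depth-only pipeline with pyrealsense2 (resolution, format, and frame rate are assumed values):

```python
import pyrealsense2 as rs

cfg = rs.config()
# Enable only the depth stream; nothing else gets transmitted.
cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
# If the left IR image turns out to be needed, enable just index 1:
# cfg.enable_stream(rs.stream.infrared, 1, 640, 480, rs.format.y8, 30)

pipeline = rs.pipeline()
pipeline.start(cfg)
try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    print("got depth frame:", depth.get_width(), "x", depth.get_height())
finally:
    pipeline.stop()
```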
Apparently aligning depth to color is also taxing. This is another thing that we should be able to do later, given the recorded data. <- need to test
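A hedged sketch of doing the alignment offline from a recorded .bag file with librealsense's align processing block (`session.bag` is an assumed file name; the bag would need both depth and color streams):

```python
import pyrealsense2 as rs

cfg = rs.config()
# Play back a previously recorded bag instead of a live camera.
cfg.enable_device_from_file("session.bag", repeat_playback=False)

pipeline = rs.pipeline()
pipeline.start(cfg)
align = rs.align(rs.stream.color)  # target stream to align depth onto
try:
    frames = pipeline.wait_for_frames()
    aligned = align.process(frames)
    depth = aligned.get_depth_frame()  # now registered to the color image
    color = aligned.get_color_frame()
finally:
    pipeline.stop()
```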
Compiling without OpenMP might improve things (reportedly the BUILD_WITH_OPENMP CMake option): IntelRealSense/librealsense#1130
Part of the problem was that it was publishing, and therefore transmitting over USB, the infrared streams. That was fixed in #28.
OK, so we mostly need to test that the data we are getting is enough to reconstruct point clouds, align the depth to the color, etc.
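For the point-cloud part, librealsense's pointcloud processing block can deproject recorded depth frames after the fact. A minimal sketch, again assuming a recorded `session.bag`:

```python
import pyrealsense2 as rs

cfg = rs.config()
cfg.enable_device_from_file("session.bag", repeat_playback=False)
pipeline = rs.pipeline()
pipeline.start(cfg)
pc = rs.pointcloud()
try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    points = pc.calculate(depth)  # deproject depth into XYZ vertices
    print("reconstructed", points.size(), "points")
finally:
    pipeline.stop()
```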
OK, so there are a few options for recording and reconstructing the data later.
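For example, librealsense's built-in .bag recorder stores the raw streams for later playback; a minimal sketch (the file name and stream settings are assumptions):

```python
import pyrealsense2 as rs

cfg = rs.config()
cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
cfg.enable_stream(rs.stream.color, 640, 480, rs.format.rgb8, 30)
cfg.enable_record_to_file("session.bag")  # record everything we stream

pipeline = rs.pipeline()
pipeline.start(cfg)
try:
    for _ in range(300):  # ~10 seconds at 30 fps
        pipeline.wait_for_frames()
finally:
    pipeline.stop()  # finalizes the .bag file
```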
After going through all of this, I think the way to go is to inject the frames into a synthetic software device and then process from there. I think this can be done in either Python or C++, and it can be manually integrated with ROS.
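A minimal sketch of that software-device approach, assuming the pyrealsense2 bindings for rs.software_device behave like the C++ example (the intrinsics, stream settings, and the injected frame below are placeholder values, not real recorded data):

```python
import numpy as np
import pyrealsense2 as rs

# Create a synthetic device with a single depth sensor.
dev = rs.software_device()
sensor = dev.add_sensor("Depth")

# Describe the injected depth stream (all values here are assumptions).
intr = rs.intrinsics()
intr.width, intr.height = 640, 480
intr.fx = intr.fy = 600.0
intr.ppx, intr.ppy = 320.0, 240.0
intr.model = rs.distortion.brown_conrady
intr.coeffs = [0.0] * 5

vs = rs.video_stream()
vs.type = rs.stream.depth
vs.index = 0
vs.uid = 0
vs.width, vs.height, vs.fps, vs.bpp = 640, 480, 30, 2
vs.fmt = rs.format.z16
vs.intrinsics = intr
profile = sensor.add_video_stream(vs)

sensor.open(profile)
queue = rs.frame_queue()
sensor.start(queue)

# Inject one synthetic depth frame; recorded data would go here instead.
depth_image = np.zeros((480, 640), dtype=np.uint16)
f = rs.software_video_frame()
f.pixels = depth_image  # raw z16 buffer (numpy buffer protocol assumed)
f.stride = 640 * 2      # bytes per row
f.bpp = 2
f.frame_number = 0
f.timestamp = 0.0
f.domain = rs.timestamp_domain.hardware_clock
f.profile = profile.as_video_stream_profile()
sensor.on_video_frame(f)

frame = queue.wait_for_frame()  # returns through the normal frame path
print("injected frame:", frame.get_profile().stream_name())
```

Frames injected this way come back through the normal librealsense frame path, so the same align and pointcloud processing blocks should work on them downstream.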
Right now the RealSense cameras are publishing a lot; they take up something like 30% of the CPU. It has to be possible to reduce the load. Maybe there are things we don't need. The first candidate would be point clouds. We can always reconstruct point clouds later if we need to, right? This idea needs testing and experimentation.