[BUG] Generic ROS2 driver output for spatial yolo is incorrect #548
Comments
I tested this:
and the results look fine as well. It's only with the camera.cpp generic pipeline that the results are bad.
Hi, thanks for the report, could you try testing with the following parameters:

```yaml
nn:
  i_disable_resize: false
rgb:
  i_preview_size: 416
```
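If it helps to see why these parameters would matter: the spatial detections carry normalized (0..1) bounding boxes, and the driver has to map them onto the depth frame to average a depth ROI. If the resize/preview settings put the NN input and the depth frame out of step, that ROI lands on the wrong pixels. A rough illustration of the mapping (hypothetical resolutions, not the driver's actual code):

```python
def denormalize(det, w, h):
    """Convert a normalized (0..1) detection box to integer pixel
    coordinates on a w x h frame, clamped to the frame bounds."""
    xmin, ymin, xmax, ymax = det
    clip = lambda v, hi: max(0, min(int(round(v * hi)), hi))
    return clip(xmin, w), clip(ymin, h), clip(xmax, w), clip(ymax, h)

det = (0.40, 0.35, 0.60, 0.80)     # hypothetical person detection

# The same normalized box covers different pixel regions depending on
# which frame it is interpreted against:
print(denormalize(det, 416, 416))  # on the 416x416 NN input -> (166, 146, 250, 333)
print(denormalize(det, 640, 400))  # on a 640x400 depth frame -> (256, 140, 384, 320)
```

If the box were denormalized against one resolution but used to crop depth at another, the averaged depth (and thus x/y/z) would come from a shifted region.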
Thanks for your reply. I'm using this config:
I have re-written one of the examples (depthai_examples/yolov4_spatial_publisher.cpp) and have nearly 1:1 hardcoded all the YAML parameters (from my config above) into the C++ pipeline. The resulting node works fine (the spatial information is correct). That's why I assume that I have either configured something wrong, or that something in the pipeline is not created correctly (inside the camera.cpp driver).
Hello,
I am trying to use tiny YOLOv4 with spatial information via the camera.cpp ROS node (launched via camera.launch.py). The model runs and the inference yields proper classification, but the spatial information is way off: I get -3.0 to 3.0 meters on all axes (x, y, z) for pose.position while it is identifying a human (myself) sitting directly in front of the camera.
Position log while I'm sitting ~50 cm in front of the camera:
The resulting pose is also very noisy, so I suspect that something is wrong with it.
I have a working solution with this example:
Here, the pipeline is created manually (rather than by the generic ROS driver pipeline), and it works rather well: the same model outputs reasonable x/y/z coordinates (in mm, because they are taken directly from the device output).
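For context on what "reasonable xyz in mm" means here: as I understand it, the device's spatial coordinates are essentially an averaged depth over the detection ROI, back-projected through the pinhole intrinsics. A minimal sketch of that math (made-up intrinsics and depth frame, not the OAK's actual calibration):

```python
import numpy as np

def spatial_from_depth(depth_mm, bbox, fx, fy, cx, cy):
    """Average the depth inside a detection ROI and back-project the
    ROI centroid to camera-frame X/Y/Z in millimetres (pinhole model)."""
    x0, y0, x1, y1 = bbox
    roi = depth_mm[y0:y1, x0:x1]
    valid = roi[roi > 0]          # 0 = invalid depth pixel
    z = float(valid.mean())       # averaged depth in mm
    u = (x0 + x1) / 2.0           # ROI centroid in pixels
    v = (y0 + y1) / 2.0
    x = (u - cx) * z / fx         # pinhole back-projection
    y = (v - cy) * z / fy
    return x, y, z

# Hypothetical 640x400 depth frame with a person at ~500 mm in a centered ROI
depth = np.zeros((400, 640), dtype=np.uint16)
depth[150:250, 270:370] = 500
x, y, z = spatial_from_depth(depth, (270, 150, 370, 250),
                             fx=450.0, fy=450.0, cx=320.0, cy=200.0)
print(x, y, round(z))  # centered target -> x ~0, y ~0, z ~500 mm (0.5 m)
```

With correct depth alignment and ROI placement, a person at 50 cm should come out near z = 500 mm, nowhere near the ±3 m spread I'm seeing.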
Minimal Reproducible Example
Start
and watch the output of

```shell
ros2 topic echo /oak/nn/spatial_detections
```

while detecting something with the camera.
Expected behavior
I would expect outputs like in this example
I ran the example like this:

```shell
python3 spatial_tiny_yolo.py
```

Position log (x y z) while I'm sitting ~50 cm in front of the camera:
Can someone tell me why there is such a difference in output quality? It seems like a bug to me.
I also tried setting the following parameters in the .yaml config (to make the pipeline more similar to the example):