
Reduce latency ROS #541

Open
abegni opened this issue Jun 5, 2024 · 6 comments
Labels
question Further information is requested


abegni commented Jun 5, 2024

Hi,
I'm quite new to OAK cameras. I'm using an OAK-D Pro Wide PoE. I need some information on how to set and manage the camera and stereo parameters directly via ROS1 parameters. I read the guide https://docs.luxonis.com/software/ros/depthai-ros/driver/. At the end of it there is the list of parameters, but I didn't understand how to manage them in an easy way. I would like to set them to reduce latency, even at the expense of stereo image quality. I only need the RGB, point cloud, and depth image outputs, so I'm using rgbd_pcl.launch. With the default parameters the latency is around 1 second.
Any suggestion will be really appreciated.
Thank you

Serafadam (Collaborator) commented

Hi, for latency improvements you can also refer to this doc page; as for the ROS parameters/logic of the normal DAI nodes, enabling low_bandwidth mode and reducing the image size usually helps.
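For illustration, a minimal params-file sketch along those lines (the parameter names follow the driver's documented parameter list; the /oak namespace, the ISP-scaling parameters rgb_i_isp_num/rgb_i_isp_den, and the concrete values are assumptions, not a tested configuration):

    /oak:
      # Compress streams on-device before they cross the link
      rgb_i_low_bandwidth: true
      stereo_i_low_bandwidth: true
      # Scale the RGB ISP output down (sensor resolution * num/den)
      rgb_i_isp_num: 1
      rgb_i_isp_den: 3
      # Lower frame rates also reduce the data rate
      rgb_i_fps: 15
      stereo_i_fps: 15

Fewer bytes per second crossing the PoE link is usually what matters most here, since a saturated link is what drives the latency up.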


abegni commented Jun 10, 2024

As you suggested, I tried to use low_bandwidth_mode. The latency definitely decreased, but so did the depth image quality.
It's still not clear how exactly low_bandwidth_mode works, i.e. which parameters it changes.
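(Note: in depthai_ros_driver, enabling low_bandwidth for a stream inserts an on-device VideoEncoder, so frames cross the link MJPEG-compressed instead of raw; hence the bandwidth and latency win at the cost of compression artifacts. The encoder quality is exposed as a separate parameter. A minimal sketch, assuming the rgb_i_low_bandwidth_quality name from the driver's parameter list:)

    /oak:
      rgb_i_low_bandwidth: true
      # JPEG quality, 0-100: higher keeps more detail at the
      # cost of more bandwidth
      rgb_i_low_bandwidth_quality: 80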


abegni commented Jun 19, 2024

... It is still hard to understand which package (depthai_ros_driver or depthai_examples) is the better choice for starting the camera with both RGB and depth output images.

My setup:

  • Ubuntu 20.04
  • ROS Noetic
  • OAK-D Pro Wide PoE
  • Network Type: Ethernet max 1Gbps

With reference to the setup outlined above, I tried to use camera.launch (from the depthai_ros_driver package) with the following parameters (because otherwise it saturates the bandwidth); a consolidated params-file sketch is shown after the list:

  • camera_i_enable_imu: true
  • camera_i_enable_ir: true
  • camera_i_floodlight_brightness: 0.0
  • camera_i_ip: '192.168.2.10'
  • camera_i_laser_dot_brightness: 800.0
  • camera_i_mx_id: ''
  • camera_i_nn_type: none
  • camera_i_pipeline_type: rgbd
  • rgb_i_resolution: 1080P
  • stereo_i_resolution: 480P
  • rgb_i_fps: 10
  • right_i_fps: 10
  • left_i_fps: 10
  • stereo_i_fps: 10
  • rgb_i_low_bandwidth: true
  • left_i_low_bandwidth: true
  • right_i_low_bandwidth: true
  • stereo_i_low_bandwidth: true
  • left_i_publish_topic: false
  • right_i_publish_topic: false

However, the output is still far from acceptable (low quality, high latency). Additionally, I tried the depthai_examples package, more precisely the stereo_nodelet.launch sample launch file. stereo_nodelet.launch seems to perform well, but it doesn't contain the RGB stream, which I would also like to have.

Moreover, from what I observed, the C++ code and the launch files included in the depthai_ros_driver package and in depthai_examples are completely different from each other. At this point it is unclear which is the best way to set up the camera using ROS while achieving acceptable performance.

Thank you for any help

Serafadam (Collaborator) commented

Hi, could you check your network settings? With these parameters the latency should be negligible, so I suspect the issue might be on the network side. I think it would be best to start by checking whether your setup supports jumbo frames.


abegni commented Jun 25, 2024

This is my current setup:

In the Ubuntu netplan configuration:
    network:
      renderer: NetworkManager
      version: 2
      ethernets:
        enp7s0:
          dhcp4: no
          dhcp6: no
          addresses:
            - 192.168.2.1/24
          gateway4: 192.168.2.10
          mtu: 9000

In camera.cpp I modified the code in the Camera::getDeviceType() method as follows:

    /* Activate jumbo frames for the camera device */
    dai::BoardConfig board;
    board.network.mtu = 9000;                               // match the host-side MTU
    board.network.xlinkTcpNoDelay = false;                  // keep Nagle's algorithm enabled on the XLink TCP connection
    board.sysctl.push_back("net.inet.tcp.delayed_ack=1");   // enable delayed ACKs on the device
    /****************************************/

    pipeline = std::make_shared<dai::Pipeline>();

    /****************************************/
    pipeline->setBoardConfig(board);
    /****************************************/

Is there any way to know whether jumbo frames are actually used with this configuration?
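(A quick host-side check is possible without touching the firmware; a sketch, assuming a Linux host, the enp7s0 interface, and the addresses from the netplan configuration above:)

    # Confirm the host interface actually came up with MTU 9000
    ip link show enp7s0 | grep mtu

    # Send a non-fragmenting jumbo ping to the camera:
    # 8972 B ICMP payload + 28 B IP/ICMP headers = 9000 B packet.
    # If this fails while a normal ping succeeds, jumbo frames
    # are not making it end-to-end.
    ping -M do -s 8972 192.168.2.10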

Serafadam (Collaborator) commented

Hi, here you can find some additional information on testing jumbo frames. Additionally, I'm not sure if you have already checked it, but here is some additional documentation on PoE camera latency optimization.
