The Calculation Method of "latency_ms" in Node "multi_object_tracker" Seems Unreasonable #9428
Comments
@cyn-liu By turn
After testing, I believe that, as stated in the PR you provided, the reason for the instability of
@cyn-liu When the detection does not arrive in time (more than 100 ms + margin), the node publishes an estimated tracking result. Since the estimation is done from old measurements, the pipeline latency is enlarged by one additional cycle. Because of the extrapolation function and the fluctuating detection latency, the pipeline latency will fluctuate. In my understanding, this is by design for now.
@cyn-liu You could run an experiment that disables the extrapolation: autoware.universe/perception/autoware_multi_object_tracker/src/multi_object_tracker_node.cpp Line 242 in 5372403
You mentioned multiple times above that node
The following figure shows the latency: the time when
before_tracking_latency.mp4
Then you have a good object detection pipeline. That is a good thing.
What is this topic? Is this your custom topic for system analysis?
The pipeline latency is determined when the tracked object is published. If you set
Why is the oldest data used as the pipeline latency timing even when the tracker has been updated with newer data?
My custom topic: it is written at this position, debug/input_latency_ms. When I have a good object detection pipeline, my custom topic has stable values, but the value of
This is also my question: why does the Autoware code calculate it like this?
I asked myself why it is. I could somehow recall the situation. I think there are two perspectives on the latency measurement.
The current implementation is on the case 1 side. If we think case 2 is right, I think it needs to be fixed.
Overall, the naming of functions and variables is misleading now. This is because the trigger algorithm has changed over time.
Incoming
@cyn-liu Are you using multiple inputs to the multi_object_tracker? Let me figure out how to solve this.
The naming of functions or variables in the Autoware code is indeed somewhat misleading, but

```cpp
const rclcpp::Time oldest_time(objects_list.front().second.header.stamp);
last_updated_time_ = current_time;
// process start
debugger_->startMeasurementTime(this->now(), oldest_time);
```
My perception module uses
I have tested your PR and the results look consistent with your previous explanation.
have_delay_compensation.mp4
no_delay_compensation.mp4
Checklist

Description

`pipeline_latency_ms` represents the time it takes for the entire pipeline, from point cloud publishing to the completion of execution at the current node. We found that node `/perception/object_recognition/tracking/multi_object_tracker`'s `debug/pipeline_latency_ms` is large and the data fluctuates greatly.

pipeline_latency.mp4

We believe that there is a problem with the calculation method of `/perception/object_recognition/tracking/multi_object_tracker/debug/pipeline_latency_ms`, which cannot actually reflect the pipeline latency from publishing the point cloud to completing the operation of node `multi_object_tracker`.

Expected behavior
`/perception/object_recognition/tracking/multi_object_tracker/debug/pipeline_latency_ms` is slightly larger than `/perception/object_recognition/detection/object_lanelet_filter/debug/pipeline_latency_ms`.

Actual behavior
`/perception/object_recognition/tracking/multi_object_tracker/debug/pipeline_latency_ms` is much larger than `/perception/object_recognition/detection/object_lanelet_filter/debug/pipeline_latency_ms`.

Steps to reproduce
1. Run `logging_simulator`.
2. Use `rqt_plot` to view `/perception/object_recognition/tracking/multi_object_tracker/debug/pipeline_latency_ms`.

Versions
Possible causes

`onTime`.

Additional context
None