Ground truth txt format label for 3D tracking #4
Comments
In KITTI format, the JRDB camera box dimensions are slightly different from KITTI's: JRDB width = KITTI length. I would suggest swapping the width and length in the .txt files and then re-submitting to the server.
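For illustration, here is a minimal sketch of such a swap, assuming the submission .txt files follow the standard KITTI tracking column order (frame, track id, type, truncated, occluded, alpha, 2D bbox, dimensions h w l, location x y z, rotation_y, score); the results directory is a placeholder path:

```python
# Minimal sketch: swap the width and length values in KITTI-style tracking
# result .txt files before re-submitting. Assumes the standard KITTI tracking
# column order, where dimensions appear as height, width, length at 0-indexed
# columns 10, 11, 12. The "results" directory is a placeholder path.
import glob

def swap_width_length(txt_path):
    fixed = []
    with open(txt_path) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 13:
                # swap width (column 11) and length (column 12)
                fields[11], fields[12] = fields[12], fields[11]
            fixed.append(" ".join(fields))
    with open(txt_path, "w") as f:
        f.write("\n".join(fixed) + "\n")

for txt_path in glob.glob("results/*.txt"):
    swap_width_length(txt_path)
```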
Hi @20chase, please also make sure you're testing and submitting on the same dataset (JRDB 2019 vs JRDB 2022). Note that the images in the two datasets are indexed differently, which may cause issues if you're using the wrong set for testing.
Sure, we only downloaded JRDB 2019 and used only the point cloud as our algorithm's input.
Thanks for your suggestions. We will try that. BTW, is it possible to increase the submission quota to 5 tries per month?
Hi @20chase, please download the whole new JRDB 2022 dataset; otherwise you'll need to select "JRDB 2019" when making submissions.
Hi @ldtho @evendrow @JRDB-dataset, we also found conflicting descriptions of the tracking submission format on the official website. The upper line shows one format; we assume the upper-line format is correct and submitted our results following it.
Hi @20chase, thank you for pointing that out; the format should be
Hi @JRDB-dataset,
As I mentioned in a previous issue, I am writing to request the ground truth txt format labels for 3D tracking, which would help researchers correctly evaluate their algorithms on the training set.
Here is our situation:
For now, we are generating the ground truth txt files from the provided json files based on our own understanding. However, there is a large performance gap between our submission on the test set and our own evaluation on the training set: the test-set submission achieves only 18% MOTA, while our own evaluation on the training set reaches 35% MOTA. We are confused about this gap, and the absence of official ground truth txt format label files makes the problem hard to figure out. We think many other researchers may also encounter this issue. Looking forward to your reply. Thanks
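For reference, here is a minimal sketch of the kind of conversion we are doing, so the ambiguity is concrete. The json key names ("labels", "label_id", and the box fields cx/cy/cz, h/w/l, rot_z), the track-id parsing, and the KITTI-style output column order are our assumptions and would need to be checked against the actual annotation files and the official evaluation script; coordinate-frame handling is omitted.

```python
# Sketch: convert one JRDB 3D label json file into a KITTI-style tracking
# ground truth txt file. The json layout (a "labels" dict keyed by frame file
# name, each entry with "label_id" and a "box" holding cx/cy/cz, h/w/l, rot_z)
# is our assumption about the annotation format, not the official spec.
# Coordinate-frame conversion (e.g. lidar vs camera frame, box center vs box
# bottom) is intentionally left out of this sketch.
import json
import os

def convert_sequence(json_path, out_path):
    with open(json_path) as f:
        anno = json.load(f)

    lines = []
    # frames are assumed to be keyed by point-cloud file name, e.g. "000000.pcd"
    for frame_name in sorted(anno["labels"]):
        frame_id = int(os.path.splitext(frame_name)[0])
        for obj in anno["labels"][frame_name]:
            # track id assumed to be the numeric suffix of "pedestrian:<id>"
            track_id = int(obj["label_id"].split(":")[-1])
            b = obj["box"]
            # assumed KITTI tracking order: frame, id, type, truncated,
            # occluded, alpha, 2D bbox (unused), dimensions h w l,
            # location x y z, rotation_y
            lines.append(
                f'{frame_id} {track_id} Pedestrian -1 -1 -10 '
                f'-1 -1 -1 -1 '
                f'{b["h"]:.4f} {b["w"]:.4f} {b["l"]:.4f} '
                f'{b["cx"]:.4f} {b["cy"]:.4f} {b["cz"]:.4f} {b["rot_z"]:.4f}'
            )

    with open(out_path, "w") as f:
        f.write("\n".join(lines) + "\n")
```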