[NO MERGE] Evaluate detections with map_samples #5497
base: develop
Conversation
Full run against bdd100k: in general we do see an improvement as the number of workers increases, but the scaling is noticeably sub-linear.
@minhtuev it looks like relevant tests are failing...
Thanks @kaixi-wang for spotting it. I missed some small bugs when refactoring evaluate_detections, but they should only affect the single-thread/process case, not the multiprocessing case :)
Updated the runtime numbers, which show that running without parallel processing is really slow.
What changes are proposed in this pull request?

Changes to evaluate_detections to use the map_samples implementation, plus an evaluation script. A sketch of the general idea follows.
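For context, here is a minimal sketch of the pattern, not the PR's actual code: refactor the per-sample evaluation into a standalone function and fan it out across worker processes via map_samples. The map_samples call signature and its (sample_id, result) return shape are assumptions here, and the function body is a stand-in for the real detection-matching logic.

```python
import fiftyone.zoo as foz

dataset = foz.load_zoo_dataset("quickstart")

def eval_sample(sample):
    # Stand-in for the refactored per-sample evaluation; the real code
    # would match "predictions" against "ground_truth" and return the
    # sample's TP/FP/FN counts. Here we just count predicted objects.
    return len(sample["predictions"].detections)

# Assumed interface: apply eval_sample to every sample using a pool of
# worker processes, yielding one (sample_id, result) pair per sample
for sample_id, num_preds in dataset.map_samples(eval_sample, num_workers=8):
    print(sample_id, num_preds)
```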
How is this patch tested? If it is not, please explain why.

python eval_detection_benchmark_script.py --workers 4,8
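The benchmark script itself is not shown in this thread; a hypothetical harness along these lines would parse --workers as a comma-separated list and time each configuration. The parallelism parameter on evaluate_detections is the part this PR adds, so it is only indicated in a comment rather than invented here.

```python
# Hypothetical stand-in for eval_detection_benchmark_script.py: times
# evaluate_detections() once per requested worker count
import argparse
import time

import fiftyone.zoo as foz

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "--workers",
        type=lambda s: [int(w) for w in s.split(",")],
        default=[4, 8],
        help="comma-separated list of worker counts to benchmark",
    )
    args = parser.parse_args()

    dataset = foz.load_zoo_dataset("quickstart")

    for num_workers in args.workers:
        start = time.perf_counter()
        dataset.evaluate_detections(
            "predictions",
            gt_field="ground_truth",
            eval_key=f"eval_w{num_workers}",
            # the PR's parallelism parameter would be passed here,
            # e.g. num_workers=num_workers (the name is an assumption)
        )
        elapsed = time.perf_counter() - start
        print(f"workers={num_workers}: {elapsed:.1f}s")

if __name__ == "__main__":
    main()
```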
Release Notes
Is this a user-facing change that should be mentioned in the release notes?
Yes. Give a description of this change to be included in the release notes for FiftyOne users.

(Details in 1-2 sentences. You can just refer to another PR with a description if this PR is part of a larger change.)
What areas of FiftyOne does this PR affect?
fiftyone - Python library changes