Predict scores are lower than evaluate scores #1055
Comments
Hi. Maybe you filtered out the predictions with a low score (below 0.5). In the evaluation, the score threshold is 0.05 by default. Also, make sure you feed the images in the BGR format to the model.
@jsemric has a good point: the default score threshold for evaluation is 0.05, while in the example notebook it is set to 0.5. This could explain the difference you're seeing.
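To make the threshold point concrete, here is a minimal sketch (plain NumPy, with made-up confidence values) of how the two default thresholds change which detections are kept; in practice the `scores` array would come from `model.predict_on_batch`, as in the example notebook:

```python
import numpy as np

# Hypothetical detections for one image: scores as returned by
# keras-retinanet inference models (sorted descending, padded with -1).
scores = np.array([0.91, 0.62, 0.31, 0.07, -1.0, -1.0])

# The example notebook keeps only high-confidence detections:
kept_notebook = scores[scores >= 0.5]      # -> [0.91, 0.62]

# The evaluation keeps almost everything by default (threshold 0.05),
# because mAP is computed over the full ranked list of detections:
kept_evaluation = scores[scores >= 0.05]   # -> [0.91, 0.62, 0.31, 0.07]

print(len(kept_notebook), "detections kept at 0.5,",
      len(kept_evaluation), "kept at 0.05")
```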
Any update here?
Hello, |
Thanks for letting us know @ikerodl96, could be related to #647. I'm assuming the original issue is resolved though.
Hi @ikerodl96, @hgaiser. So I was comparing the code: I get to see this when running
@mariaculman18 instead of running
@hgaiser @ikerodl96 it worked! Thank you :) What I did in
Then I ran from my folder: I am attaching the modified script.
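The modified script is attached rather than shown, so just for reference: a minimal sketch of driving the evaluation from Python with an explicit score threshold, assuming the `CSVGenerator` and `evaluate` interfaces as I understand them (file names are placeholders, and the exact return format of `evaluate` has varied between versions):

```python
from keras_retinanet import models
from keras_retinanet.preprocessing.csv_generator import CSVGenerator
from keras_retinanet.utils.eval import evaluate

# Placeholder paths; substitute your own annotations, class mapping and
# converted (inference) model.
generator = CSVGenerator('annotations.csv', 'classes.csv')
model = models.load_model('resnet50_csv_inference.h5', backbone_name='resnet50')

# score_threshold defaults to 0.05 in the evaluation code; raising it to 0.5
# mirrors the filtering done in the example notebook.
results = evaluate(generator, model, score_threshold=0.5)
print(results)  # per-label average precision (return format varies by version)
```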
Hi!
I have successfully trained and evaluated a model.
My evaluation stats:
So precision is "1.0000" (I am evaluating on the same training data).
But when I run predict from this example:
https://github.com/fizyr/keras-retinanet/blob/master/examples/ResNet50RetinaNet.ipynb
I receive lower scores (e.g. "score": 0.905562162399292) on the same images.
Why do the predict scores differ from the evaluate scores?
Thanks.
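For context, the prediction path in the linked notebook looks roughly like this (a sketch with placeholder paths); the "score" values it prints are per-detection confidences, and the `score < 0.5` cut-off is where it differs from the evaluation default of 0.05:

```python
import numpy as np
from keras_retinanet import models
from keras_retinanet.utils.image import read_image_bgr, preprocess_image, resize_image

# Placeholder paths; use your converted inference model and a training image.
model = models.load_model('resnet50_csv_inference.h5', backbone_name='resnet50')
image = read_image_bgr('example.jpg')   # the model expects BGR input
image = preprocess_image(image)         # subtract the ImageNet channel means
image, scale = resize_image(image)

boxes, scores, labels = model.predict_on_batch(np.expand_dims(image, axis=0))
boxes /= scale                          # map boxes back to the original image size

for box, score, label in zip(boxes[0], scores[0], labels[0]):
    if score < 0.5:   # the notebook drops low-confidence detections here
        break         # detections are sorted by score, so we can stop early
    print(label, score, box)
```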