How is accuracy measured? #132

Open
Robin2091 opened this issue Dec 25, 2019 · 6 comments

@Robin2091

Robin2091 commented Dec 25, 2019

Hello,

I was wondering how the accuracy of the model is measured and how it differs from mAP. I am training with YOLOv3-tiny and I get only a 30 percent accuracy for "yolo_0_output".

I fixed my dataset, got more data, and was then able to get a 45 percent accuracy. However, throughout training the accuracy just fluctuated between 40 and 47 percent and I didn't see any significant improvement. I then retrained and was back to a 30 percent accuracy for some reason. So I don't understand why the accuracy fluctuates so much, or why I get different accuracies when I train on two separate occasions with the same dataset. Also, I measured the mAP with another repo, and despite the differences in accuracy I still get around the same mAP of 65-68 percent.

If someone can shed some light on how accuracy is measured and why I am seeing fluctuating accuracies it would be really helpful.

Thank you

Edit: I should add that the first time I trained on my fixed dataset (45 percent accuracy) I had the IoU threshold at 0.3, but the second time I trained I had it at 0.1. I don't know if this affects the accuracy of the model though.

@zzh8829
Owner

zzh8829 commented Dec 27, 2019

Did you use transfer learning? How are you measuring the accuracy?

@Robin2091
Author

Robin2091 commented Dec 27, 2019

@zzh8829 Yes, I used transfer learning (darknet). In the model.compile call I put metrics=['accuracy']. I also tested the mAP with a different repo and got an AP of 70 percent at an IoU of 0.3.
my code:
[code screenshot]
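In outline it was along these lines, following the repo's train.py (the class count and optimizer settings below are placeholders, not the exact values in my screenshot):

```python
import tensorflow as tf
from yolov3_tf2.models import (
    YoloV3Tiny, YoloLoss, yolo_tiny_anchors, yolo_tiny_anchor_masks
)

num_classes = 2   # placeholder: set to your own class count
size = 416

model = YoloV3Tiny(size, training=True, classes=num_classes)

# One loss per output head, as in the repo's train.py.
loss = [YoloLoss(yolo_tiny_anchors[mask], classes=num_classes)
        for mask in yolo_tiny_anchor_masks]

# metrics=['accuracy'] makes Keras report a separate "accuracy" for each
# output head on the raw grid-cell tensors -- this is the number I was
# seeing, and it is not mAP.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss=loss,
              metrics=['accuracy'])
```

With a list of losses like this, Keras reports one metric per output head (named after the output layer), which is where a figure like 30-45 percent for a single head comes from.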

@imAhmadAsghar

imAhmadAsghar commented Feb 27, 2020


Hi,
Can you please share which other repo you used to calculate the mAP, and how?
Did you use the trained weights or something?

@Robin2091
Author

Robin2091 commented Feb 27, 2020

@asquare92 I don't believe the accuracy metric gives a good measure of the actual performance of the model, because non-max suppression is not applied during training. I am also not sure that the built-in accuracy metric does the same calculations as mAP, so I used a separate repo. Yes, I used the trained weights.

https://github.com/rafaelpadilla/Object-Detection-Metrics
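To make the distinction concrete, here is a tiny generic Keras example (not necessarily the exact metric variant the training script ends up using): an "accuracy"-style metric only counts elementwise matches between y_true and y_pred, with no box decoding, IoU matching, or NMS, which is what mAP requires.

```python
import tensorflow as tf

# Generic illustration: Keras "accuracy" is an elementwise match between
# y_true and y_pred, not an IoU-based detection metric like mAP.
m = tf.keras.metrics.Accuracy()
m.update_state([[1], [2], [3], [4]], [[0], [2], [3], [4]])
print(m.result().numpy())  # 0.75 -- the fraction of entries that match exactly
```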

@imAhmadAsghar

@Robin2091 Thanks.
Just a small question: how did you make the detection files after training the model with this repo?
And for the ground-truth files, do I just use my annotations?

@Robin2091
Author

Robin2091 commented Feb 27, 2020

@asquare92 I loaded the model with the trained weights, looped through a set of images, ran detection on each image, and saved the bounding-box information to a text file. Yes, use your annotations as the ground-truth files.
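Roughly, the loop looked like this, using the repo's YoloV3Tiny and transform_images helpers and writing one detection file per image in the format documented by rafaelpadilla/Object-Detection-Metrics (<class_name> <confidence> <left> <top> <right> <bottom>, in pixels). The checkpoint path, class file, and image folder below are placeholders:

```python
import os
import glob
import tensorflow as tf
from yolov3_tf2.models import YoloV3Tiny
from yolov3_tf2.dataset import transform_images

# Placeholder paths -- adjust to your own checkpoint, class file, and image folder.
class_names = [c.strip() for c in open('data/my_classes.names').readlines()]
yolo = YoloV3Tiny(classes=len(class_names))
yolo.load_weights('checkpoints/yolov3_tiny_train.tf').expect_partial()

size = 416
os.makedirs('detections', exist_ok=True)

for path in glob.glob('val_images/*.jpg'):
    img_raw = tf.image.decode_image(open(path, 'rb').read(), channels=3)
    img = transform_images(tf.expand_dims(img_raw, 0), size)
    boxes, scores, classes, nums = yolo(img)

    h, w = img_raw.shape[0], img_raw.shape[1]
    name = os.path.splitext(os.path.basename(path))[0]
    with open(os.path.join('detections', name + '.txt'), 'w') as f:
        for i in range(nums[0]):
            x1, y1, x2, y2 = boxes[0][i].numpy()  # normalized (x1, y1, x2, y2)
            # One line per detection, in the format the metrics repo expects:
            # <class_name> <confidence> <left> <top> <right> <bottom>
            f.write('{} {:.6f} {:.0f} {:.0f} {:.0f} {:.0f}\n'.format(
                class_names[int(classes[0][i])], float(scores[0][i]),
                x1 * w, y1 * h, x2 * w, y2 * h))
```

The ground-truth files use the same layout minus the confidence column (<class_name> <left> <top> <right> <bottom>), one text file per image, as documented in that repo's README, so the existing annotations only need to be converted to that format.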
