How is accuracy measured? #132
Did you use transfer learning? How are you measuring the accuracy?
@zzh8829 Yes, I used transfer learning (starting from the Darknet pretrained weights). In the `model.compile` call I set `metrics = ['accuracy']`. I also tested the mAP with a different repo and got an AP of 70 percent at an IoU threshold of 0.3.
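For context, mAP evaluation counts a detection as a true positive only if its IoU (intersection over union) with a ground-truth box meets the chosen threshold — 0.3 in the comment above. A minimal sketch of the IoU computation for axis-aligned boxes in `(x1, y1, x2, y2)` format (illustrative only, not code from this repo):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two 10x10 boxes shifted by half their width overlap with IoU = 50/150 = 1/3,
# so they would count as a match at a 0.3 threshold but not at 0.5.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.333...
```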
Hi,
@asquare92 I don't believe the accuracy metric gives a good measure of the model's actual performance, because non-max suppression is not applied during training. I am also not sure the built-in accuracy metric performs the same calculation as mAP, so I used a separate repo. Yes, I used the trained weights.
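To illustrate the point about non-max suppression: at inference time, overlapping boxes predicted for the same object are collapsed to the highest-scoring one, whereas during training the raw per-anchor outputs are scored directly, so a training-time metric sees a different set of predictions than the mAP evaluation does. A minimal greedy NMS sketch (illustrative only, not this repo's implementation):

```python
def iou(a, b):
    """IoU of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def nms(detections, iou_threshold=0.5):
    """detections: list of (score, box) tuples; returns the kept subset."""
    kept = []
    # Visit boxes from highest to lowest score.
    for score, box in sorted(detections, reverse=True):
        # Keep a box only if it doesn't heavily overlap an already-kept one.
        if all(iou(box, kept_box) <= iou_threshold for _, kept_box in kept):
            kept.append((score, box))
    return kept

dets = [(0.9, (0, 0, 10, 10)), (0.8, (1, 1, 10, 10)), (0.7, (20, 20, 30, 30))]
print(nms(dets))  # suppresses the 0.8 box (IoU 0.81 with the 0.9 box)
```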
@Robin2091 Thanks.
@asquare92 I loaded the model with the trained weights, looped through a set of images, and ran detection on each one. I saved the bounding-box information in a text file. Yes, use your annotations as the ground-truth files.
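The loop described above might look roughly like this. `run_detection` is a hypothetical stand-in for the trained model's inference call, and the one-line-per-box text format (`class score x1 y1 x2 y2`) is an assumption — mAP repos differ in the format they expect, so check the one you use:

```python
from pathlib import Path

def run_detection(image_path):
    """Hypothetical stand-in for the model call; returns (class, score, box)."""
    return [("car", 0.91, (34, 50, 120, 180)),
            ("person", 0.77, (200, 40, 260, 190))]

def save_detections(image_paths, out_dir):
    """Write one text file per image, 'class score x1 y1 x2 y2' per line."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    for image_path in image_paths:
        lines = []
        for cls, score, (x1, y1, x2, y2) in run_detection(image_path):
            lines.append(f"{cls} {score:.2f} {x1} {y1} {x2} {y2}")
        # Detection file shares the image's stem: img_001.jpg -> img_001.txt,
        # so the evaluator can pair it with the matching ground-truth file.
        (out_dir / (Path(image_path).stem + ".txt")).write_text("\n".join(lines))

save_detections(["img_001.jpg"], "detections")
print(Path("detections/img_001.txt").read_text())
```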
Hello,
I was wondering how the accuracy of the model is measured and how it differs from mAP. I am training YOLOv3-tiny and get only 30 percent accuracy for "yolo_0_output".
After fixing my dataset and gathering more data, I got to 45 percent accuracy. However, throughout training the accuracy just fluctuated between 40 and 47 percent with no significant improvement. When I retrained, I was back at 30 percent accuracy for some reason. So I don't understand why the accuracy fluctuates so much, or why I get different accuracies from two training runs on the same dataset. I also measured mAP with another repo, and despite the differences in accuracy I still get roughly the same mAP of 65-68 percent.
If someone can shed some light on how accuracy is measured and why I am seeing fluctuating accuracies it would be really helpful.
Thank you
Edit: I should add that the first time I trained on my fixed dataset (45 percent accuracy) I had the IoU threshold at 0.3, but the second time I trained I had it at 0.1. I don't know if this affects the accuracy of the model, though.
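On that IoU question: the threshold decides which detections count as true positives when matched against ground truth, so it directly shifts any accuracy-like number even when the underlying predictions are identical. A toy illustration with hypothetical boxes and simple greedy one-to-one matching (real mAP evaluators also rank by confidence, which is omitted here):

```python
def iou(a, b):
    """IoU of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def true_positives(preds, gts, thresh):
    """Greedy matching: each ground-truth box may match at most one prediction."""
    unmatched = list(gts)
    tp = 0
    for p in preds:
        for g in unmatched:
            if iou(p, g) >= thresh:
                tp += 1
                unmatched.remove(g)
                break
    return tp

preds = [(0, 0, 10, 10), (18, 18, 30, 30)]
gts = [(4, 0, 14, 10), (25, 25, 35, 35)]
# The second pair overlaps with IoU ~= 0.11: a hit at threshold 0.1,
# a miss at threshold 0.3 -- same predictions, different score.
print(true_positives(preds, gts, 0.1))  # 2
print(true_positives(preds, gts, 0.3))  # 1
```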