Inference on video #3
Comments
This should be straightforward using the existing inference code; if not, it should be easy to modify following the OpenCV video capture tutorial: https://docs.opencv.org/3.4/dd/d43/tutorial_py_video_display.html. Currently there isn't support for outputting a labelled video file, but that would be fairly straightforward to add with OpenCV's video writer. Let me know how you get on!
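For anyone landing here later, a minimal sketch of per-frame video inference with OpenCV, along the lines of the tutorial linked above. The file paths are examples, and the commented-out run_detector/draw_boxes calls are hypothetical placeholders for whatever inference and drawing functions your own setup provides; they are not part of this repo's API.

```python
import cv2

cap = cv2.VideoCapture("input.mp4")            # input path is an example
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0        # fall back if FPS is unreported
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Write an annotated copy of the video with the same size and frame rate.
out = cv2.VideoWriter("labelled.mp4",
                      cv2.VideoWriter_fourcc(*"mp4v"),
                      fps, (width, height))

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Hypothetical hooks: replace with whatever detection and box-drawing
    # functions you are using; as written this just copies frames through.
    # detections = run_detector(frame)
    # frame = draw_boxes(frame, detections)
    out.write(frame)

cap.release()
out.release()
```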
Thanks for your answer!
You need to provide a dataset/names file. The tflite file alone doesn't have any information about class names; it just returns an ID, which by default is mapped to the COCO class names.
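As a rough illustration of the ID-to-name mapping described above. The file name "shark.names" and the one-name-per-line format are assumptions; use whatever names/dataset file matches your training.

```python
# Load class names, one per line; the file name is an example.
with open("shark.names") as f:
    class_names = [line.strip() for line in f if line.strip()]

class_id = 0  # e.g. an integer class ID returned by the tflite model
print(class_names[class_id])
```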
Ok thanks
If you modify the dataset/names file, the detector should report your own class names.
Please verify your model with the official yolov5 repository and check that you get the expected result with your tflite export.
This seems like an issue with your model rather than with this library; does it work on a simpler image of a shark, for example?
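For example, a sanity check along these lines in a checkout of the upstream Ultralytics yolov5 repo; the weights and image filenames are placeholders, and this assumes a yolov5 version recent enough for detect.py to accept .tflite weights.

```
python detect.py --weights your-model-int8.tflite --source shark.jpg
```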
On Mon, 21 Feb 2022 at 15:40, constantinfite wrote:
Yeah, it works, thanks, but look at my detection: the bounding box takes up the whole image, so the detection is not working.
[image]
https://user-images.githubusercontent.com/57963890/154976564-43ff0e61-0b71-46de-8fc6-9dd27a3b43e6.png
OK, so this is with the main Ultralytics repository, and when you run it on the same image there you get rubbish?
That's strange. If you're happy to share your weights (the non-edgetpu-compiled ones, so I can check whether it's an issue with the compilation) and that image, I can take a look. I can maybe add a debug mode that runs the plain tflite model instead of the edgetpu one, to confirm. Feel free to email me (my username at gmail) if you don't want to share publicly.
By the way, very long inference times are normal on tflite CPU (I'm fairly sure it doesn't use the GPU at all). I'm not sure why it's so poorly optimised, but I get the same with my edge models. The same thing run on the Coral should be near-instant.
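For reference, a rough sketch of the CPU-vs-Edge-TPU difference mentioned above, using tflite_runtime. The filenames are examples, and the delegate variant requires a Coral device plus the *_edgetpu.tflite file produced by edgetpu_compiler.

```python
from tflite_runtime.interpreter import Interpreter, load_delegate

# Plain CPU tflite interpreter (the slow path described above).
cpu_interpreter = Interpreter(model_path="best-shark-yolov5s-int8.tflite")
cpu_interpreter.allocate_tensors()

# Edge TPU accelerated interpreter (needs the compiled model and a Coral).
tpu_interpreter = Interpreter(
    model_path="best-shark-yolov5s-int8_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
tpu_interpreter.allocate_tensors()
```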
On Tue, 22 Feb 2022 at 10:23, constantinfite wrote:
So I exported my model using this command: python export.py --weights best-shark-yolov5s.pt --include tflite --int8. I get a file best-shark-yolov5s-int8.tflite.
I ran the detection on a simple image, but it takes a very long time: 18 seconds for a single image!
The detection is working at least:
[image]
https://user-images.githubusercontent.com/57963890/155101967-0f16754a-f557-4d07-9a04-a39466dc7ec6.png
My bad, I was only running inference on the Coral with the plain TensorFlow Lite model; I still have to run the Edge TPU Compiler before running on the Coral.
I tried with the standard yolov5s model and the detection works great on the Coral on a video, so I think the problem is my model. The model was trained on yolov5 v3.1, so maybe it's deprecated.
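For reference, the compile step mentioned here is roughly the following; the filename is an example, and edgetpu_compiler writes a *_edgetpu.tflite file alongside the input.

```
edgetpu_compiler best-shark-yolov5s-int8.tflite
# produces best-shark-yolov5s-int8_edgetpu.tflite
```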
I did the Edge TPU Compiler step on Google Cloud for my model and it works, but when I run the detection on my Coral with the compiled edge model it behaves the same as before: the detection is slow and the bounding boxes are not correct.
Ok, I'll take a look at the model you sent over when I get a chance. It's possible that the compilation for the Edge TPU makes the model perform poorly. If we can't figure it out, you can also contact the Edge TPU guys directly about this; they're generally quite helpful and can look at your input/output models.
Hi, I would like to know if it is possible to run inference on a video?
Thanks