Hi,
I am interested in your solution. I have a few questions:
Can we integrate our own custom TFLite object detection model (e.g. one trained with Google's AutoML)?
If yes, how many FPS can we expect on a Raspberry Pi 4B when our custom-trained TFLite model (from AutoML) is integrated into your GPIO-based software?
According to your documentation, the trigger mechanism checks several criteria before firing, such as a minimum probability of 50%, the area occupied, motion, etc. Are these criteria customizable?
Currently the trigger mechanism decides whether to fire based on rules applied to every single image. Would it be possible to add a further criterion that fires the trigger only if the object is detected in more than a certain fraction (say 50%) of the last 5 (or 10, or 20) frames? This would make the trigger more robust and reduce false positives.
I'm using the ncnn framework running a YOLO derivative, wrapped in a library. You can certainly use a tailor-made TFLite model.
With floating point, most networks run at 2-5 FPS on a Raspberry Pi 4. Integer (quantized) models are much faster; up to 20 FPS is possible.
All criteria are defined in a settings file. You can change them whenever you like.
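Purely as an illustration of what such a settings file could contain, a minimal sketch might look like this (the key names and format here are hypothetical, not the actual file shipped on the image):

```ini
; hypothetical trigger settings, for illustration only
[trigger]
min_probability = 0.5    ; minimum detection confidence before firing
min_area        = 0.02   ; minimum fraction of the frame the object must occupy
motion_check    = true   ; require motion in the scene before firing
```

Check the actual file on the image for the real key names and value ranges.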
That's possible.
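The frame-voting idea above could be sketched roughly like this in C++. The class name and interface are hypothetical, not part of the project's actual API; it simply keeps the last N per-frame decisions and fires only when enough of them were positive:

```cpp
#include <cstddef>
#include <deque>

// Hypothetical helper: fire the trigger only when the object was detected
// in at least `ratio` of the last `window` frames.
class TemporalTrigger {
public:
    TemporalTrigger(std::size_t window, double ratio)
        : window_(window), ratio_(ratio) {}

    // Feed the per-frame decision (true = the existing rules matched on this
    // frame); returns true when the trigger should actually fire.
    bool Update(bool detected) {
        history_.push_back(detected);
        if (history_.size() > window_) history_.pop_front();

        std::size_t hits = 0;
        for (bool d : history_)
            if (d) ++hits;

        // Only fire once the window is full and enough frames agree.
        return history_.size() == window_ &&
               hits >= static_cast<std::size_t>(ratio_ * window_);
    }

private:
    std::deque<bool> history_;
    std::size_t window_;
    double ratio_;
};
```

You would call `Update()` once per frame with the outcome of the existing rule checks, and raise the GPIO trigger only when it returns true.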
Do keep in mind that you have to program the above modifications yourself. All C++ code is provided on the image.
The integration of TFLite in particular can be cumbersome.