This would help a lot with understanding and exploring how well one's model actually works. Being able to explore the data and see how many of the nodes jump around a lot (e.g. from mouse1 to mouse2) can help compare the model's own internal performance metrics with its actual performance. In addition to the velocity of single keypoints, the number of NaNs (undetected keypoints) in a given prediction output file could also help. (keypoint-moseq has something like this implemented here: https://keypoint-moseq.readthedocs.io/en/latest/FAQs.html#high-proportion-of-nans )
We plan to use several heuristics as quality-control metrics for the predictions (and to detect outliers):
temporal smoothness (i.e. keypoints jumping too much from frame to frame)
pose plausibility (if the entire configuration of the body looks "weird")
multi-view consistency (if you have more than one camera, do their individual predictions agree with each other?)
We've borrowed these ideas from LightningPose, and they're described here. We haven't implemented these yet, but hopefully we'll start working on them soon.
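To illustrate the first heuristic, here's a minimal sketch of temporal-smoothness outlier detection. It assumes a hypothetical predictions array of shape `(n_frames, n_keypoints, 2)` and a made-up robust threshold (median + 5×MAD) — not the actual Lightning Pose implementation, just the idea:

```python
import numpy as np

# Hypothetical predictions array: (n_frames, n_keypoints, 2) x/y coordinates.
rng = np.random.default_rng(0)
poses = rng.normal(size=(100, 5, 2)).cumsum(axis=0)  # smooth random walk
poses[50, 2] += 30.0  # inject one large artificial jump at frame 50, keypoint 2

# Frame-to-frame displacement magnitude per keypoint: (n_frames-1, n_keypoints).
vel = np.linalg.norm(np.diff(poses, axis=0), axis=-1)

# Flag jumps exceeding a robust threshold (median + 5*MAD; tune per dataset).
med = np.median(vel)
mad = np.median(np.abs(vel - med))
outliers = np.argwhere(vel > med + 5 * mad)
print(outliers)  # (frame, keypoint) indices of suspiciously large jumps
```

The same per-keypoint velocity array could also feed the kind of interactive exploration described above.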
Regarding the reporting of NaN values, we already have a function for that. I'd encourage you to read the two examples below to see this feature in action.
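For reference, the underlying computation — the fraction of frames in which each keypoint was not detected — is straightforward. This is just an illustrative sketch on a made-up array, not the package's built-in function:

```python
import numpy as np

# Hypothetical prediction array with NaNs marking undetected keypoints:
# shape (n_frames, n_keypoints, 2).
preds = np.array([
    [[1.0, 2.0], [np.nan, np.nan], [3.0, 4.0]],
    [[1.1, 2.1], [5.0, 6.0],       [np.nan, np.nan]],
    [[1.2, 2.2], [np.nan, np.nan], [3.2, 4.2]],
])

# A keypoint counts as missing in a frame if any coordinate is NaN;
# average over frames to get the per-keypoint missing fraction.
nan_frac = np.isnan(preds).any(axis=-1).mean(axis=0)
print(nan_frac)  # per-keypoint fraction of frames with no detection
```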