Why does Lightning Pose extract black and white frames from colored video? #81
@ReetKaur15 this shouldn't make a difference for training and inference unless you are tracking a keypoint that is associated with a specific color. Nevertheless, I'll look into where the color -> black+white conversion happens. So far all of the videos we've worked with have been grayscale.
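To illustrate why the conversion usually doesn't affect keypoint tracking: a standard color-to-grayscale mapping is a per-pixel weighted sum of the channels, so the spatial geometry of the frame (and hence every keypoint location) is preserved; only chromatic information is discarded. Below is a minimal sketch of such a conversion using the common ITU-R BT.601 luma weights. This is an illustrative example, not necessarily the exact conversion Lightning Pose performs internally.

```python
import numpy as np

def to_grayscale(frame: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 RGB frame to HxW grayscale via BT.601 luma weights.

    Each output pixel is a weighted sum of that pixel's R, G, B values,
    so pixel positions -- and therefore keypoint coordinates -- are unchanged.
    """
    weights = np.array([0.299, 0.587, 0.114])
    return (frame @ weights).astype(frame.dtype)

# toy 4x4 color frame
frame = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
gray = to_grayscale(frame)
# gray.shape == (4, 4): the channel axis is gone, spatial layout intact
```

Only a keypoint whose identity depends on color (e.g. distinguishing two otherwise identical markers by hue) would lose information under this mapping.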
Thanks @themattinthehatt
Hi @themattinthehatt, does Lightning Pose perform any post-filtering on predictions, as DeepLabCut does?
No, we do not. If you want to use one of the simple filters offered by DLC, I'd recommend a median filter over the ARIMA model. For a better-performing (though more computationally intensive) filtering algorithm, you should check out our Ensemble Kalman Smoother (EKS) method (see Fig 5 in the preprint). This is actually built into the app now - if you'd like to give it a try you'll need to first update the app by doing the following:
Next time you launch the app, you'll see some changes in the Train/Infer tab; see the docs for how to run EKS inside the app.
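For readers who want to apply the median-filter idea mentioned above outside of any app: a sliding-window median replaces each prediction with the median of its temporal neighborhood, which suppresses single-frame glitches without smearing the trajectory the way a mean filter would. The sketch below is a generic NumPy implementation under my own assumptions (edge padding, odd window), not DLC's or Lightning Pose's actual filtering code.

```python
import numpy as np

def median_filter_1d(x: np.ndarray, window: int = 5) -> np.ndarray:
    """Sliding-window median over a 1-D keypoint trajectory.

    `window` must be odd; edges are handled by repeating the boundary values.
    Apply separately to each coordinate (x, y) of each keypoint.
    """
    assert window % 2 == 1, "window must be odd"
    half = window // 2
    padded = np.pad(x, half, mode="edge")
    # view of all length-`window` neighborhoods, one per original frame
    windows = np.lib.stride_tricks.sliding_window_view(padded, window)
    return np.median(windows, axis=-1)

# toy x-coordinate trajectory with a single-frame outlier at index 3
x = np.array([0.0, 1.0, 2.0, 50.0, 4.0, 5.0, 6.0])
smoothed = median_filter_1d(x, window=5)
# the glitch at index 3 is replaced by the median of its neighbors
```

A median filter like this is cheap and robust to isolated jumps, but unlike EKS it cannot exploit agreement across an ensemble of networks or a motion model.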
Hi Team,
I extracted video frames using the Lightning Pose app. I am interested in knowing:
"Why does Lightning Pose extract black and white frames from colored video? Does it make any difference in training the model and in the model's performance when tested on colored video?"
Thanks in advance