Confusion matrix? #188

Jan 17, 2023 · 1 comment · 1 reply

Hey! It was meant to be more of a visual confirmation that everything is working as intended, so essentially, as you say, something to eyeball. Unfortunately, it is quite hard to estimate exactly how well the training worked in any real sense. You can of course plot your own confusion matrix, along with the calculated accuracy and validation accuracy on the dataset, but that does not tell you whether the IDs are actually being assigned correctly. High scores could also mean that the dataset is suboptimal (e.g. auto-correlated samples) and that you're overfitting to it. That's what the "uniqueness" measure was meant to mitigate, at least in part. It is simply a measure of "of a subset of all frames, distributed ac…
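
For anyone wanting to do the "plot your own confusion matrix" part themselves, here is a minimal sketch (not from the original answer) using scikit-learn and matplotlib. It assumes you have exported per-frame ID assignments on your own; `y_true` and `y_pred` are hypothetical arrays of ground-truth and predicted identity labels:

```python
# Minimal sketch: confusion matrix of predicted vs. ground-truth identities.
# `y_true` / `y_pred` are placeholder example data, not output of the tool.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

y_true = np.array([0, 0, 1, 1, 2, 2])  # ground-truth identities per sample
y_pred = np.array([0, 0, 1, 2, 2, 2])  # identities assigned by the network

# Rows are true identities, columns are assigned identities; off-diagonal
# entries show which IDs get confused with each other.
cm = confusion_matrix(y_true, y_pred)
disp = ConfusionMatrixDisplay(confusion_matrix=cm)
disp.plot(cmap="Blues")
plt.title("Identity assignment confusion matrix")
plt.show()
```

As noted above, a clean-looking matrix on the training dataset is still no guarantee that IDs are assigned correctly on the full video, which is exactly the gap the "uniqueness" measure is meant to address.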

Answer selected by walsmanjason