Hi @m3hrdadfi,
Thank you for the great repository.
I just want to confirm: in the colab that you provided, the evaluation and test sets are the same.
It is intended for demo purposes only, right? Since the test set is included in the training process (as eval_dataset), it is not a big surprise that the performance was high.
@bagustris was wondering about the same thing. But it seems he splits the data and holds out 20% as a test set.
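For reference, a minimal sketch of such a split using the Hugging Face datasets library (the file name, 80/20/10 ratios, and seed here are assumptions for illustration, not what the colab actually uses):

```python
from datasets import load_dataset

# Hypothetical CSV; the actual colab loads its own data.
dataset = load_dataset("csv", data_files="data.csv")["train"]

# Hold out 20% as a test set that never enters training.
split = dataset.train_test_split(test_size=0.2, seed=42)
test_dataset = split["test"]

# Split the remainder into train and validation (dev) sets.
inner = split["train"].train_test_split(test_size=0.1, seed=42)
train_dataset = inner["train"]
dev_dataset = inner["test"]
```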
In my case, I have train_dataset, validation (dev_dataset), and evaluation (test_dataset) sets. At the end of the colab there's this:
Then, to validate the model on the test set, should I set up the CTCTrainer again, change the eval_dataset param to eval_dataset=test_dataset, and then run trainer.train() again? @m3hrdadfi
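For what it's worth, if CTCTrainer follows the standard Hugging Face Trainer API, the test set could be scored after training without calling trainer.train() a second time. A minimal sketch, assuming model, training_args, data_collator, compute_metrics, and the three datasets from the colab already exist (all names here are taken from the discussion or assumed):

```python
# Sketch only, assuming CTCTrainer subclasses transformers.Trainer.
trainer = CTCTrainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
    train_dataset=train_dataset,
    eval_dataset=dev_dataset,  # validation set used during training
)

trainer.train()

# After training, score the held-out test set directly;
# Trainer.evaluate accepts an alternative dataset.
test_metrics = trainer.evaluate(eval_dataset=test_dataset)
print(test_metrics)
```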