Fine-tuning the pre-trained model on custom dataset #18

Open

marielvanstav opened this issue Apr 19, 2018 · 0 comments

@marielvanstav

Hi there! I would like to fine-tune the pre-trained model on a custom dataset. If I understand correctly, I have to train a new word2vec model and a new skip-instructions model on the custom dataset, use these models to build a custom HDF5 file, and then fine-tune the pre-trained model by training it on the custom HDF5 file with the parameter `-finetune 1`.

Is this the correct way to do this? I’m asking because I’m not sure whether I should train the word2vec and skip-instructions models on the custom dataset only, or on a concatenation of Recipe1M and the custom dataset. Additionally, `main.lua` contains `opts.finetune = opts.finetune ~= 0`, but I have not been able to figure out how this parameter is used during training. Thanks!
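
For reference, here is how I currently read that line: in Lua, `x ~= 0` evaluates to a boolean, so the numeric command-line flag is coerced into true/false. The sketch below is only my own illustration of how such a flag is typically declared and consumed; the option declaration, snapshot path, and `build_model` call are my guesses, not the actual code in `main.lua`:

```lua
require 'torch'

-- Illustrative sketch only, not the actual im2recipe code:
-- how a numeric -finetune flag is usually parsed and coerced to a boolean.
local cmd = torch.CmdLine()
cmd:option('-finetune', 0, 'set to 1 to fine-tune from a pre-trained snapshot') -- hypothetical declaration
local opts = cmd:parse(arg)

-- The line quoted above: any non-zero value becomes true, 0 becomes false.
opts.finetune = opts.finetune ~= 0

local model
if opts.finetune then
  -- e.g. resume from a pre-trained snapshot instead of initializing from scratch
  -- (hypothetical; the real main.lua may gate different behaviour on this flag)
  model = torch.load('snapshots/pretrained_model.t7')
else
  model = build_model(opts) -- hypothetical model constructor
end
```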
