--nce #20
Comments
Hi Tony,
Yes
Here we use a single-layer RNN, so the dropout config is ignored; that should be fixed later.
Something must be wrong there.
The loss criterion differs between NCE training and evaluation (NCE vs. cross-entropy): the training PPL is only the perplexity over the positive sample and the noise samples, so by definition it is lower than the real perplexity over the whole vocabulary. Looking forward to further discussion!
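The gap described above can be sketched numerically. This is a toy calculation with invented scores, not output from the actual model: restricting the softmax to the positive word plus a few noise samples necessarily gives a lower perplexity than normalising over the whole vocabulary.

```python
import math

# Unnormalized scores (logits) for a toy vocabulary of 6 words;
# index 0 is the true next word. All numbers are invented.
logits = [2.0, 1.5, 1.0, 0.5, 0.0, -0.5]

def softmax_nll(scores, target=0):
    """Negative log-likelihood of `target` under a softmax over `scores`."""
    z = sum(math.exp(s) for s in scores)
    return -math.log(math.exp(scores[target]) / z)

# "Real" perplexity: softmax over the whole vocabulary.
full_ppl = math.exp(softmax_nll(logits))

# Training-time number: softmax over the positive sample plus a few
# noise samples only (here indices 0, 2 and 4).
subset = [logits[i] for i in (0, 2, 4)]
subset_ppl = math.exp(softmax_nll(subset))

print(full_ppl, subset_ppl)  # the subset perplexity is the smaller one
```

Because the restricted softmax competes against fewer alternatives, its perplexity can never exceed the full-vocabulary one, which is why the training PPL looks suspiciously low.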
Hey, thanks for getting back to me so quickly. I'm not really concerned about the argparse or dropout issues; this is the best public code for NCE in language modelling I could find, which is a great achievement. The zero perplexities are not something I can easily look into, and they are quite a blocker for someone like me just starting with the code. I can help with the reported PPL under NCE. Firstly, on large tasks NCE self-normalises: the partition function \sum_i exp(x_i) ends up close to 1. When that happens you can report approximate standard perplexity during training (dev/test sets are much smaller, so it's fine to report exact PPL there by normalising). It has been ten years since I really got into this, so I hope I haven't forgotten too much.
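That suggestion can be illustrated with a toy calculation (the scores are made up, chosen so the partition function Z is close to 1, mimicking a self-normalised NCE model):

```python
import math

# Toy unnormalized scores for one prediction; index 0 is the target word.
# The numbers are invented so that Z = sum(exp(s)) is close to 1.
logits = [-0.2, -2.0, -2.5, -3.0]

# Exact per-word perplexity: normalise over the vocabulary
# (affordable on the small dev/test sets).
z = sum(math.exp(s) for s in logits)
exact_ppl = math.exp(-(logits[0] - math.log(z)))

# Approximate perplexity: assume Z ~= 1 and treat the raw score
# directly as a log-probability (cheap enough to report during training).
approx_ppl = math.exp(-logits[0])

print(z, exact_ppl, approx_ppl)
```

The two numbers differ exactly by a factor of Z, so the closer the model is to self-normalised, the closer the cheap training-time estimate is to the true perplexity.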
README.md refers to the option `--nce`, for example `python main.py --cuda --noise-ratio 10 --norm-term 9 --nce --train`, but `example/utils.py` does not have `--nce` in `setup_parser()`.
Result:
It looks like `--nce` has been replaced by `--loss nce`, so README.md should be updated. It's also not clear that the rest of the code still works; there are two issues: first, the warning from rnn.py, and second, the perplexities are all zero.
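For context, a `--loss nce` style flag would typically be wired up roughly as in the sketch below. This is a hypothetical reconstruction, not the repository's actual `setup_parser()`; the real flag names and defaults in `example/utils.py` should be checked against the code.

```python
import argparse

def setup_parser():
    """Hypothetical parser exposing the loss choice that replaced --nce."""
    parser = argparse.ArgumentParser(description="NCE language model (sketch)")
    parser.add_argument("--loss", choices=["nce", "ce"], default="ce",
                        help="training criterion: NCE or full cross-entropy")
    parser.add_argument("--noise-ratio", type=int, default=10,
                        help="number of noise samples per positive sample")
    parser.add_argument("--norm-term", type=float, default=9.0,
                        help="fixed log normalisation constant ln Z")
    return parser

# The old `--nce` flag would now be spelled as a value of --loss:
args = setup_parser().parse_args(["--loss", "nce", "--noise-ratio", "10"])
print(args.loss)  # -> nce
```

Under this reading, the README command line would become `python main.py --cuda --noise-ratio 10 --norm-term 9 --loss nce --train`.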
Moving on to NCE itself: the reported train PPL is very low, while the valid PPL is very high.