Hi! Thanks for your nice work.
I was wondering if I could use your embeddings on the SNLI dataset. Unfortunately, the results are a bit worse than I expected.
I think this is due to normalization: the norm of word vectors (e.g. GloVe embeddings) carries information that is useful for Natural Language Inference, but you L2-normalize the vectors when loading them.
Do you think it could make sense to use unnormalized word vectors instead?
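For context, here is a minimal sketch of what I mean (`load_glove` is a hypothetical loader I wrote for illustration, not code from your repo) — the `normalize` flag controls whether the per-word norm information is kept or discarded:

```python
import numpy as np

def load_glove(path, normalize=False):
    """Load GloVe-format text vectors (word followed by floats, one per line).

    With normalize=False the raw norms are preserved; with normalize=True
    each row is L2-normalized, which throws away the norm information.
    """
    words, rows = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            words.append(parts[0])
            rows.append(np.asarray(parts[1:], dtype=np.float32))
    mat = np.vstack(rows)
    if normalize:
        norms = np.linalg.norm(mat, axis=1, keepdims=True)
        mat = mat / np.maximum(norms, 1e-8)  # avoid division by zero
    return words, mat
```

So my question is basically whether skipping the normalization step (i.e. `normalize=False` in the sketch above) would be compatible with how your embeddings were trained.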
Thanks!