Some weights of the model checkpoint at /home/rnd/wmj/instructor-large/instructor-embedding/output/checkpoint-6500/ were not used when initializing T5EncoderModel: ['2.linear.weight'] - This IS expected if you are initializing T5EncoderModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing T5EncoderModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). max_seq_length 512
#117 · Open · EricPaul03 opened this issue on May 11, 2024 · 0 comments
Hello, first of all, thank you very much for your excellent open-source code. However, I ran into some problems while using it. I fine-tuned the provided Instructor-large model weights on our data, but the resulting output checkpoint folder is not a loadable model: it is missing several files, such as the 1_Pooling and 2_Dense folders and config.json. So I copied those files over from the downloaded Instructor-large folder, but I still get the error below. How should I use the fine-tuned checkpoint?
Some weights of the model checkpoint at /home/rnd/wmj/instructor-large/instructor-embedding/output/checkpoint-6500/ were not used when initializing T5EncoderModel: ['2.linear.weight']
This IS expected if you are initializing T5EncoderModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
This IS NOT expected if you are initializing T5EncoderModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
max_seq_length 512
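For context, a minimal sketch of how such a checkpoint would typically be loaded once the directory contains the full sentence-transformers layout (modules.json, config.json, 1_Pooling/, 2_Dense/ placed alongside the encoder weights). This is not a confirmed fix for the warning above, and the instruction/text pair is a hypothetical placeholder, not taken from the original report:

from InstructorEmbedding import INSTRUCTOR

# Sketch only: assumes the checkpoint folder has been completed with the
# module files (modules.json, config.json, 1_Pooling/, 2_Dense/) copied from
# the base instructor-large directory so it matches the layout INSTRUCTOR expects.
ckpt_dir = "/home/rnd/wmj/instructor-large/instructor-embedding/output/checkpoint-6500/"
model = INSTRUCTOR(ckpt_dir)

# Hypothetical instruction/text pair, purely for illustration.
embeddings = model.encode([["Represent the document for retrieval:", "example sentence"]])
print(embeddings.shape)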