
Fixed switched token_type_ids and attention_mask #412

Draft
wants to merge 1 commit into base: main

Conversation

rolshoven

I was having the same error mentioned in #338: I could not export my model with model_base stsb-xlm-roberta-base. After some debugging, I noticed that the attention_mask and token_type_ids were switched in the forward function (line 50) in setfit/exporters/onnx.py. The error then occurs because we try to look up both the token_type_id embedding with index 0 and the one with index 1, but there is only one embedding in the matrix. I believe this did not happen with other model bases because they have more than one token_type embedding.

However, I must confess that I have not yet been able to test this fix with other models that previously worked; we should definitely do that before merging this code. To make the code safer, I also made use of keyword arguments when calling self.model_body instead of positional arguments. In my case, I was able to export the model after this small fix.
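For illustration, here is a minimal toy sketch (a hypothetical `forward` with the same parameter names, not the actual setfit code) of why keyword arguments are safer than positional ones when the argument order is easy to get wrong:

```python
# Toy forward with the same parameter order as a typical transformer model.
def forward(input_ids, attention_mask, token_type_ids):
    # Return the inputs so we can see which tensor landed in which slot.
    return {"input_ids": input_ids,
            "attention_mask": attention_mask,
            "token_type_ids": token_type_ids}

# Tokenizer-style dict whose key order does NOT match the signature:
inputs = {"input_ids": [101, 2023, 102],
          "token_type_ids": [0, 0, 0],
          "attention_mask": [1, 1, 1]}

# Positional call in dict-insertion order silently swaps the last two tensors:
swapped = forward(*inputs.values())
# Keyword call binds each tensor to the right parameter regardless of order:
correct = forward(**inputs)
```

With the positional call, the token_type_ids end up in the attention_mask slot, which is exactly the kind of silent swap described above.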

@rolshoven rolshoven marked this pull request as draft August 28, 2023 07:46
@rolshoven
Author

Just noticed that the exported ONNX model only works if we switch the attention_mask and token_type_ids of the generated dictionary after tokenization, which is probably caused by my change. I will investigate further and report back soon.

@rolshoven
Author

I reverted my previous changes and implemented the fix directly in the export_onnx_setfit_model function of src/setfit/exporters/onnx.py. The problem was that the tokenizer dictionary keys were not ordered. I implemented a generic solution that uses the signature function from the inspect module to make as few assumptions about the parameters as possible while ensuring the correct order of the input values.
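A minimal sketch of the idea (a toy `forward` standing in for the model, not the exact implementation in this PR): read the parameter order from the forward signature with `inspect.signature`, then reorder the tokenizer dict to match before any positional call.

```python
import inspect

# Toy stand-in for the model's forward; the real code inspects the
# actual model_body's forward signature.
def forward(input_ids, attention_mask, token_type_ids):
    return (input_ids, attention_mask, token_type_ids)

# Tokenizer output whose key order does not match the forward signature:
tokens = {"input_ids": [101, 102],
          "token_type_ids": [0, 0],
          "attention_mask": [1, 1]}

# Read the parameter names, in order, from the forward signature...
param_order = list(inspect.signature(forward).parameters)
# ...and reorder the tokenizer outputs to match it, e.g. for the
# positional input tuple handed to torch.onnx.export.
ordered_inputs = tuple(tokens[name] for name in param_order)
```

This makes no assumption about how the tokenizer orders its keys, only that every parameter name appears in the tokenizer output.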

@rolshoven rolshoven marked this pull request as ready for review September 18, 2023 06:41
@rolshoven
Author

rolshoven commented Sep 18, 2023

I forgot to include the import, but now the tests should work.

Edit: okay, there are still errors, I'll analyse and address them soon!

@rolshoven rolshoven marked this pull request as draft September 18, 2023 08:29
@andreeapricopi

andreeapricopi commented Sep 25, 2023

Tested with distiluse-base-multilingual-cased-v2, which leads to ValueError: tuple.index(x): x not in tuple at setfit/exporters/onnx.py, line 94, in <lambda>.

Code snippet for reproducibility:

from sentence_transformers import SentenceTransformer
from setfit import SetFitHead, SetFitModel
from setfit.exporters.onnx import export_onnx

model_id = "sentence-transformers/distiluse-base-multilingual-cased-v2"

model_body = SentenceTransformer(model_id)
model_head = SetFitHead(in_features=model_body.get_sentence_embedding_dimension(), out_features=4)
model = SetFitModel(model_body=model_body, model_head=model_head)

export_onnx(model.model_body,
            model.model_head,
            opset=12,
            output_path="dummy_path")


@tomaarsen
Member

Perhaps we can adopt the approach from #435 for ONNX, rather than sticking with the current export_onnx. It seems more consistent.
