ValueError occurred with representation_model #1548

Open

JINHXu opened this issue Sep 26, 2023 · 2 comments

Comments

JINHXu commented Sep 26, 2023

Hello,

The following error occurred when I attempted to obtain sentence embeddings for a list of sentences with combine_strategy=None:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[12], line 2
      1 # features = model.encode_sentences(batch, combine_strategy="mean")
----> 2 features = model.encode_sentences(batch, combine_strategy=None)

File ~/.local/lib/python3.9/site-packages/simpletransformers/language_representation/representation_model.py:219, in RepresentationModel.encode_sentences(self, text_list, combine_strategy, batch_size)
    214             token_vectors = self.model(
    215                 input_ids=encoded["input_ids"].to(self.device),
    216                 attention_mask=encoded["attention_mask"].to(self.device),
    217             )
    218     embeddings.append(embedding_func(token_vectors).cpu().detach().numpy())
--> 219 embeddings = np.concatenate(embeddings, axis=0)
    221 return embeddings

File <__array_function__ internals>:180, in concatenate(*args, **kwargs)

ValueError: all the input array dimensions for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 512 and the array at index 823 has size 150

This error does not occur with combine_strategy="mean". Could this be a bug?

(I am setting combine_strategy=None in order to obtain the [CLS] embedding.)
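
For completeness, here is a minimal workaround sketch that extracts the [CLS] embedding directly with Hugging Face transformers, sidestepping encode_sentences. The checkpoint name bert-base-uncased is an assumption; substitute whatever model RepresentationModel actually wraps:

import torch
from transformers import AutoModel, AutoTokenizer

# Assumed checkpoint; replace with the model actually in use.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

sentences = ["First example sentence.", "A second, somewhat longer example sentence."]

# padding=True pads every sentence in the batch to a common length,
# so the output tensor has one uniform shape.
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**encoded)

# The [CLS] token always sits at position 0, so its embedding has a
# fixed size regardless of sentence length: (batch_size, hidden_size).
cls_embeddings = outputs.last_hidden_state[:, 0, :].numpy()
print(cls_embeddings.shape)  # e.g. (2, 768)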

Thank you,
Xu

@GathsahDai

Hello, I've encountered this issue as well. May I ask if you have found a solution?

@GathsahDai

I believe I have figured out the reason for the dimension mismatch. When combine_strategy=None, what we get back are the embeddings for every token, and each batch is padded only to the length of its own longest sentence. When the token counts differ between batches, the per-batch arrays disagree along the sequence dimension and np.concatenate fails.
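
To illustrate with a toy example (my own, not code from simpletransformers): batches padded to different lengths produce arrays that np.concatenate cannot stack, while padding every batch to one fixed length restores a uniform shape.

import numpy as np

hidden_size = 768
batch_a = np.zeros((16, 512, hidden_size))  # batch padded to 512 tokens
batch_b = np.zeros((16, 150, hidden_size))  # batch padded to 150 tokens

try:
    np.concatenate([batch_a, batch_b], axis=0)
except ValueError as err:
    print(err)  # sizes along dimension 1 (512 vs. 150) do not match

# Padding every batch to the same fixed length, e.g. via
# tokenizer(..., padding="max_length", max_length=512), gives each
# batch the shape (batch_size, 512, hidden_size), so stacking works.
batch_b_fixed = np.zeros((16, 512, hidden_size))
combined = np.concatenate([batch_a, batch_b_fixed], axis=0)
print(combined.shape)  # (32, 512, hidden_size)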
