In Sentence Transformers, we have the option of using vision transformer models, and it would be useful to be able to use these with SetFit. While I appreciate from #107 that SetFit is designed to work with text, there's no reason the technique shouldn't work with vision embedding models. The only thing that stops one from using the library for this task is that the ModelCardCallback runs various functions at init time that assume the input dataset is text (e.g. sorting by length, word counts, etc.). By commenting out the ModelCardCallback I managed to train a SetFit model based on clip-ViT-L-14.
It would be great if the ModelCardCallback setup could be updated to support other modalities, or, if that is not on the roadmap, to at least add an option to exclude that callback completely.
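To illustrate the kind of opt-out being requested, here is a minimal, hypothetical sketch of filtering a callback list by class name before it is handed to a trainer. The `exclude_callbacks` helper and the dummy callback classes are assumptions for illustration only; SetFit's actual Trainer internals and callback wiring may differ.

```python
def exclude_callbacks(callbacks, excluded_names=None):
    """Return only the callbacks whose class name is not in excluded_names.

    A hypothetical helper: filtering by class name lets a user drop the
    text-specific ModelCardCallback without touching library source.
    """
    if excluded_names is None:
        excluded_names = {"ModelCardCallback"}
    return [cb for cb in callbacks if type(cb).__name__ not in excluded_names]


# Dummy stand-ins for real callback classes, purely for demonstration.
class ModelCardCallback:
    pass


class LoggingCallback:
    pass


callbacks = [ModelCardCallback(), LoggingCallback()]
kept = exclude_callbacks(callbacks)
# Only LoggingCallback survives; the text-specific callback is dropped.
```

In the library itself this could take the form of a `Trainer` argument (e.g. a flag or an exclusion list) applied before the callbacks are initialized, since the problematic functions run at init time.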