What feature would you like to request?
Hi,
Apologies if this is already possible, but I'm new to fastembed and I was wondering whether it would be possible to load the model files for a particular embedding model from a local, user-defined directory instead of having to go through the HuggingFace download pipeline each time. Providing a local path directly currently throws an error, since the path does not appear in the list of supported models.
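To make the request concrete, here is a rough sketch of the kind of usage I have in mind. `TextEmbedding` is fastembed's existing class; the `local_files_path` argument is purely hypothetical, a name I made up to illustrate the feature:

```python
from fastembed import TextEmbedding

# Hypothetical usage: "local_files_path" is a made-up parameter name,
# just to illustrate the request. The idea is that the directory already
# contains the ONNX model, tokenizer, and config files, so fastembed
# would load them directly and never attempt a HuggingFace download.
model = TextEmbedding(
    model_name="BAAI/bge-small-en-v1.5",
    local_files_path="/models/bge-small-en-v1.5",
)

embeddings = list(model.embed(["hello world"]))
```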
Is there any additional information you would like to provide?
A useful scenario for this is when we pull the model weights from elsewhere and run fastembed in a Kubernetes pod. Since pods are ephemeral by nature, having them re-download the model weights on every restart whenever the files are not found in the cache seems tedious and unnecessary. I'd be happy to help with a PR for this if that would be useful.
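For context, this is roughly what we do today. As far as I understand the current behaviour, `cache_dir` only controls where the downloaded files land; if the files are missing, fastembed still goes through the HuggingFace download pipeline:

```python
from fastembed import TextEmbedding

# Current behaviour (as I understand it): cache_dir controls where the
# downloaded files are stored, but a fresh pod whose cache volume is
# empty still triggers a full download from HuggingFace on startup.
model = TextEmbedding(
    model_name="BAAI/bge-small-en-v1.5",
    cache_dir="/mnt/models",  # e.g. an emptyDir or PersistentVolume mount
)
```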
Apologies again if this is already possible and I somehow missed it! Thanks for the awesome project!
I can see that a duplicate issue, #321, has already been raised for this, but I would like to keep this one open, because that newer issue seems inactive and has had no maintainer response.
@joein, is this something the dev team would like help with? I would be happy to volunteer my time and effort.