fashion-clip-rs is an ONNX-ready version of the Fashion-Clip transformer model, written entirely in Rust with the help of pykeio/ort. It loads ONNX models (currently the Fashion-Clip PyTorch model from Hugging Face, converted to ONNX with the Optimum CLI), exposes a gRPC service API for creating either text or image embeddings with the Fashion-Clip and clip-ViT-B-32-multilingual-v1 models, runs inference for the given text or image, and returns the output vectors as a gRPC response.
fashion-clip-rs provides highly efficient, multilingual text and image embeddings, especially for fashion.
It can also be used as a standalone library in other Rust projects.
- Entirely in Rust: Re-written for optimal performance.
- gRPC with Tonic: Robust and efficient gRPC service.
- Multilingual Text Embedding: Utilizing ONNX-converted sentence-transformers/clip-ViT-B-32-multilingual-v1.
- Fashion-Focused Image Embedding: With ONNX-converted patrickjohncyh/fashion-clip.
- Cargo for Package Management: Ensuring reliable dependency management.
- Built-in Rust Testing: Leveraging Rust's testing capabilities.
- gRPC Performance Testing: With ghz.
- Docker Support: For containerized deployment.
- ONNX Runtime with pykeio/ort Crate: For model loading and inference.
- HF Tokenizers: For preprocessing in text embedding (see the sketch after this list).
- Standalone Library Support: Can be included in other Rust projects.
- Coverage with Tarpaulin: For detailed test coverage analysis.
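The HF Tokenizers preprocessing step can be exercised on its own. A minimal sketch using the tokenizers crate, assuming the tokenizer.json produced by the model export described below:

```rust
use tokenizers::Tokenizer;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // tokenizer.json is written next to model.onnx by the Optimum export.
    let tokenizer = Tokenizer::from_file("models/text/tokenizer.json")?;

    // Multilingual input ("green dress" in Turkish), tokenized with special tokens.
    let encoding = tokenizer.encode("yeşil elbise", true)?;
    println!("token ids: {:?}", encoding.get_ids());
    Ok(())
}
```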
Ensure you have the following installed:
- A recent version of Rust
- Just
- Docker
- ghz for gRPC performance testing
- Tarpaulin for coverage reporting
- Python 3.11 or newer to export the ONNX models using Hugging Face Optimum
- act (optional, for testing GitHub Actions locally)
- Install Rust and Cargo: https://www.rust-lang.org/tools/install
- Install Just
- Install Tarpaulin (optional, for coverage reports)
- Install act (optional, for testing GitHub Actions locally)
- Install ghz (optional, for performance testing)
- Clone the repository:
git clone https://github.com/yaman/fashion-clip-rs.git
- Change into the project directory:
cd fashion-clip-rs
- Build the project:
just build
To use the Fashion-Clip model and clip-ViT-B-32-multilingual-v1 with fashion-clip-rs, you need to convert them to ONNX format using the Hugging Face Optimum CLI.
- Install the latest Optimum CLI from source, together with transformers and sentence-transformers:
python -m pip install optimum[onnxruntime]@git+https://github.com/huggingface/optimum.git transformers sentence-transformers
- For clip-ViT-B-32-multilingual-v1:
optimum-cli export onnx -m sentence-transformers/clip-ViT-B-32-multilingual-v1 --task feature-extraction models/text
- For fashion-clip:
optimum-cli export onnx -m patrickjohncyh/fashion-clip --task feature-extraction models/image
Note 1: Accurate export of clip-ViT-B-32-multilingual-v1 depends on the latest version of Optimum, so do not skip the first step even if you already have Optimum installed.
Note 2: At the moment, clip-ViT-B-32-multilingual-v1 is used to generate text embeddings and fashion-clip to generate image embeddings.
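To sanity-check an export before wiring it into the service, the model can be loaded directly with the pykeio/ort crate. A minimal sketch, assuming the ort 2.x API (adjust to the version pinned in Cargo.toml):

```rust
use ort::session::Session;

fn main() -> ort::Result<()> {
    // Loading the file is enough to validate the exported ONNX graph.
    let session = Session::builder()?.commit_from_file("models/text/model.onnx")?;

    // Print the model's expected inputs as a quick smoke test.
    for input in &session.inputs {
        println!("input: {} ({:?})", input.name, input.input_type);
    }
    Ok(())
}
```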
- Build the project: just build
- Build the Docker image: just build-docker
- Run the service: just run
- Run the service in Docker: just run-docker
- Run unit tests: just unit-test (a sketch of such a test follows this list)
- Run integration tests: just integration-test
- Generate a coverage report: just coverage
- Run the gRPC performance test for text embedding: just perf-test-for-text
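A minimal sketch of the kind of test just unit-test runs, assuming the EmbedText API shown in the library example below and that encode returns a Result wrapping the embedding vector (the real signature may differ):

```rust
#[cfg(test)]
mod tests {
    use fashion_clip_rs::embed::EmbedText;

    #[test]
    fn text_embedding_has_expected_dimension() {
        let embed_text = EmbedText::new(
            "models/text/model.onnx",
            "sentence-transformers/clip-ViT-B-32-multilingual-v1",
        )
        .expect("text model should load");

        let embedding = embed_text
            .encode(&"a plain white t-shirt".to_string())
            .expect("encoding should succeed");

        // clip-ViT-B-32-multilingual-v1 projects into the 512-dimensional CLIP space.
        assert_eq!(embedding.len(), 512);
    }
}
```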
A GitHub Action pushes images to the yaman/fashion-clip-rs Docker Hub repository whenever a relevant file changes; linux/amd64 and linux/arm64 images are built. You can run the image directly via:
docker run -v ./models:/models -v ./config.toml:/config.toml yaman/fashion-clip-rs:latest
fashion-clip-rs can also be used as a library in Rust projects.
Note: the models must be available under the models/text and models/image directories. See the Model Export section.
Add library to your project:
cargo add fashion_clip_rs
Given a model exported to ONNX with the following file structure under models/text:
config.json
model.onnx
special_tokens_map.json
tokenizer_config.json
tokenizer.json
vocab.txt
```rust
use fashion_clip_rs::{config::Config, embed::EmbedText};

// Load the ONNX text model together with the tokenizer of the matching
// Hugging Face model.
let embed_text = EmbedText::new(
    "models/text/model.onnx",
    "sentence-transformers/clip-ViT-B-32-multilingual-v1",
)
.expect("failed to load the text embedding model");

let query_embedding = embed_text.encode(&"this is a sentence".to_string());
```
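Because both models are meant to embed into the shared CLIP vector space, text and image embeddings can be compared directly. A small illustrative helper (not part of the crate) for cosine similarity between two returned vectors:

```rust
// Illustrative helper, not part of fashion_clip_rs: cosine similarity
// between two embedding vectors of equal length.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}
```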
The gRPC service provides two methods.
The first encodes a text input using the clip-ViT-B-32-multilingual-v1 model.
Request:
```proto
message TextRequest {
  string text = 1;
}
```
Response:
```proto
message EncoderResponse {
  repeated float embedding = 3;
}
```
The second encodes an image input using the Fashion-Clip model.
Request:
```proto
message ImageRequest {
  bytes image = 2;
}
```
Response:
```proto
message EncoderResponse {
  repeated float embedding = 3;
}
```
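A minimal tonic client sketch for calling the service. The generated module, client, and method names below (pb, EncoderClient, encode_text) and the port are assumptions for illustration; check the project's .proto file and config.toml for the real ones:

```rust
use tonic::Request;

// Hypothetical generated names: verify the package, service, and method
// names against the project's .proto file.
pub mod pb {
    tonic::include_proto!("fashionclip");
}
use pb::{encoder_client::EncoderClient, TextRequest};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // The port is an assumption; check config.toml.
    let mut client = EncoderClient::connect("http://localhost:50051").await?;
    let response = client
        .encode_text(Request::new(TextRequest {
            text: "red floral dress".into(),
        }))
        .await?;
    println!("embedding length: {}", response.into_inner().embedding.len());
    Ok(())
}
```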
- Fork the repository
- Create a new branch:
git checkout -b feature-name
- Make your changes and commit them:
git commit -am 'Add some feature'
- Push to the branch:
git push origin feature-name
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE.md file for details.
For questions or feedback, please reach out to yaman.
This project was created by Yaman.