Releases: huggingface/text-embeddings-inference

v1.8.2

09 Sep 14:45
d7af1fc

🔧 Fixed Intel MKL Support

Since Text Embeddings Inference (TEI) v1.7.0, Intel MKL support had been broken due to changes in the candle dependency. Neither static-linking nor dynamic-linking worked correctly, which caused models using Intel MKL on CPU to fail with errors such as: "Intel oneMKL ERROR: Parameter 13 was incorrect on entry to SGEMM".

Starting with v1.8.2, this issue has been resolved by fixing how the intel-mkl-src dependency is defined. Both features, static-linking and dynamic-linking (the default), now work correctly, ensuring that Intel MKL libraries are properly linked.
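
For example, following the same tag pattern as the affected images listed below, the fixed CPU image can be used as usual (the model shown here is illustrative):

docker run -p 8080:80 ghcr.io/huggingface/text-embeddings-inference:cpu-1.8.2 \
    --model-id google/embeddinggemma-300m --dtype float32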

This issue occurred in the following scenarios:

  • Users installing text-embeddings-router via cargo with the --features mkl flag. Dynamic linking should have been used, but it did not work as intended (see the install sketch after this list).
  • Users relying on the CPU Dockerfile when running models without ONNX weights. In these cases, the Safetensors weights were used with candle as the backend (with MKL optimizations) instead of ort.
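
For reference, a minimal sketch of the affected cargo installation path, assuming a local checkout of the repository as described in the project README:

# clone the repository and build the router with the MKL feature enabled
git clone https://github.com/huggingface/text-embeddings-inference
cd text-embeddings-inference
cargo install --path router -F mkl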

The following table shows the affected versions and containers:

Version   Image
1.7.0     ghcr.io/huggingface/text-embeddings-inference:cpu-1.7.0
1.7.1     ghcr.io/huggingface/text-embeddings-inference:cpu-1.7.1
1.7.2     ghcr.io/huggingface/text-embeddings-inference:cpu-1.7.2
1.7.3     ghcr.io/huggingface/text-embeddings-inference:cpu-1.7.3
1.7.4     ghcr.io/huggingface/text-embeddings-inference:cpu-1.7.4
1.8.0     ghcr.io/huggingface/text-embeddings-inference:cpu-1.8.0
1.8.1     ghcr.io/huggingface/text-embeddings-inference:cpu-1.8.1

More details: PR #715

Full Changelog: v1.8.1...v1.8.2

v1.8.1

04 Sep 15:22
0adb000

Today, Google releases EmbeddingGemma, a state-of-the-art multilingual embedding model perfect for on-device use cases. Designed for speed and efficiency, the model features a compact size of 308M parameters and a 2K context window, unlocking new possibilities for mobile RAG pipelines, agents, and more. EmbeddingGemma is trained to support over 100 languages and is the highest-ranking text-only multilingual embedding model under 500M on the Massive Text Embedding Benchmark (MTEB) at the time of writing.

  • CPU:
docker run -p 8080:80 ghcr.io/huggingface/text-embeddings-inference:cpu-1.8.1 \
    --model-id google/embeddinggemma-300m --dtype float32
  • CPU with ONNX Runtime:
docker run -p 8080:80 ghcr.io/huggingface/text-embeddings-inference:cpu-1.8.1 \
    --model-id onnx-community/embeddinggemma-300m-ONNX --dtype float32 --pooling mean
  • NVIDIA CUDA:
docker run --gpus all --shm-size 1g -p 8080:80 ghcr.io/huggingface/text-embeddings-inference:cuda-1.8.1 \
    --model-id google/embeddinggemma-300m --dtype float32
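
Once a container is up, embeddings can be requested over HTTP, for example against the /embed route:

curl 127.0.0.1:8080/embed \
    -X POST \
    -d '{"inputs": "What is Deep Learning?"}' \
    -H 'Content-Type: application/json'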

Notable Changes

  • Add support for Gemma3 (text-only) architecture
  • Update Intel Synapse to 1.21.3 and IPEX to 2.8
  • Extend ONNX Runtime support in OrtRuntime
    • Support position_ids and past_key_values as inputs
    • Handle padding_side and pad_token_id

What's Changed

Full Changelog: v1.8.0...v1.8.1

v1.8.0

05 Aug 08:31
2bff275

Notable Changes

  • Qwen3 support for the 0.6B, 4B, and 8B variants on CPU and MPS, plus FlashQwen3 on CUDA and Intel HPUs
  • NomicBert MoE support
  • JinaAI Re-Rankers V1 support (see the /rerank example after this list)
  • Matryoshka Representation Learning (MRL) (see the /embed sketch after this list)
  • Dense layer module support (after pooling)
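
As an illustration, MRL allows clients to request truncated embeddings at inference time. A minimal sketch, assuming a dimensions request field shipped alongside MRL support (the field name is an assumption, not confirmed by these notes):

# 'dimensions' is assumed from the MRL feature; verify the exact field name in the API docs
curl 127.0.0.1:8080/embed \
    -X POST \
    -d '{"inputs": "What is Deep Learning?", "dimensions": 256}' \
    -H 'Content-Type: application/json'

Re-rankers, such as the JinaAI Re-Rankers V1, are served through the /rerank route:

curl 127.0.0.1:8080/rerank \
    -X POST \
    -d '{"query": "What is Deep Learning?", "texts": ["Deep Learning is not...", "Deep learning is..."]}' \
    -H 'Content-Type: application/json'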

Note

Some of the aforementioned changes were already released in the patch versions on top of v1.7.0, while Matryoshka Representation Learning (MRL) and Dense layer module support land for the first time in v1.8.0.

What's Changed

New Contributors

Full Changelog: v1.7.0...v1.8.0

v1.7.4

07 Jul 12:33
6e900af

Notable Changes

Qwen3 was not working correctly on CPU / MPS when sending batched requests in FP16 precision: the FP32 minimum value used for masking was downcast to FP16 (it is now set to the FP16 minimum value instead), which produced null values, and a to_dtype call was missing on the attention_bias when working with batches.

What's Changed

Full Changelog: v1.7.3...v1.7.4

v1.7.3

30 Jun 10:54
fb80177

Notable Changes

Qwen3 support added for Intel HPU, and fixed on CPU / Metal / CUDA.

What's Changed

New Contributors

Full Changelog: v1.7.2...v1.7.3

v1.7.2

16 Jun 06:44
a69cc2e

Notable Changes

  • Added support for Qwen3 embeddings

What's Changed

New Contributors

Full Changelog: v1.7.1...v1.7.2

v1.7.1

03 Jun 13:38
006e16b

What's Changed

New Contributors

Full Changelog: v1.7.0...v1.7.1

v1.7.0

08 Apr 11:54
72dac20

Notable Changes

  • Major dependency upgrades (candle 0.5 -> 0.8 and related crates)
  • Added ModernBert support by @kozistr!

What's Changed

New Contributors

Full Changelog: v1.6.1...v1.7.0

v1.6.1

28 Mar 08:47
875239e

What's Changed

New Contributors

Full Changelog: v1.6.0...v1.6.1

v1.6.0

13 Dec 15:52
57d8fc8

What's Changed

Full Changelog: v1.5.1...v1.6.0