
Releases: huggingface/optimum-intel

v1.11.1: Patch release

06 Nov 23:50

Full Changelog: v1.11.0...v1.11.1

v1.11.0: MPT, TIMM models, VAE image processor

08 Sep 13:16

OpenVINO

Neural Compressor

Full Changelog: https://github.com/huggingface/optimum-intel/commits/v1.11.0

v1.10.1: Patch release

26 Jul 15:00

v1.10.0: Stable Diffusion XL pipelines

25 Jul 16:09

Stable Diffusion XL

Enable SD XL OpenVINO export and inference for text-to-image and image-to-image tasks by @echarlaix in #377

from optimum.intel import OVStableDiffusionXLPipeline

model_id = "stabilityai/stable-diffusion-xl-base-0.9"
# export=True converts the PyTorch checkpoint to the OpenVINO format on the fly
pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id, export=True)

prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt).images[0]
# Save the exported OpenVINO model so the conversion only happens once
pipeline.save_pretrained("openvino-sd-xl-base-0.9")

More examples are available in the documentation.

Full Changelog: v1.9.0...v1.10.0

v1.9.4: Patch release

20 Jul 22:45
  • Fix OVDataLoader for NNCF quantization aware training for transformers > v4.31.0 by @echarlaix in #376

Full Changelog: v1.9.3...v1.9.4

v1.9.3: Patch release

30 Jun 16:24

Full Changelog: v1.9.2...v1.9.3

v1.9.2: Patch release

26 Jun 22:31
  • Fix INC distillation to be compatible with neural-compressor v2.2.0 breaking changes by @echarlaix in #338

v1.9.1: Patch release

15 Jun 15:47
  • Fix inference for OpenVINO export for causal language models by @echarlaix in #351

v1.9.0: OpenVINO models improvements, TorchScript export, INC quantized SD pipeline

12 Jun 09:27

OpenVINO and NNCF

  • Ensure compatibility for OpenVINO v2023.0 by @jiwaszki in #265
  • Add Stable Diffusion quantization example by @AlexKoff88 in #294 #304 #326
  • Enable decoder quantized models export to leverage cache by @echarlaix in #303
  • Set height and width during inference for static Stable Diffusion models by @echarlaix in #308
  • Set batch size to 1 by default for Wav2Vec2 for NNCF v2.5.0 compatibility by @ljaljushkin in #312
  • Ensure compatibility for NNCF v2.5 by @ljaljushkin in #314
  • Fix OVModel for BLOOM architecture by @echarlaix in #340
  • Add SD OV model height and width attribute and fix export for torch>=v2.0.0 by @eaidova in #342

Intel Neural Compressor

  • Add TSModelForCausalLM to enable TorchScript export, loading and inference for causal lm models by @echarlaix in #283
  • Remove INC deprecated classes by @echarlaix in #293
  • Enable IPEX model inference for text generation task by @jiqing-feng in #227 #300
  • Add INCStableDiffusionPipeline to enable INC quantized Stable Diffusion model loading by @echarlaix in #305
  • Enable providing a quantization function instead of a calibration dataset during INC static PTQ by @PenghuiCheng in #309
  • Fix INCSeq2SeqTrainer evaluation step by @AbhishekSalian in #335
  • Fix INCSeq2SeqTrainer padding step by @echarlaix in #336

Full Changelog: https://github.com/huggingface/optimum-intel/commits/v1.9.0

v1.8.1: Patch release

01 Jun 17:33
  • Fix OpenVINO Trainer for transformers >= v4.29.0 by @echarlaix in #328

Full Changelog: v1.8.0...v1.8.1