AutoRound

Advanced Quantization Algorithm for LLMs


AutoRound is an advanced quantization algorithm for low-bit LLM inference, tailored for a wide range of models, and is the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs". The method adopts signed gradient descent to fine-tune the rounding values and min-max values of the weights in just 200 steps, competing impressively against recent methods without introducing any additional inference overhead and while keeping the tuning cost low. The image below presents an overview of AutoRound. Check out our paper on arXiv for more details, and visit low_bit_open_llm_leaderboard for more accuracy data across various models.
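
The core idea can be illustrated with a minimal, self-contained sketch (not the library's implementation): a learnable perturbation V in [-0.5, 0.5] is added to the weights before rounding and is updated with the sign of its gradient. A plain weight-reconstruction loss stands in here for the block-output loss used in practice, and the toy per-row min/max quantizer is purely illustrative.

import torch

def ste_round(x):
    ## straight-through estimator: round in the forward pass, identity gradient in backward
    return (torch.round(x) - x).detach() + x

torch.manual_seed(0)
W = torch.randn(64, 64)  ## toy full-precision weight matrix
bits = 4
qmax = 2 ** bits - 1

## per-row asymmetric min/max scale and zero point (illustrative only)
wmin = W.min(dim=1, keepdim=True).values
wmax = W.max(dim=1, keepdim=True).values
scale = (wmax - wmin) / qmax
zp = torch.round(-wmin / scale)

V = torch.zeros_like(W, requires_grad=True)  ## learnable rounding perturbation
lr, steps = 1.0 / 200, 200

for _ in range(steps):
    q = torch.clamp(ste_round(W / scale + zp + V), 0, qmax)
    W_q = (q - zp) * scale  ## fake-quantized weights
    ## AutoRound compares the outputs of the original and quantized block on
    ## calibration data; a weight MSE keeps this sketch self-contained.
    loss = torch.nn.functional.mse_loss(W_q, W)
    loss.backward()
    with torch.no_grad():
        V -= lr * V.grad.sign()  ## signed gradient descent step
        V.clamp_(-0.5, 0.5)      ## keep the perturbation in [-0.5, 0.5]
        V.grad.zero_()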

What's New

  • [2024/08] AutoRound format supports Intel Gaudi2 devices. For an example, please refer to Intel/Qwen2-7B-int4-inc.
  • [2024/08] AutoRound includes several experimental features, e.g., activation quantization, mx_fp data type, and fast tuning of norm/bias parameters.
  • [2024/07] Important change: the default value of nsamples has been changed from 512 to 128 to reduce memory usage, which may cause a slight accuracy drop in some scenarios.
  • [2024/06] The AutoRound format supports mixed bit-widths and group sizes for inference, resolving the significant performance drop issue with the asymmetric kernel.
  • [2024/05] AutoRound supports lm-head quantization, saving 0.7 GB for LLaMA3-8B at W4G128.

Prerequisites

  • Python 3.9 or higher

Installation

Build from Source

pip install -vvv --no-build-isolation -e .
or
pip install -r requirements.txt
python setup.py install

Install from PyPI

pip install auto-round

Model quantization

Gaudi2 / CPU / GPU

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-125m"
model = AutoModelForCausalLM.from_pretrained(model_name)  ## float model to be quantized
tokenizer = AutoTokenizer.from_pretrained(model_name)

from auto_round import AutoRound

bits, group_size, sym = 4, 128, False  ## 4-bit, group size 128, asymmetric quantization
autoround = AutoRound(model, tokenizer, bits=bits, group_size=group_size, sym=sym)
autoround.quantize()  ## run the tuning and quantize the weights
output_dir = "./tmp_autoround"
autoround.save_quantized(output_dir)  ## or save_quantized(output_dir, format="auto_gptq")

Detailed Hyperparameters
  • model: The PyTorch model to be quantized.

  • tokenizer: An optional tokenizer for processing input data. If none, a dataset must be provided.

  • bits (int): Number of bits for quantization (default is 4).

  • group_size (int): Size of the quantization group (default is 128).

  • sym (bool): Whether to use symmetric quantization (default is False).

  • enable_quanted_input (bool): Whether to use the output of the previous quantized block as the input for the current block for tuning (default is True).

  • enable_minmax_tuning (bool): Whether to enable weight min-max tuning (default is True).

  • iters (int): Number of tuning iterations (default is 200).

  • lr (float): The learning rate for the rounding values (default is None; it will be set to 1.0/iters automatically).

  • minmax_lr (float): The learning rate for min-max tuning (default is None; it will be set to lr automatically).

  • nsamples (int): Number of samples for tuning (default is 128).

  • seqlen (int): Sequence length of the tuning data (default is 2048).

  • batch_size (int): Batch size for training (default is 8).

  • scale_dtype (str): The data type of quantization scale to be used (default is "float16"), different kernels have different choices.

  • amp (bool): Whether to use automatic mixed precision (default is True).

  • nblocks (int): Number of blocks packed together and tuned jointly (default is 1).

  • gradient_accumulate_steps (int): Number of gradient accumulation steps (default is 1).

  • low_gpu_mem_usage (bool): Whether to save GPU memory at the cost of ~20% more tuning time (default is False).

  • dataset (Union[str, list, tuple, torch.utils.data.DataLoader]): The dataset name for tuning (default is "NeelNanda/pile-10k"). Local JSON files and combinations of datasets are supported, e.g. "./tmp.json,NeelNanda/pile-10k:train,mbpp:train+validation+test".

  • layer_config (dict): Configuration for weight quantization (default is an empty dictionary), mainly for mixed bits or mixed precision; see the sketch after this list.

  • device: The device to be used for tuning. The default is set to 'auto', allowing for automatic detection.
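
For illustration, the sketch below combines several of these options. The layer_config entry uses a layer name from facebook/opt-125m purely as an example, and the per-layer key ("bits") is an assumption; adapt both to your own model and kernel support.

from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "facebook/opt-125m"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

## hypothetical mixed-precision setting: keep one attention projection at 8 bits
layer_config = {
    "model.decoder.layers.0.self_attn.k_proj": {"bits": 8},
}

autoround = AutoRound(
    model,
    tokenizer,
    bits=4,
    group_size=128,
    sym=False,
    dataset="NeelNanda/pile-10k",  ## or e.g. "./tmp.json,NeelNanda/pile-10k:train" to mix in a local JSON file
    layer_config=layer_config,
    device="auto",
)
autoround.quantize()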

Tips

1 Consider increasing 'iters' (e.g. 1000) to achieve better results, albeit with increased tuning time.

2 Consider increasing 'nsamples' (e.g. 512) to achieve better results, albeit with more memory usage (~20 GB).

3 Setting 'minmax_lr' to 2.0/iters has been observed to occasionally yield improved results. A combined sketch is shown below.
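
For example, the suggestions above could be combined in a single call (a sketch, using the same toy model as in the quantization example):

from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "facebook/opt-125m"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

iters = 1000
autoround = AutoRound(
    model,
    tokenizer,
    bits=4,
    group_size=128,
    iters=iters,            ## tip 1: more tuning iterations
    nsamples=512,           ## tip 2: more calibration samples (~20 GB memory)
    minmax_lr=2.0 / iters,  ## tip 3: occasionally yields better results
)
autoround.quantize()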

Model inference

Please run the quantization code above first.

AutoRound format

cuda: git clone https://github.com/intel/auto-round.git && cd auto-round && pip install -vvv --no-build-isolation -e .

cpu:

  • option 1: pip install auto-round && pip install intel-extension-for-transformers
  • option 2: git clone https://github.com/intel/auto-round.git && cd auto-round && pip install -vvv --no-build-isolation -e .

hpu: a Docker image with the Gaudi Software Stack is recommended. More details can be found in the Gaudi Guide.

Gaudi2 / CPU / GPU

from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRoundConfig

device = "auto"  ## cpu, hpu, cuda
quantization_config = AutoRoundConfig(backend=device)
quantized_model_path = "./tmp_autoround"
## load the AutoRound-format checkpoint produced by the quantization step above
model = AutoModelForCausalLM.from_pretrained(quantized_model_path,
                                             device_map=device,
                                             quantization_config=quantization_config)
tokenizer = AutoTokenizer.from_pretrained(quantized_model_path)
text = "There is a girl who likes adventure,"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))

AutoGPTQ/AutoAWQ format

1 Please save the quantized model by modifying the code as follows: autoround.save_quantized(output_dir, format="auto_gptq") or autoround.save_quantized(output_dir, format="auto_awq").

2 Refer to their repositories to run inference with the model; a minimal loading sketch for the GPTQ format is shown below.
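
For instance, a checkpoint exported with format="auto_gptq" can typically be loaded through Transformers' built-in GPTQ support. This sketch assumes the optimum and auto-gptq packages are installed and is not part of this repository's API:

from transformers import AutoModelForCausalLM, AutoTokenizer

## directory assumed to contain a checkpoint saved with save_quantized(output_dir, format="auto_gptq")
quantized_model_path = "./tmp_autoround"
model = AutoModelForCausalLM.from_pretrained(quantized_model_path, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(quantized_model_path)

text = "There is a girl who likes adventure,"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=50)[0]))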

Support List

AutoRound supports basically all the major large language models.

Two main model export formats are provided: 'autoround' and 'autogptq'. The AutoRound format supports a wider range of devices, while the autogptq format is highly compatible and enjoys strong community support, but it may have accuracy issues with the asymmetric configuration.

Please note that an asterisk (*) indicates third-party quantized models, which may lack accuracy data and use a different recipe. We greatly appreciate their efforts and encourage more users to share their models, as we cannot release most of the models ourselves.

Model | Supported
meta-llama/Meta-Llama-3.1-70B-Instruct | recipe
meta-llama/Meta-Llama-3.1-8B-Instruct | model-kaitchup-autogptq-int4*, model-kaitchup-autogptq-sym-int4*, recipe
meta-llama/Meta-Llama-3.1-8B | model-kaitchup-autogptq-sym-int4*
Qwen/Qwen-VL | accuracy, recipe
Qwen/Qwen2-7B | model-autoround-int4
Qwen/Qwen2-57B-A14B-Instruct | model-autoround-int4
01-ai/Yi-1.5-9B | model-LnL-AI-autogptq-int4*
01-ai/Yi-1.5-9B-Chat | model-LnL-AI-autogptq-int4*
Intel/neural-chat-7b-v3-3 | model-autogptq-int4
Intel/neural-chat-7b-v3-1 | model-autogptq-int4
TinyLlama-1.1B-intermediate | model-LnL-AI-autogptq-int4*
mistralai/Mistral-7B-v0.1 | model-autogptq-lmhead-int4, model-autogptq-int4
google/gemma-2b | model-autogptq-int4
tiiuae/falcon-7b | model-autogptq-int4-G64
sapienzanlp/modello-italia-9b | model-fbaldassarri-autogptq-int4*
microsoft/phi-2 | model-autogptq-sym-int4
microsoft/Phi-3.5-mini-instruct | model-kaitchup-autogptq-sym-int4*
microsoft/Phi-3-vision-128k-instruct | recipe
mistralai/Mistral-7B-Instruct-v0.2 | accuracy, recipe, example
mistralai/Mixtral-8x7B-Instruct-v0.1 | accuracy, recipe, example
mistralai/Mixtral-8x7B-v0.1 | accuracy, recipe, example
meta-llama/Meta-Llama-3-8B-Instruct | accuracy, recipe, example
google/gemma-7b | accuracy, recipe, example
meta-llama/Llama-2-7b-chat-hf | accuracy, recipe, example
Qwen/Qwen1.5-7B-Chat | accuracy, sym recipe, asym recipe, example
baichuan-inc/Baichuan2-7B-Chat | accuracy, recipe, example
01-ai/Yi-6B-Chat | accuracy, recipe, example
facebook/opt-2.7b | accuracy, recipe, example
bigscience/bloom-3b | accuracy, recipe, example
EleutherAI/gpt-j-6b | accuracy, recipe, example

Reference

If you find AutoRound useful for your research, please cite our paper:

@article{cheng2023optimize,
  title={Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}