All benchmarks are wrong, some will cost you less than others.
Optimum-Benchmark is a unified multi-backend & multi-device utility for benchmarking the Transformers, Diffusers, PEFT, TIMM and Optimum libraries, along with all their supported optimizations & quantization schemes. It covers inference and training, in distributed and non-distributed settings, in the most correct, efficient and scalable way possible.
News

- LlamaCpp backend for benchmarking `llama-cpp-python` bindings with all its supported devices.
- PyPI package is now available for installation: `pip install optimum-benchmark`, check it out!
- Model loading latency/memory/energy tracking for all backends in the inference scenario.
- `numactl` support for the Process and Torchrun launchers, to control the NUMA nodes on which the benchmark runs.
- Four minimal Docker images (`cpu`, `cuda`, `rocm`, `cuda-ort`) published as packages for testing, benchmarking and reproducibility.
- vLLM backend for benchmarking vLLM's inference engine.
- Hosting the codebase of the LLM-Perf Leaderboard.
- Py-TXI backend for benchmarking Py-TXI.
- Python API for running isolated and distributed benchmarks from Python scripts.
- Simpler CLI interface for running benchmarks (runs and sweeps) using Hydra.
Motivations

- HuggingFace hardware partners wanting to know how their hardware performs compared to other hardware on the same models.
- HuggingFace ecosystem users wanting to know how their chosen model performs in terms of latency, throughput, memory usage, energy consumption, etc. compared to other models.
- Benchmarking hardware- and backend-specific optimizations and quantization schemes that can be applied to models to improve their computational/memory/energy efficiency.
Note
Optimum-Benchmark is a work in progress and is not yet ready for production use, but we're working hard to make it so. Please keep an eye on the project and help us improve it and make it more useful for the community. We're looking forward to your feedback and contributions.
Optimum-Benchmark is continuously and intensively tested on a variety of devices, backends, scenarios and launchers to ensure its stability with over 300 tests running on every PR (you can request more tests if you want to).
Installation

You can install the latest released version of optimum-benchmark on PyPI:

```bash
pip install optimum-benchmark
```

or install the latest version from the main branch on GitHub:

```bash
pip install git+https://github.com/huggingface/optimum-benchmark.git
```

or, if you want to tinker with the code, clone the repository and install it in editable mode:

```bash
git clone https://github.com/huggingface/optimum-benchmark.git
cd optimum-benchmark
pip install -e .
```
Advanced install options
Depending on the backends you want to use, you can install optimum-benchmark with the following extras:

- PyTorch (default): `pip install optimum-benchmark`
- OpenVINO: `pip install optimum-benchmark[openvino]`
- Torch-ORT: `pip install optimum-benchmark[torch-ort]`
- OnnxRuntime: `pip install optimum-benchmark[onnxruntime]`
- TensorRT-LLM: `pip install optimum-benchmark[tensorrt-llm]`
- OnnxRuntime-GPU: `pip install optimum-benchmark[onnxruntime-gpu]`
- Neural Compressor: `pip install optimum-benchmark[neural-compressor]`
- Py-TXI: `pip install optimum-benchmark[py-txi]`
- IPEX: `pip install optimum-benchmark[ipex]`
- vLLM: `pip install optimum-benchmark[vllm]`
We also support the following extra dependencies:
- autoawq
- auto-gptq
- sentence-transformers
- bitsandbytes
- codecarbon
- flash-attn
- deepspeed
- diffusers
- timm
- peft
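Extras can be combined in a single install command using standard pip extras syntax, e.g. `pip install optimum-benchmark[openvino,onnxruntime]` (the specific combination here is only an illustration).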
Running benchmarks from the Python API

You can run benchmarks from the Python API, using the `Benchmark` class and its `launch` method. It takes a `BenchmarkConfig` object as input, runs the benchmark in an isolated process and returns a `BenchmarkReport` object containing the benchmark results.

Here's an example of how to run an isolated benchmark using the `pytorch` backend, the `torchrun` launcher and the `inference` scenario, with latency and memory tracking enabled.
```python
from optimum_benchmark import Benchmark, BenchmarkConfig, TorchrunConfig, InferenceConfig, PyTorchConfig
from optimum_benchmark.logging_utils import setup_logging

setup_logging(level="INFO", handlers=["console"])

if __name__ == "__main__":
    launcher_config = TorchrunConfig(nproc_per_node=2)
    scenario_config = InferenceConfig(latency=True, memory=True)
    backend_config = PyTorchConfig(model="gpt2", device="cuda", device_ids="0,1", no_weights=True)
    benchmark_config = BenchmarkConfig(
        name="pytorch_gpt2",
        scenario=scenario_config,
        launcher=launcher_config,
        backend=backend_config,
    )
    benchmark_report = Benchmark.launch(benchmark_config)

    # log the benchmark report in the terminal
    benchmark_report.log()  # or print(benchmark_report)

    # convert artifacts to a dictionary or dataframe
    benchmark_config.to_dict()  # or benchmark_config.to_dataframe()

    # save artifacts to disk as json or csv files
    benchmark_report.save_csv("benchmark_report.csv")  # or benchmark_report.save_json("benchmark_report.json")

    # push artifacts to the hub
    benchmark_config.push_to_hub("IlyasMoutawwakil/pytorch_gpt2")  # or benchmark_report.push_to_hub("IlyasMoutawwakil/pytorch_gpt2")

    # or merge them into a single artifact
    benchmark = Benchmark(config=benchmark_config, report=benchmark_report)
    benchmark.save_json("benchmark.json")  # or benchmark.save_csv("benchmark.csv")
    benchmark.push_to_hub("IlyasMoutawwakil/pytorch_gpt2")

    # load artifacts from the hub
    benchmark = Benchmark.from_hub("IlyasMoutawwakil/pytorch_gpt2")

    # or load them from disk
    benchmark = Benchmark.load_json("benchmark.json")  # or Benchmark.load_csv("benchmark.csv")
```
If you're on VSCode, you can hover over the configuration classes to see the available parameters and their descriptions. You can also see the available parameters in the Features section below.
Running benchmarks from the CLI

You can also run a benchmark using the command line by specifying the configuration directory and the configuration name. Both arguments are mandatory for `hydra`. `--config-dir` is the directory where the configuration files are stored and `--config-name` is the name of the configuration file without its `.yaml` extension.

```bash
optimum-benchmark --config-dir examples/ --config-name pytorch_bert
```

This will run the benchmark using the configuration in `examples/pytorch_bert.yaml` and store the results in `runs/pytorch_bert`.
The resulting files are:

- `benchmark_config.json`, which contains the configuration used for the benchmark, including the backend, launcher, scenario and the environment in which the benchmark was run.
- `benchmark_report.json`, which contains a full report of the benchmark's results, like latency measurements, memory usage, energy consumption, etc.
- `benchmark.json`, which contains both the report and the configuration in a single file.
- `benchmark.log`, which contains the logs of the benchmark run.
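These artifacts can be reloaded with the Python API shown above. Here's a minimal sketch, assuming the files are written directly under the `runs/pytorch_bert` directory created by the command above, and that a `Benchmark` object exposes its `config` and `report` as attributes (as the `Benchmark(config=..., report=...)` constructor suggests):

```python
from optimum_benchmark import Benchmark

# reload the merged artifact produced by the CLI run
benchmark = Benchmark.load_json("runs/pytorch_bert/benchmark.json")

print(benchmark.config)  # the configuration the benchmark was run with
print(benchmark.report)  # the measured latency/memory/energy results
```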
Advanced CLI options
It's easy to override the default behavior of an existing configuration file from the command line. For example, to run the same benchmark on a different model and device, you can use the following command:

```bash
optimum-benchmark --config-dir examples/ --config-name pytorch_bert backend.model=gpt2 backend.device=cuda
```

You can easily run configuration sweeps using the `--multirun` option. By default, configurations are executed serially, but other kinds of execution are supported with Hydra's launcher plugins (e.g. `hydra/launcher=joblib`).

```bash
optimum-benchmark --config-dir examples --config-name pytorch_bert -m backend.device=cpu,cuda
```

You can create custom and more complex configuration files following these examples. They are heavily commented to help you understand the structure of the configuration files.
`optimum-benchmark` allows you to run benchmarks with minimal configuration. A benchmark is defined by three main components:

- the launcher to use (e.g. `process`)
- the scenario to follow (e.g. `training`)
- the backend to run on (e.g. `onnxruntime`)
Launchers

- Process launcher (`launcher=process`): launches the benchmark in an isolated process.
- Torchrun launcher (`launcher=torchrun`): launches the benchmark in multiple processes using `torch.distributed`.
- Inline launcher (`launcher=inline`): not recommended for benchmarking, only for debugging purposes.
General launcher features

- GPU device isolation assertion (`launcher.device_isolation=true`). This feature makes sure no processes other than the benchmark are running on the targeted GPU devices (NVIDIA & AMD). Especially useful when running benchmarks on shared resources.
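For example, here is a minimal sketch of the Python example above switched to the process launcher with device isolation enabled. It assumes `ProcessConfig` is exported from `optimum_benchmark` alongside the other configuration classes, and that it accepts a `device_isolation` flag mirroring the `launcher.device_isolation` CLI key:

```python
from optimum_benchmark import Benchmark, BenchmarkConfig, InferenceConfig, ProcessConfig, PyTorchConfig

if __name__ == "__main__":
    # assumed: ProcessConfig mirrors launcher=process, and its device_isolation
    # flag corresponds to the launcher.device_isolation=true CLI key
    launcher_config = ProcessConfig(device_isolation=True)
    scenario_config = InferenceConfig(latency=True, memory=True)
    backend_config = PyTorchConfig(model="gpt2", device="cuda", device_ids="0")
    benchmark_config = BenchmarkConfig(
        name="pytorch_gpt2_process",
        launcher=launcher_config,
        scenario=scenario_config,
        backend=backend_config,
    )
    benchmark_report = Benchmark.launch(benchmark_config)
    benchmark_report.log()
```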
Scenarios

- Training scenario (`scenario=training`), which benchmarks the model using the trainer class with a randomly generated dataset.
- Inference scenario (`scenario=inference`), which benchmarks the model's inference method (forward/call/generate) with randomly generated inputs.
Inference scenario features

- Memory tracking (`scenario.memory=true`)
- Energy and efficiency tracking (`scenario.energy=true`)
- Latency and throughput tracking (`scenario.latency=true`)
- Warm-up runs before inference (`scenario.warmup_runs=20`)
- Input shapes control (e.g. `scenario.input_shapes.sequence_length=128`)
- Forward, call and generate kwargs (e.g. `scenario.generate_kwargs.max_new_tokens=100` for an LLM, or `scenario.call_kwargs.num_images_per_prompt=4` for a diffusion model)

See InferenceConfig for more information.
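As a rough sketch, the same knobs can be set from the Python API; the keyword arguments below are assumed to mirror the `scenario.*` CLI keys listed above:

```python
from optimum_benchmark import InferenceConfig

# assumed: the keyword arguments mirror the scenario.* CLI keys above
scenario_config = InferenceConfig(
    memory=True,                              # scenario.memory=true
    energy=True,                              # scenario.energy=true
    latency=True,                             # scenario.latency=true
    warmup_runs=20,                           # scenario.warmup_runs=20
    input_shapes={"sequence_length": 128},    # scenario.input_shapes.sequence_length=128
    generate_kwargs={"max_new_tokens": 100},  # scenario.generate_kwargs.max_new_tokens=100
)
```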
Training scenario features

- Memory tracking (`scenario.memory=true`)
- Energy and efficiency tracking (`scenario.energy=true`)
- Latency and throughput tracking (`scenario.latency=true`)
- Warm-up steps before training (`scenario.warmup_steps=20`)
- Dataset shapes control (e.g. `scenario.dataset_shapes.sequence_length=128`)
- Training arguments control (e.g. `scenario.training_args.per_device_train_batch_size=4`)

See TrainingConfig for more information.
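A similar sketch for the training scenario, assuming `TrainingConfig` is exported from `optimum_benchmark` like `InferenceConfig` and that its keyword arguments mirror the `scenario.*` CLI keys above:

```python
from optimum_benchmark import TrainingConfig

# assumed: the keyword arguments mirror the scenario.* CLI keys above
scenario_config = TrainingConfig(
    memory=True,                                       # scenario.memory=true
    energy=True,                                       # scenario.energy=true
    latency=True,                                      # scenario.latency=true
    warmup_steps=20,                                   # scenario.warmup_steps=20
    dataset_shapes={"sequence_length": 128},           # scenario.dataset_shapes.sequence_length=128
    training_args={"per_device_train_batch_size": 4},  # scenario.training_args.per_device_train_batch_size=4
)
```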
Backends & Devices

- Pytorch backend for CPU (`backend=pytorch`, `backend.device=cpu`)
- Pytorch backend for CUDA (`backend=pytorch`, `backend.device=cuda`, `backend.device_ids=0,1`)
- Pytorch backend for Habana Gaudi Processor (`backend=pytorch`, `backend.device=hpu`, `backend.device_ids=0,1`)
- OnnxRuntime backend for CPUExecutionProvider (`backend=onnxruntime`, `backend.device=cpu`)
- OnnxRuntime backend for CUDAExecutionProvider (`backend=onnxruntime`, `backend.device=cuda`)
- OnnxRuntime backend for ROCMExecutionProvider (`backend=onnxruntime`, `backend.device=cuda`, `backend.provider=ROCMExecutionProvider`)
- OnnxRuntime backend for TensorrtExecutionProvider (`backend=onnxruntime`, `backend.device=cuda`, `backend.provider=TensorrtExecutionProvider`)
- Py-TXI backend for CPU and GPU (`backend=py-txi`, `backend.device=cpu` or `backend.device=cuda`)
- Neural Compressor backend for CPU (`backend=neural-compressor`, `backend.device=cpu`)
- TensorRT-LLM backend for CUDA (`backend=tensorrt-llm`, `backend.device=cuda`)
- Torch-ORT backend for CUDA (`backend=torch-ort`, `backend.device=cuda`)
- OpenVINO backend for CPU (`backend=openvino`, `backend.device=cpu`)
- OpenVINO backend for GPU (`backend=openvino`, `backend.device=gpu`)
- vLLM backend for CUDA (`backend=vllm`, `backend.device=cuda`)
- vLLM backend for ROCM (`backend=vllm`, `backend.device=rocm`)
- vLLM backend for CPU (`backend=vllm`, `backend.device=cpu`)
- IPEX backend for CPU (`backend=ipex`, `backend.device=cpu`)
- IPEX backend for XPU (`backend=ipex`, `backend.device=xpu`)
General backend features

- Device selection (`backend.device=cuda`); can be `cpu`, `cuda`, `mps`, etc.
- Device ids selection (`backend.device_ids=0,1`); can be a list of device ids to run the benchmark on multiple devices.
- Model selection (`backend.model=gpt2`); can be a model id from the HuggingFace model hub or an absolute path to a model folder.
- "No weights" feature, to benchmark models without downloading their weights, using randomly initialized weights instead (`backend.no_weights=true`).
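Combining a few of these with the PyTorch backend config from the Python example above:

```python
from optimum_benchmark import PyTorchConfig

# benchmark a model without downloading its weights, on two CUDA devices
backend_config = PyTorchConfig(
    model="gpt2",      # model id from the Hub, or an absolute path to a model folder
    device="cuda",     # cpu, cuda, mps, ...
    device_ids="0,1",  # devices to run the benchmark on
    no_weights=True,   # use randomly initialized weights instead of downloading them
)
```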
Backend specific features
For more information on the features of each backend, you can check their respective configuration files.
Contributing

Contributions are welcome! And we're happy to help you get started. Feel free to open an issue or a pull request. Things we'd like to see:

- More backends (TensorFlow, TFLite, JAX, etc.).
- More tests (for optimizations and quantization schemes).
- More hardware support (Habana Gaudi Processor (HPU), Apple M series, etc.).
- Task evaluators for the most common tasks (would be great for output regression).
To get started, you can check the CONTRIBUTING.md file.