Run F5-TTS using ONNX Runtime for efficient and flexible text-to-speech processing.
- 2025/3/16: Now supports the latest SWivid/F5-TTS v1. Please run `pip install f5-tts --upgrade` first.
- 2025/3/05: The issue of silent output when using float16 has been resolved. Set `use_fp16_transformer = True` (Export_F5.py, line 21) before exporting.
- 2025/3/01: endink added a Windows one-click export script to make things easier for Windows integration users. The script installs dependencies automatically. Usage:
  ```
  conda create -n f5_tts_export python=3.10 -y
  conda activate f5_tts_export
  git clone https://github.com/DakeQQ/F5-TTS-ONNX.git
  cd F5-TTS-ONNX
  .\export_windows.bat
  ```
- Windows OS + Intel/AMD/Nvidia GPU:
  - Easy solution using ONNX-DirectML for GPUs on Windows.
  - Install ONNX Runtime DirectML: `pip install onnxruntime-directml --upgrade`
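As a minimal sketch, the provider order for a DirectML session would look like the following; the model filename is a placeholder, and session creation is shown in comments for illustration only:

```python
# Prefer DirectML, fall back to CPU; 'DmlExecutionProvider' becomes
# available after installing onnxruntime-directml.
providers = ['DmlExecutionProvider', 'CPUExecutionProvider']

# Session creation (model path is a placeholder, not a file shipped by this repo):
# import onnxruntime as ort
# session = ort.InferenceSession('f5_model.onnx', providers=providers)
```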
- CPU Only:
  - For users with 'CPU only' setups, including Intel or AMD, you can try using `['OpenVINOExecutionProvider']` and adding `provider_options` for a slight performance boost of around 5~20%:
    ```python
    provider_options = [{
        'device_type': 'CPU',
        'precision': 'ACCURACY',
        'num_of_threads': MAX_THREADS,
        'num_streams': 1,
        'enable_opencl_throttling': True,
        'enable_qdq_optimizer': False,
    }]
    ```
  - Remember to run `pip uninstall onnxruntime-gpu` and `pip uninstall onnxruntime-directml` first, then `pip install onnxruntime-openvino --upgrade`.
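A minimal sketch of wiring these options into a session, assuming `onnxruntime-openvino` is installed; `MAX_THREADS` and the model path are placeholders, not values defined by this project:

```python
import os

MAX_THREADS = os.cpu_count() or 4  # placeholder: tune to your machine

providers = ['OpenVINOExecutionProvider']
provider_options = [{
    'device_type': 'CPU',
    'precision': 'ACCURACY',
    'num_of_threads': MAX_THREADS,
    'num_streams': 1,
    'enable_opencl_throttling': True,
    'enable_qdq_optimizer': False,
}]

# With onnxruntime-openvino installed, the session would be created as:
# import onnxruntime as ort
# session = ort.InferenceSession('model.onnx', providers=providers,
#                                provider_options=provider_options)
```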
- Intel OpenVINO:
  - If you are using a recent Intel chip, you can try `['OpenVINOExecutionProvider']` with `provider_options` `'device_type': 'XXX'`, where `XXX` can be one of the following (no guarantee that it will work or function well):
    - `CPU`
    - `GPU`
    - `NPU`
    - `AUTO:NPU,CPU`
    - `AUTO:NPU,GPU`
    - `AUTO:GPU,CPU`
    - `AUTO:NPU,GPU,CPU`
    - `HETERO:NPU,CPU`
    - `HETERO:NPU,GPU`
    - `HETERO:GPU,CPU`
    - `HETERO:NPU,GPU,CPU`
  - Remember to run `pip uninstall onnxruntime-gpu` and `pip uninstall onnxruntime-directml` first, then `pip install onnxruntime-openvino --upgrade`.
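As an illustration, a small helper (hypothetical, not part of this project) can validate a `device_type` string against the list above before building the provider configuration:

```python
# Hypothetical helper: validate an OpenVINO device_type string and build
# the corresponding providers / provider_options pair.
def openvino_config(device_type: str, num_threads: int = 4):
    allowed = {'CPU', 'GPU', 'NPU'}
    # Composite modes look like 'AUTO:NPU,GPU,CPU' or 'HETERO:GPU,CPU'.
    prefix, _, devices = device_type.partition(':')
    names = devices.split(',') if devices else [prefix]
    if devices and prefix not in {'AUTO', 'HETERO'}:
        raise ValueError(f'unknown mode: {prefix!r}')
    if not set(names) <= allowed:
        raise ValueError(f'unknown device in {device_type!r}')
    providers = ['OpenVINOExecutionProvider']
    provider_options = [{'device_type': device_type,
                         'precision': 'ACCURACY',
                         'num_of_threads': num_threads}]
    return providers, provider_options
```

For example, `openvino_config('AUTO:NPU,GPU,CPU')` returns the provider pair for automatic device selection, while an unknown device name raises `ValueError`.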
- Simple GUI Version:
  - Try the easy-to-use GUI version:
- NVIDIA TensorRT Support:
  - For NVIDIA GPU optimization with TensorRT, visit:
- Download
  - Explore more related projects and resources: