Releases: BBC-Esq/ctranslate2-faster-whisper-transcriber
v3.0.1 - perfected
v2.4
v2.3.1 - bug fix
Temporary change in the command to install faster-whisper due to a mistake in uploading to pypi.org.
v2.3 - update faster-whisper
Update to faster-whisper 0.10.0.
v2.2 - transcriber!
Bump to 2.2 due to one additional commit. Otherwise, same as version 2.1.
v2.1 - more options and creature comforts
Broke out into more scripts for manageability.
Now checks whether a CUDA device is present and only shows it as an option if so.
Automatically detects the supported quantizations based on your CPU and CUDA device (if present), greatly simplifying the user experience.
Settings are now managed via a YAML config file.
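A settings file like the one described might look something like this. This is only a sketch; the actual key names and layout in the repo's YAML file are assumptions:

```yaml
# Hypothetical config.yaml layout; the real keys in the repo may differ.
model_size: base.en    # Whisper model variant
quantization: int8     # compute type, e.g. int8 or float16
device: cpu            # "cpu" or "cuda" (cuda shown only when a CUDA device is detected)
```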
v2.0 - Houston, we have options!
Added the ability to choose the model size, quantization, and compute device on the fly. Beware: there are no guardrails yet, so if you choose "cuda" and you don't have an Nvidia GPU, I can't guarantee how it'll behave.
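A minimal guardrail for that case could simply fall back to the CPU when "cuda" is requested but no Nvidia GPU is visible. This is a sketch, not the repo's code; it assumes that `nvidia-smi` being on PATH is a reasonable proxy for a usable CUDA device:

```python
import shutil


def cuda_available() -> bool:
    """Rough proxy for a usable CUDA device: is nvidia-smi on PATH?"""
    return shutil.which("nvidia-smi") is not None


def choose_device(requested: str) -> str:
    """Return the requested device, falling back to CPU if CUDA is missing."""
    if requested == "cuda" and not cuda_available():
        print("CUDA requested but no Nvidia GPU detected; falling back to cpu.")
        return "cpu"
    return requested
```

With a check like this, the device passed to the model constructor is always one the machine can actually use, avoiding the undefined behavior the note warns about.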
To check which quantizations your CPU and GPU support, use my other tool HERE.
Create an issue if you would like to see or work on additional features, or if you find a bug.
v1.1 - updates + .exe
Updated the default settings within main.py and added detailed instructions. I've also quantized all of the Whisper models myself and uploaded them to huggingface.co, which the script now uses. Read the instructions within main.py before changing the recommended size and/or quantization level.
Also added TWO .exe files, no installation necessary:
- CUDA version uses small.en model
- The other uses base.en, cpu-only, 4 threads.
v1.0 - new release!
Update README.md