
Releases: BBC-Esq/ctranslate2-faster-whisper-transcriber

v3.0.1 - perfected

01 Jul 20:37
14d7d9b

Now uses sounddevice instead of pyaudio. Upgraded the faster-whisper version.

CUDA no longer needs to be installed separately; the program automatically downloads the appropriate .dll files.

v2.4

22 Feb 23:23
76bb97c
Removed the pyperclip dependency.

v2.3.1 - bug fix

22 Feb 02:43
76a1738

Temporarily changed the command used to install faster-whisper due to a mistake in the upload to pypi.org.

v2.3 - update faster-whisper

05 Feb 01:28
12e1e10

Updated to faster-whisper 0.10.0.

v2.2 - transcriber!

17 Jan 04:26
0ab1373

Bumped to 2.2 due to one additional commit; otherwise the same as version 2.1.

v2.1 - more options and creature comforts

27 Oct 16:41
314e8e4

Broke the code out into more scripts for manageability.

Now checks whether a CUDA device is present and only shows it as an option if so.

Automatically detects the supported quantizations based on your CPU and CUDA device (if present), greatly simplifying user experience.

Added a YAML config file for settings.
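A settings loader of this kind typically merges file values over built-in defaults; a hedged sketch (the keys, defaults, and filename here are assumptions, not the repo's actual schema):

```python
# Sketch of YAML-backed settings with fallback defaults; keys are illustrative.
import yaml

DEFAULTS = {"model_size": "base.en", "device": "cpu", "quantization": "int8"}

def load_settings(path: str = "config.yaml") -> dict:
    """Merge values from a YAML file over the built-in defaults."""
    try:
        with open(path) as f:
            overrides = yaml.safe_load(f) or {}
    except FileNotFoundError:
        overrides = {}
    return {**DEFAULTS, **overrides}
```

The `or {}` guard matters because `yaml.safe_load` returns `None` for an empty file, which would otherwise break the dict merge.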

v2.0 - Houston, we have options!

26 Oct 01:12
b2b419a

Added the ability to choose the model size, quantization, and compute device on the fly. Beware: there are no guardrails yet, so if you choose "cuda" and you don't have an Nvidia GPU, I can't guarantee how it will behave.

To check which quantizations your CPU and GPU support, use my other tool HERE.

Create an issue if you would like to see or work on additional features, or if you find a bug.

v1.1 - updates + .exe

14 Oct 15:15
1c4b22a

Updated the default settings within main.py and added detailed instructions. I've also quantized all of the Whisper models myself and put them on huggingface.co, which the script now uses.

Read the instructions within main.py before changing the recommended size and/or quantization level.

Also added TWO .exe files; no installation necessary.

  • The CUDA version uses the small.en model.
  • The other uses base.en, CPU-only, with 4 threads.

v1.0 - new release!

02 Oct 18:51
fa8c5e4
Updated README.md.