
1.3.8.1 - Fix model checking

@Dadangdut33 released this 31 Dec 10:47

This release fixes a model checking bug in the previous version.

What's Changed

  • Fix faster whisper model checking #63

Full Changelog: 1.3.8...1.3.8.1

Notes

  • Before downloading / installing, please take a look at the wiki and read the getting started section.
  • Use the CUDA version for GPU support
  • Linux/Mac users can follow this installation note to install Speech Translate as a module.
  • If you previously installed Speech Translate as a module, you can update by running pip install -U git+https://github.com/Dadangdut33/Speech-Translate.git --upgrade --force-reinstall --no-deps
  • If you installed using the installer, you can download and run the installer below to update.
  • If you have any suggestions or find any bugs, please feel free to open a discussion or an issue.

Requirements

  • Compatible OS Installation:

OS | Installation from Prebuilt binary | Installation as a Module | Installation from Git
--- | --- | --- | ---
Windows | ✔️ | ✔️ | ✔️
MacOS | | ✔️ | ✔️
Linux | | ✔️ | ✔️

* Python 3.8 or later (3.11 is recommended) for installation as a module.

Size | Parameters | Required VRAM | Relative speed
--- | --- | --- | ---
tiny | 39 M | ~1 GB | ~32x
base | 74 M | ~1 GB | ~16x
small | 244 M | ~2 GB | ~6x
medium | 769 M | ~5 GB | ~2x
large | 1550 M | ~10 GB | 1x

* This information is also available in the app (hover over the model selection and a tooltip will show the model info). Also note that when using faster-whisper, models run significantly faster and use less VRAM; for more information, please visit the faster-whisper repository.
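
For reference, below is a minimal sketch of loading one of the model sizes from the table above with the faster-whisper library directly. This is faster-whisper's own API, not Speech Translate code; the chosen model size, device, and audio file path are only example assumptions.

```python
from faster_whisper import WhisperModel

# "small" needs roughly ~2 GB of VRAM per the table above; pick a larger
# size if your GPU has more memory, or use device="cpu" without CUDA.
model = WhisperModel("small", device="cuda", compute_type="float16")

# transcribe() returns a generator of segments plus info about the audio
segments, info = model.transcribe("audio.wav")
for segment in segments:
    print(f"[{segment.start:.2f} -> {segment.end:.2f}] {segment.text}")
```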