This is a Python project that uses OpenAI's open-source Whisper library to transcribe audio files and detect the language of the audio.
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
If you have the Remote - Containers extension installed in Visual Studio Code, you can open this project in a development container. This will automatically install all the necessary dependencies and set up the environment for you in an isolated Docker container.
Follow these steps:
- Clone the repository.
- Open the project in Visual Studio Code.
- When prompted to "Reopen in Container", select "Reopen in Container". If you're not prompted, press F1 to open the command palette, then select "Remote-Containers: Reopen Folder in Container".
The first time you open the container, it may take a few minutes to build. Once the container is built, the terminal will connect to the running container.
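A development container setup like this is typically driven by a .devcontainer/devcontainer.json file. A minimal sketch is shown below; the container name, base image, and post-create command are assumptions for illustration, not necessarily this repository's actual configuration:

```json
{
  "name": "whisper-transcriber",
  "image": "mcr.microsoft.com/devcontainers/python:3.11",
  "postCreateCommand": "pip install -r requirements.txt"
}
```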
To transcribe the audio files in the audio directory, run the app.py script from the terminal in Visual Studio Code:

```
python app.py
```
The transcribed text will be saved in the transcriptions directory.
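The workflow above (transcribe everything in the audio directory, save the text to the transcriptions directory) might look roughly like the sketch below. The helper names, the "base" model choice, and the exact layout of app.py are assumptions for illustration, not this repository's actual code; it assumes the openai-whisper package is installed (pip install openai-whisper):

```python
from pathlib import Path

AUDIO_DIR = Path("audio")
OUT_DIR = Path("transcriptions")

def output_path(audio_file: Path, out_dir: Path = OUT_DIR) -> Path:
    """Map an input file like audio/foo.mp3 to transcriptions/foo.txt."""
    return out_dir / audio_file.with_suffix(".txt").name

def transcribe_all(model_name: str = "base") -> None:
    """Transcribe every file in AUDIO_DIR and save the text to OUT_DIR."""
    import whisper  # heavy dependency, imported only when actually transcribing

    model = whisper.load_model(model_name)
    OUT_DIR.mkdir(exist_ok=True)
    for audio_file in sorted(AUDIO_DIR.glob("*")):
        result = model.transcribe(str(audio_file))
        output_path(audio_file).write_text(result["text"], encoding="utf-8")
        # Whisper also reports the detected language of the audio
        print(f"{audio_file.name}: detected language {result['language']}")
```

Calling transcribe_all() then produces one .txt file per audio file.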
If you have an audio file with multiple speakers, you can perform speaker diarization by adding your Hugging Face authentication token to your .env file:

```
HF_AUTH_TOKEN=
```
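A common way to wire this up is the pyannote.audio speaker-diarization pipeline, which reads the token at load time. The sketch below is an assumption about how the token might be consumed, not this repository's actual code; it assumes HF_AUTH_TOKEN has already been loaded into the environment (e.g. from the .env file):

```python
import os

def get_hf_token() -> str:
    """Read the Hugging Face token from the environment, failing loudly if absent."""
    token = os.environ.get("HF_AUTH_TOKEN", "")
    if not token:
        raise RuntimeError("Set HF_AUTH_TOKEN in your .env file")
    return token

def diarize(audio_path: str):
    """Run speaker diarization on one audio file (requires pyannote.audio)."""
    from pyannote.audio import Pipeline  # assumed dependency

    pipeline = Pipeline.from_pretrained(
        "pyannote/speaker-diarization", use_auth_token=get_hf_token()
    )
    return pipeline(audio_path)
```

The returned annotation can then be combined with the Whisper transcript to attribute text segments to speakers.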