Making VITS efficient (no longer improving, but feel free to open a PR if you want)

Goals

  • Try to implement LoRA finetuning on VITS by modifying attentions.py, as described in the LoRA paper (see the sketch after this list)
    • Doesn't work yet, as the generator doesn't get updated; needs more research
  • Try to implement 8-bit training using bitsandbytes to reduce VRAM usage
    • 8-bit optimisers (see the 8-bit AdamW section below)
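
As a rough illustration of what a LoRA patch would do, here is a minimal, hypothetical sketch of a low-rank adapter around a frozen linear projection. None of this is the repo's actual code: the class name, rank `r`, and scaling `alpha` are assumptions, and since attentions.py projects queries/keys/values with 1x1 convolutions, a real patch would wrap those modules rather than nn.Linear.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Hypothetical adapter: freezes a base nn.Linear and adds a trainable
    low-rank update, y = W x + (alpha / r) * B A x, as in the LoRA paper."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the original weights stay frozen
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # Base projection plus the low-rank correction
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling
```

Only lora_A and lora_B would then be handed to the optimizer, which is also where the note above bites: if the wrapped generator modules never receive gradients end to end, nothing actually updates.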

Plan

  • Extract discriminator from hifigan
  • Modify models.py to match the hifi-gan implementation
  • Make sure training loop runs
  • Clean up logging, maybe integrate WandB to track training progress (see the logging sketch after this list)
    • WandB integrated! Run train_wandb.py to use it
  • Test finetune as-is on test dataset to make sure patched discriminator works
  • Implement LoRA
  • Test finetuning with LoRA using the same test dataset
  • Implement 8-bit optimizers
  • Create webpage to show results (see the W&B report)
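
For reference, the WandB integration presumably boils down to something like the hedged sketch below. The loss names mirror those in the original train.py; the project name and config values are placeholders, and whether train_wandb.py does exactly this is an assumption.

```python
import wandb

# Placeholder project/config; the real run config comes from configs/ljs_base.json
wandb.init(project="vits-finetune", config={"learning_rate": 2e-4, "batch_size": 16})

# Inside the training loop, after the losses are computed each step:
wandb.log({
    "loss/g_total": loss_gen_all.item(),   # generator total loss
    "loss/d_total": loss_disc_all.item(),  # discriminator total loss
    "loss/mel": loss_mel.item(),           # mel-spectrogram reconstruction loss
    "grad_norm/g": grad_norm_g,            # generator gradient norm
}, step=global_step)
```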

Finetuning and Pretrained Models

  • The generator for LJSpeech is here
  • The discriminator (extracted from hifi-gan) is here
  • A notebook with a running training loop is here
  • W&B report on finetuning with 8-bit optimisers
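
To start a finetune from those checkpoints, something along the lines of the following sketch should work. It leans on the repo's own SynthesizerTrn, MultiPeriodDiscriminator, and utils.load_checkpoint; the checkpoint file names are placeholders, not the actual download names.

```python
import utils
from models import SynthesizerTrn, MultiPeriodDiscriminator
from text.symbols import symbols

hps = utils.get_hparams_from_file("configs/ljs_base.json")

net_g = SynthesizerTrn(
    len(symbols),
    hps.data.filter_length // 2 + 1,
    hps.train.segment_size // hps.data.hop_length,
    **hps.model).cuda()
net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda()

# Placeholder paths: the LJSpeech generator and the discriminator
# extracted from hifi-gan, as linked above.
utils.load_checkpoint("pretrained_ljs.pth", net_g, None)
utils.load_checkpoint("discriminator_ljs.pth", net_d, None)
```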

8-bit AdamW

VITS at training with 8-bit AdamW

By switching the optimizer to 8-bit AdamW, training now uses approximately 8.5 GB of VRAM, whereas before it was using 12 GB!
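
The swap itself is small. A hedged sketch against the optimizer setup in the original train.py might look like the following, with bnb.optim.AdamW8bit standing in for torch.optim.AdamW; whether this matches the repo's actual change in train_wandb.py is an assumption.

```python
import bitsandbytes as bnb

# Replace torch.optim.AdamW with the 8-bit variant from bitsandbytes.
# Hyperparameters come from the same hps.train config used by train.py.
optim_g = bnb.optim.AdamW8bit(
    net_g.parameters(),
    hps.train.learning_rate,
    betas=hps.train.betas,
    eps=hps.train.eps)
optim_d = bnb.optim.AdamW8bit(
    net_d.parameters(),
    hps.train.learning_rate,
    betas=hps.train.betas,
    eps=hps.train.eps)
```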

Ethical Concerns

  • I realise that if this works, anyone remotely knowledgeable about machine learning will be able to finetune VITS on large datasets quite quickly to achieve pretty good voice cloning. However, this is already possible (tortoise-tts: mrq fork, DLAS fork), although it takes a long time to finetune and generate.
  • However, sparks of efficient finetuning for TTS systems are already here; it's only a matter of time before someone like me does it for other models.
  • My initial plan is to only provide a comparison of results and any good, ethical finetunes. I believe the real value lies in dataset creation, so I will not be open-sourcing any scripts to clean and curate the data.

Readme from Original Repo

VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech

Jaehyeon Kim, Jungil Kong, and Juhee Son

In our recent paper, we propose VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech.

Several recent end-to-end text-to-speech (TTS) models enabling single-stage training and parallel sampling have been proposed, but their sample quality does not match that of two-stage TTS systems. In this work, we present a parallel end-to-end TTS method that generates more natural sounding audio than current two-stage models. Our method adopts variational inference augmented with normalizing flows and an adversarial training process, which improves the expressive power of generative modeling. We also propose a stochastic duration predictor to synthesize speech with diverse rhythms from input text. With the uncertainty modeling over latent variables and the stochastic duration predictor, our method expresses the natural one-to-many relationship in which a text input can be spoken in multiple ways with different pitches and rhythms. A subjective human evaluation (mean opinion score, or MOS) on the LJ Speech, a single speaker dataset, shows that our method outperforms the best publicly available TTS systems and achieves a MOS comparable to ground truth.

Visit our demo for audio samples.

We also provide the pretrained models.

Update note: Thanks to Rishikesh (ऋषिकेश), our interactive TTS demo is now available on Colab Notebook.

Figures: VITS at training and VITS at inference

Pre-requisites

  1. Python >= 3.6
  2. Clone this repository
  3. Install Python requirements. Please refer to requirements.txt
    1. You may need to install espeak first: apt-get install espeak-ng
  4. Download datasets
    1. Download and extract the LJ Speech dataset, then rename or create a link to the dataset folder: ln -s /path/to/LJSpeech-1.1/wavs DUMMY1
    2. For the multi-speaker setting, download and extract the VCTK dataset, and downsample wav files to 22050 Hz (see the resampling sketch after the build commands below). Then rename or create a link to the dataset folder: ln -s /path/to/VCTK-Corpus/downsampled_wavs DUMMY2
  5. Build Monotonic Alignment Search and run preprocessing if you use your own datasets.
# Cython-version Monotonic Alignment Search
cd monotonic_align
mkdir monotonic_align
python setup.py build_ext --inplace

# Preprocessing (g2p) for your own datasets. Preprocessed phonemes for LJ Speech and VCTK have been already provided.
# python preprocess.py --text_index 1 --filelists filelists/ljs_audio_text_train_filelist.txt filelists/ljs_audio_text_val_filelist.txt filelists/ljs_audio_text_test_filelist.txt 
# python preprocess.py --text_index 2 --filelists filelists/vctk_audio_sid_text_train_filelist.txt filelists/vctk_audio_sid_text_val_filelist.txt filelists/vctk_audio_sid_text_test_filelist.txt
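
The VCTK step above asks for wavs downsampled to 22050 Hz. A minimal sketch using librosa and soundfile follows; the input/output paths are placeholders and this is not a script shipped with the repo.

```python
from pathlib import Path

import librosa
import soundfile as sf

# Placeholder paths: the original 48 kHz VCTK wavs and the 22.05 kHz output dir
SRC = Path("/path/to/VCTK-Corpus/wav48")
DST = Path("/path/to/VCTK-Corpus/downsampled_wavs")

for wav_path in SRC.rglob("*.wav"):
    # librosa resamples to the requested rate while loading
    audio, _ = librosa.load(str(wav_path), sr=22050)
    out_path = DST / wav_path.relative_to(SRC)
    out_path.parent.mkdir(parents=True, exist_ok=True)
    sf.write(str(out_path), audio, 22050)
```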

Training Example

# LJ Speech
python train.py -c configs/ljs_base.json -m ljs_base

# VCTK
python train_ms.py -c configs/vctk_base.json -m vctk_base

Inference Example

See inference.ipynb
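
For a quick idea of what the notebook does, a condensed sketch of single-utterance inference with the LJ Speech generator might look like this. The checkpoint path, sampling knobs, and example sentence are assumptions; defer to inference.ipynb for the authoritative version.

```python
import torch

import commons
import utils
from models import SynthesizerTrn
from text import text_to_sequence
from text.symbols import symbols

hps = utils.get_hparams_from_file("configs/ljs_base.json")

net_g = SynthesizerTrn(
    len(symbols),
    hps.data.filter_length // 2 + 1,
    hps.train.segment_size // hps.data.hop_length,
    **hps.model).cuda()
net_g.eval()
utils.load_checkpoint("pretrained_ljs.pth", net_g, None)  # placeholder path

# Convert text to a phoneme-ID sequence, inserting blanks if the config asks for it
text_norm = text_to_sequence("Hello world.", hps.data.text_cleaners)
if hps.data.add_blank:
    text_norm = commons.intersperse(text_norm, 0)
x = torch.LongTensor(text_norm).unsqueeze(0).cuda()
x_lengths = torch.LongTensor([x.size(1)]).cuda()

with torch.no_grad():
    audio = net_g.infer(x, x_lengths, noise_scale=0.667,
                        noise_scale_w=0.8, length_scale=1.0)[0][0, 0]
audio = audio.cpu().numpy()  # waveform at hps.data.sampling_rate (22050 Hz for LJ Speech)
```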