A Diffusion-Based Generative Equalizer for Music Restoration

This repository aims to create a conditional version of the CQT-Diff+ model while retaining its unconditional features (inpainting and audio generation). At the moment, it is NOT ready for use; this modification is in an alpha state. The license for this modification is MIT, the same as that of the original BABE-2 project.

E. Moliner, M. Turunen, F. Elvander and V. Välimäki, "A Diffusion-Based Generative Equalizer for Music Restoration", submitted to DAFx24, Mar. 2024

News 🚨

  • 21/06/2024 | I will resume the project and optimize the model's architecture so that it can be trained on less powerful hardware.


Abstract

This paper presents a novel approach to audio restoration, focusing on the enhancement of low-quality music recordings, and in particular historical ones. Building upon a previous algorithm called BABE, or Blind Audio Bandwidth Extension, we introduce BABE-2, which presents a series of significant improvements. This research broadens the concept of bandwidth extension to *generative equalization*, a novel task that, to the best of our knowledge, has not been explicitly addressed in previous studies. BABE-2 is built around an optimization algorithm utilizing priors from diffusion models, which are trained or fine-tuned using a curated set of high-quality music tracks. The algorithm simultaneously performs two critical tasks: estimation of the filter degradation magnitude response and hallucination of the restored audio. The proposed method is objectively evaluated on historical piano recordings, showing a marked enhancement over the prior version. The method yields similarly impressive results in rejuvenating the works of renowned vocalists Enrico Caruso and Nellie Melba. This research represents an advancement in the practical restoration of historical music.

Listen to our audio samples

Read the preprint on arXiv

Restore a recording with a pretrained model

The pretrained checkpoints used in the paper experiments are available here

python test.py --config-name=conf_singing_voice.yaml tester=singer_evaluator_BABE2 tester.checkpoint="path/to/checkpoint.pt" id="BABE2_restored" tester.evaluation.single_recording="path/to/recording.wav"

Test unconditional sampling

python test.py --config-name=conf_piano.yaml tester=only_uncond_maestro tester.checkpoint="path/to/checkpoint.pt" id="BABE2" tester.modes=["unconditional"]

Train or fine-tune your own conditional diffusion model

Note: If you intend to train this version of the model, you'll need to edit the "conf_custom.yaml" file, replacing the placeholder paths "/path/to/clean/dataset" and "/path/to/noisy/dataset" with the locations of the clean and noisy datasets that will be used for training.
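As a rough sketch, the edited entries might look like the following. The key names and nesting here are assumptions inferred from the `dset.path` override used in the fine-tuning command; check your copy of "conf_custom.yaml" for the actual structure:

```yaml
# Hypothetical excerpt of conf_custom.yaml -- key names are illustrative only.
dset:
  path_clean: /data/my_dataset/clean   # was /path/to/clean/dataset
  path_noisy: /data/my_dataset/noisy   # was /path/to/noisy/dataset
```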

Train a model from scratch (conditional noisy and clean):

python train.py --config-name=conf_custom.yaml model_dir="experiments/model_dir" exp.batch=$batch_size

Fine-tune from pre-trained model:

python train.py --config-name=conf_custom.yaml model_dir="experiments/finetuned_model_dir" exp.batch=$batch_size dset.path="/path/to/dataset" exp.finetuning=True exp.base_checkpoint="/path/to/pretrained/checkpoint.pt"
