This repository contains the NNabla implementation of CrossNet-Open-Unmix (X-UMX), an improved version of Open-Unmix (UMX) for music source separation. Compared to the original UMX model, X-UMX achieves improved performance without additional learnable parameters. Details of X-UMX can be found in our paper.
From the Colab link below, you can try X-UMX to generate and listen to the separated audio sources of your own music file. Please give it a try!
Related Projects: x-umx | open-unmix-nnabla | open-unmix-pytorch | musdb | museval | norbert | sigsep-mus-io
As shown in Figure (b), X-UMX has almost the same architecture as the original UMX, differing only by two additional averaging operations that link the instrument models together. Since these operations are not DNN layers, the number of learnable parameters of X-UMX is the same as for the original UMX, and the computational complexity is almost the same. Besides the model, there are two more differences compared to the original UMX: a Multi-Domain Loss (MDL) and a Combination Loss (CL) are used during training, which differ from the original UMX loss function. Hence, these three contributions, i.e., (i) the crossing architecture, (ii) MDL and (iii) CL, make the original UMX more effective without additional learnable parameters.
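To make the Combination Loss (CL) idea more concrete, here is a minimal NumPy sketch (illustrative only, not the repository's NNabla implementation): the loss is evaluated for every partial combination of summed source estimates against the corresponding sum of targets, with a plain MSE standing in for the spectrogram- and time-domain terms of MDL.

```python
from itertools import combinations
import numpy as np

def combination_loss(estimates, targets):
    """estimates, targets: arrays of shape (num_sources, num_samples)."""
    num_sources = estimates.shape[0]
    losses = []
    # all subsets of 1 .. num_sources-1 sources (the full set is just the mixture)
    for k in range(1, num_sources):
        for subset in combinations(range(num_sources), k):
            est_sum = estimates[list(subset)].sum(axis=0)
            tgt_sum = targets[list(subset)].sum(axis=0)
            losses.append(np.mean((est_sum - tgt_sum) ** 2))  # MSE as a stand-in
    return float(np.mean(losses))
```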
- nnabla >= v1.24.0
- musdb
- norbert
- resampy
- ffmpeg
For installation, we recommend using the Anaconda Python distribution. To create a conda environment for open-unmix, simply run:
conda env create -f environment-X.yml
where `X` is either [`cpu`, `gpu`], depending on your system.
A pre-trained X-UMX model, which reproduces the scores given in our paper, can be downloaded here. The model was trained on the MUSDB18 dataset.
In order to use it, please use the following command:
python test.py --inputs [Input mixture (any audio format supported by FFMPEG)] --model {path to downloaded x-umx.h5 weights file} --context cudnn --chunk-dur 10 --out-dir ./results/
Please note that our X-UMX integrates the different instrument networks of the original UMX by a crossing operation and thus requires more memory, so it may be difficult to run the model on a smaller GPU. We therefore suggest setting `--chunk-dur` to a value appropriate for your machine: it breaks the audio into smaller chunks, separates the sources per chunk and stitches them back together, which reduces the required GPU memory. If inference still crashes, reduce the chunk duration further and try again.
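The idea behind `--chunk-dur` can be illustrated with a minimal Python sketch (illustrative only; `separate` is a hypothetical stand-in for the actual model call in test.py):

```python
import numpy as np

def separate_in_chunks(audio, sample_rate, separate, chunk_dur=10.0):
    """Split audio of shape (num_samples, num_channels) into chunks,
    separate each chunk independently and stitch the results back together."""
    chunk_len = int(chunk_dur * sample_rate)
    outputs = []
    for start in range(0, audio.shape[0], chunk_len):
        chunk = audio[start:start + chunk_len]
        outputs.append(separate(chunk))      # peak GPU memory now scales with chunk_len
    return np.concatenate(outputs, axis=0)   # stitched full-length estimates
```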
To perform evaluation in comparison to other SiSEC systems, you need to install the `museval` package using
pip install museval
and then run the evaluation using
python eval.py --model [path to downloaded model file (./x-umx.h5)] --root [Path to MUSDB18 dataset] --out-dir [Path to save musdb estimates and museval results]
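If you just want to score a single pair of reference/estimate arrays, the `museval` package can also be called directly from Python. A hedged sketch with random arrays in place of real audio (shapes `(num_sources, num_samples, num_channels)`):

```python
import numpy as np
import museval

rng = np.random.default_rng(0)
references = rng.standard_normal((4, 2 * 44100, 2))              # 4 sources, 2 s of stereo audio
estimates = references + 0.1 * rng.standard_normal(references.shape)

# framewise BSS-Eval metrics (SDR, ISR, SIR, SAR) per source
sdr, isr, sir, sar = museval.evaluate(references, estimates, win=44100, hop=44100)
print(sdr.shape)  # (num_sources, num_frames)
```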
X-UMX can be trained using the default parameters of the train.py script.
MUSDB18 is one of the largest freely available datasets of professionally produced music tracks (~10h duration) of different styles. It comes with isolated `drums`, `bass`, `vocals` and `others` stems. MUSDB18 contains two subsets: "train", composed of 100 songs, and "test", composed of 50 songs.
To directly train X-UMX, first download the dataset and place it, unzipped, in a directory of your choice (called `root`).
| Argument | Description | Default |
|---|---|---|
| `--root <str>` | path to root of dataset on disk. | `None` |
Also note that, if `--root` is not specified, a 7-second preview version of the MUSDB18 dataset is downloaded automatically. While this is convenient for testing purposes, we do not recommend actually training your model on it.
All files of the MUSDB18 dataset are encoded in the Native Instruments stems format (.mp4). If you want to use WAV files (e.g. for faster audio decoding), the `musdb` package also supports parsing and processing pre-decoded PCM/WAV files. The downloaded STEMS dataset (.mp4) can be decoded into a WAV version either with the Docker-based solution or by running the scripts manually, as shown here.
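For reference, the `musdb` package can parse either format directly from Python; a minimal sketch (the root path is a placeholder, set `is_wav=True` for a pre-decoded WAV copy):

```python
import musdb

mus = musdb.DB(root="/path/to/musdb18", subsets="train", is_wav=False)
track = mus.tracks[0]
mixture = track.audio                    # (num_samples, 2) stereo mixture
vocals = track.targets["vocals"].audio   # isolated vocals stem
print(track.name, mixture.shape, vocals.shape)
```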
When you use the decoded MUSDB18 dataset (WAV version), use the `--is-wav` argument when running train.py.
python train.py --root [Path of MUSDB18] --output [Path to save weights]
python train.py --root [Path of MUSDB18] --output [Path to save weights] --is-wav
For distributed training, install the NNabla package compatible with multi-GPU execution. Use the commands below to start distributed training.
export CUDA_VISIBLE_DEVICES=0,1,2,3  # device ids that you want to use
mpirun -n {no. of devices} python train.py --root [Path of MUSDB18] --output [Path to save weights]
mpirun -n {no. of devices} python train.py --root [Path of MUSDB18] --output [Path to save weights] --is-wav
Please note that the above sample training scripts work on the high-quality 'STEM' or low-quality 'MP4' files. If you would like faster data loading, see the details here on generating decoded 'WAV' files; in that case, please use the `--is-wav` flag for training.
Training on MUSDB18 using X-UMX comes with several design decisions that we made as part of our defaults to improve efficiency and performance:
- chunking: we do not feed full audio tracks into X-UMX but instead chunk the audio into 6 s excerpts (`--seq-dur 6.0`).
- balanced track sampling: to avoid creating a bias towards longer audio tracks, we randomly yield one track from MUSDB18 and subsequently select a random chunk from it. In one epoch we select (on average) 64 samples from each track.
- source augmentation: we apply random gains between `0.25` and `1.25` to all sources before mixing. Furthermore, we randomly swap the channels of the input mixture (see the sketch after this list).
- random track mixing: for a given target we select a random track with replacement. To yield a mixture we draw the interfering sources from different tracks (again with replacement) to increase the generalization of the model.
- fixed validation split: we provide a fixed validation split of 14 tracks. We evaluate on these tracks in full length instead of using chunking to have evaluation as close as possible to the actual test data.
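As a sketch of the default source augmentations mentioned above (random gain and channel swap), applied per source before mixing; this is illustrative NumPy only, not the training code itself:

```python
import numpy as np

rng = np.random.default_rng()

def augment_source(audio):
    """audio: (num_samples, num_channels) array for a single source."""
    audio = rng.uniform(0.25, 1.25) * audio   # random gain in [0.25, 1.25]
    if rng.random() < 0.5:                    # random channel swap
        audio = audio[:, ::-1]
    return audio
```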
Some of the parameters for the MUSDB sampling can be controlled using the following arguments:
| Argument | Description | Default |
|---|---|---|
| `--is-wav` | loads the decoded WAVs instead of STEMS for faster data loading. See more details here. | `False` |
| `--samples-per-track <int>` | sets the number of samples that are randomly drawn from each track | `64` |
| `--source-augmentations <list[str]>` | applies augmentations to each audio source before mixing | `gain, channelswap` |
An extensive list of additional training parameters allows researchers to quickly try out different parameterizations such as a different FFT size. In the table below, we list the additional training parameters and their default values:
| Argument | Description | Default |
|---|---|---|
| `--output <str>` | path where to save the trained output model as well as checkpoints. | `./x-umx` |
| `--epochs <int>` | number of epochs to train | `1000` |
| `--batch-size <int>` | batch size; has an influence on memory usage and performance of the LSTM layer | `16` |
| `--seq-dur <int>` | sequence duration in seconds of the chunks taken from the dataset. A value of `<=0.0` results in full/variable length | `6.0` |
| `--hidden-size <int>` | hidden size parameter of the dense bottleneck layers | `512` |
| `--nfft <int>` | STFT FFT window length in samples | `4096` |
| `--nhop <int>` | STFT hop length in samples | `1024` |
| `--lr <float>` | learning rate | `0.001` |
| `--lr-decay-patience <int>` | learning rate decay patience for the plateau scheduler | `80` |
| `--lr-decay-gamma <float>` | gamma of the learning rate plateau scheduler | `0.3` |
| `--weight-decay <float>` | weight decay for regularization | `0.00001` |
| `--bandwidth <int>` | maximum bandwidth in Hz processed by the LSTM. Input and output are always full bandwidth! | `16000` |
| `--nb-channels <int>` | set number of channels for the model (1 for mono (a spectral downmix is applied), 2 for stereo) | `2` |
| `--context <str>` | extension module, e.g. `'cpu'` or `'cudnn'`. | `'cudnn'` |
| `--seed <int>` | initial seed for the random initialization | `42` |
| `--valid_dur <float>` | to prevent GPU memory overflow, validation is computed and averaged per `valid_dur` seconds. | `100.0` |
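As an example of overriding a few of these defaults, a typical invocation (paths are placeholders) could look like:
python train.py --root [Path of MUSDB18] --output ./x-umx --epochs 1000 --batch-size 16 --nfft 4096 --nhop 1024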
Ryosuke Sawata(*), Stefan Uhlich(**), Shusuke Takahashi(*) and Yuki Mitsufuji(*)
(*) Sony Corporation, Tokyo, Japan
(**)Sony Europe B.V., Stuttgart, Germany
If you use CrossNet-open-unmix for your research – Cite CrossNet-Open-Unmix
@article{sawata20,
title={All for One and One for All: Improving Music Separation by Bridging Networks},
author={Ryosuke Sawata and Stefan Uhlich and Shusuke Takahashi and Yuki Mitsufuji},
year={2020},
eprint={2010.04228},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
If you use open-unmix for your research – Cite Open-Unmix
@article{stoter19,
author={F.-R. St{\"o}ter and S. Uhlich and A. Liutkus and Y. Mitsufuji},
title={Open-Unmix - A Reference Implementation for Music Source Separation},
journal={Journal of Open Source Software},
year=2019,
doi = {10.21105/joss.01667},
url = {https://doi.org/10.21105/joss.01667}
}
If you use the MUSDB dataset for your research - Cite the MUSDB18 Dataset
@misc{MUSDB18,
author = {Rafii, Zafar and
Liutkus, Antoine and
Fabian-Robert St{\"o}ter and
Mimilakis, Stylianos Ioannis and
Bittner, Rachel},
title = {The {MUSDB18} corpus for music separation},
month = dec,
year = 2017,
doi = {10.5281/zenodo.1117372},
url = {https://doi.org/10.5281/zenodo.1117372}
}
If you compare your results with SiSEC 2018 Participants - Cite the SiSEC 2018 LVA/ICA Paper
@inproceedings{SiSEC18,
author="St{\"o}ter, Fabian-Robert and Liutkus, Antoine and Ito, Nobutaka",
title="The 2018 Signal Separation Evaluation Campaign",
booktitle="Latent Variable Analysis and Signal Separation:
14th International Conference, LVA/ICA 2018, Surrey, UK",
year="2018",
pages="293--305"
}