A simple char-rnn LSTM model that composes expressive music melodies, implemented in TensorFlow.

Simple Music char-rnn

Accompanying code for my blog post "Solving Grade 8 Music Theory question with deep learning — Completing a melody with Tensorflow".

Listen to samples generated by the model here: https://soundcloud.com/quan-lim-2/sets/simple-lstm
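At its core, a char-rnn treats the melody as a sequence of note tokens and trains an LSTM to predict the next token from the previous ones. A minimal sketch of such a model in TensorFlow/Keras is shown below; the vocabulary size and layer sizes are illustrative assumptions, not the repo's actual architecture.

import tensorflow as tf

VOCAB_SIZE = 128   # assumed size of the note/event vocabulary
EMBED_DIM = 64
LSTM_UNITS = 256

# Minimal char-rnn-style model: predict the next token at every position.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    tf.keras.layers.LSTM(LSTM_UNITS, return_sequences=True),
    tf.keras.layers.Dense(VOCAB_SIZE),  # logits over the next token
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)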

Dependencies
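The model is implemented in TensorFlow, so at a minimum you will need TensorFlow installed (the optional Dockerfile below provides a reproducible environment), e.g.:

$ pip install tensorflow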

Usage

Clone repo

$ git clone [email protected]:daQuincy/Simple-Music-char-rnn.git
$ cd Simple-Music-char-rnn

Docker [OPTIONAL]

$ docker build -f Dockerfile --tag music:1.0 .

Once the image is built, start a container and mount the cloned folder:

$ docker run -it --gpus all --ipc=host --device /dev/nvidia0 --device /dev/nvidia-uvm --device /dev/nvidia-uvm-tools --device /dev/nvidiactl -v ${PWD}:/home/ubuntu/music music:1.0 bash

Omit the --gpus, --ipc and --device flags if running on CPU, as shown below.
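For a CPU-only run, the same command without the GPU-related flags would be:

$ docker run -it -v ${PWD}:/home/ubuntu/music music:1.0 bash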

Download and prepare dataset

$ python scrape.py
$ python data.py

Edit config

Training configurations can be edited in config.py.
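The exact options are defined in config.py; as an illustration only, a config file for a model like this typically holds hyperparameters along these lines (the names and values here are hypothetical, not the repo's actual settings):

# Hypothetical config.py contents -- check the real file for the actual options.
batch_size = 64
seq_length = 100        # length of note sequences fed to the LSTM
lstm_units = 256
learning_rate = 1e-3
num_epochs = 50
checkpoint_dir = "experiments/experiment_1"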

Train

$ python train.py

Generate new music

Example:

$ python inference.py \
    --prime_midi prime_midi.mid \
    --output_file output.mid \
    --checkpoint experiments/experiment_1/yamaha \
    --temperature 0.8 \
    --length 120

--prime_midi     input MIDI file used to prime the model
--output_file    output MIDI file
--checkpoint     training checkpoint to load
--temperature    [OPTIONAL] model temperature
--length         [OPTIONAL] number of notes in the generated music
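The --temperature flag controls how random the sampling is: values below 1.0 make the output more conservative, values above 1.0 more surprising. Conceptually, sampling with temperature works like the sketch below (illustrative only, not the repo's actual inference code):

import numpy as np

def sample_with_temperature(logits, temperature=0.8):
    # Scale the logits, softmax them, then draw one token index at random.
    logits = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)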
