Expected results? #4
I have also noticed low performance on MAPS. But the fair comparison in the MAESTRO paper is the second row of Table 6, the one tested on MAPS without data augmentation: 0.82/0.83/0.61. I suspect this is partly related to #3, although I haven't had the bandwidth to verify that. I'll be able to get back to this this month, before ISMIR starts.
Thanks! As far as I understand, all of their experiments are trained on MAESTRO, and Table 6 (rows 1-2) shows how the model generalizes to the MAPS dataset - I wasn't able to reproduce the same level of generalizability with my implementation. When trained & tested on MAESTRO, this implementation can achieve performance similar to row 4 of Table 6.
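For context, note-level triples like 0.82/0.83/0.61 are precision/recall/F1 where a predicted note counts as correct if its pitch matches a reference note and its onset falls within a ±50 ms tolerance. The actual evaluation goes through mir_eval's transcription metrics; the sketch below is only a simplified illustration (greedy one-to-one matching rather than mir_eval's optimal bipartite matching):

```python
def note_onset_prf(ref, est, onset_tol=0.05):
    """Simplified note-with-onset precision/recall/F1.

    ref, est: lists of (onset_seconds, midi_pitch) tuples.
    A predicted note is a hit if some unmatched reference note has the
    same pitch and an onset within `onset_tol` seconds (one-to-one).
    This is a rough sketch; mir_eval.transcription implements the full
    rules (optimal matching, offset criteria, pitch tolerance in cents).
    """
    matched = [False] * len(ref)
    hits = 0
    for e_on, e_pitch in est:
        for i, (r_on, r_pitch) in enumerate(ref):
            if not matched[i] and r_pitch == e_pitch and abs(r_on - e_on) <= onset_tol:
                matched[i] = True
                hits += 1
                break
    precision = hits / len(est) if est else 0.0
    recall = hits / len(ref) if ref else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

Note that recall is measured against all reference notes, so missing even well-separated notes drags the third number (F1) down quickly.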
I have also trained the model for 100k iterations, but I got the following result. Should I evaluate checkpoints from earlier iterations? Or maybe something went wrong?
@hanjuTsai see #1
Thanks @brianc118!
should download all required dependencies.
Hi @jongwook, I have been trying to train on MAPS since yesterday, but I am still seeing the UserWarning shown above. Additionally, the metrics.json file is empty in my case. Could you kindly point out what I might be doing wrong? The command I am using is:
I checked until after 1100 iterations, but there was no change.
@justachetan Is your loss decreasing? You'll need ~100,000 iterations (ideally more) to see sensible results.
I am not even able to see the loss. The metrics.json file that gets generated is completely empty. As per my understanding, running the above command should log metrics every 100 iterations, right? I saw in another issue that they were able to see some results after 500 iterations, once the UserWarning about empty reference frames went away. Hence, I was confused as to why this is happening.
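As a quick sanity check on whether training is making progress, you can compare an early window of logged losses against a recent one. The helper below assumes a hypothetical line-delimited JSON format with a `"loss/train"` key per entry; the repo's actual metrics.json layout may differ, so treat this as a sketch:

```python
import json

def loss_trend(metrics_path, window=10):
    """Rough check that training loss is decreasing.

    Assumes a hypothetical metrics file with one JSON object per line,
    e.g. {"iteration": 100, "loss/train": 1.23}. Returns True if the
    mean of the last `window` losses is below the mean of the first
    `window`, False otherwise, or None if there isn't enough data yet.
    """
    losses = []
    with open(metrics_path) as f:
        for line in f:
            line = line.strip()
            if line:
                losses.append(json.loads(line)["loss/train"])
    if len(losses) < 2 * window:
        return None  # not enough data to judge a trend
    head = sum(losses[:window]) / window
    tail = sum(losses[-window:]) / window
    return tail < head
```

If the file stays empty for thousands of iterations, the problem is upstream of logging (e.g. the data loader producing empty reference frames), not the metric computation itself.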
Yep, I have not made any changes to your code as of now. I assumed that it was getting generated from there. |
Your loss plot does not seem to have as many fluctuations as mine. Is this while training on MAPS only? |
FYI, your plot contains curves from multiple TensorBoard log files, which is why it looks messy. Also note that my plots are smoothed significantly; the dim curves in the background are the actual data points. If you train until ~100k iterations, yours will look similar.
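For reference, TensorBoard's scalar smoothing slider applies (roughly) a debiased exponential moving average, which is why the smoothed curve looks far calmer than the dim raw points behind it. A minimal sketch of that behavior:

```python
def smooth(values, weight=0.97):
    """Approximate TensorBoard-style scalar smoothing.

    `weight` corresponds to the smoothing slider: an exponential moving
    average, debiased so the first few points aren't pulled toward zero.
    Sketch for illustration; TensorBoard's exact implementation may vary.
    """
    smoothed, last = [], 0.0
    for i, v in enumerate(values, start=1):
        last = last * weight + (1 - weight) * v
        smoothed.append(last / (1 - weight ** i))  # debias early values
    return smoothed
```

With `weight=0` the output equals the input; at the default-ish 0.97 a single noisy spike is damped by a factor of about 33, so two runs can look very different raw yet nearly identical smoothed.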
So even in your case, while training on MAPS, the Accuracy/Recall plots for notes and frames are not available till 100k iterations? |
I don't have the numbers for MAPS at hand, but it'll be generally similar. See Figure 6 of https://arxiv.org/pdf/1906.08512.pdf ; the blue baseline curve is for the MAESTRO dataset. |
The plot seems to indicate that you were not getting any values for Frame F1 or Note F1 until about 100k iterations. Probably due to the
You'll get sensible frame/note F1 values after around 100k, as said earlier. |
Hi @jongwook, I have read your paper "Adversarial Learning for Improved Onsets and Frames Music Transcription". In the paper, you report strong experimental results for both Onsets and Frames and your proposed method.
I get the following when evaluating on MAPS after training the model over 100k iterations.
These metrics appear to be quite low, especially the frame metrics of 0.65/0.65/0.64, whereas the MAESTRO paper reports 0.90/0.95/0.81.
Is this expected?
Thanks!
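For anyone comparing frame triples like 0.65/0.65/0.64: frame-level scores compare binary time × pitch activation matrices sampled at a fixed frame rate (the actual evaluation goes through mir_eval). A minimal numpy sketch of the computation, for illustration only:

```python
import numpy as np

def frame_prf(ref_roll, est_roll):
    """Frame-level precision/recall/F1 from boolean piano rolls.

    ref_roll, est_roll: boolean arrays of shape (time, pitch) at the
    same frame rate. Sketch of how frame metrics are computed; the
    real evaluation uses mir_eval on the standard frame grid.
    """
    tp = np.logical_and(ref_roll, est_roll).sum()
    p = tp / max(est_roll.sum(), 1)  # precision: hits / predicted frames
    r = tp / max(ref_roll.sum(), 1)  # recall: hits / reference frames
    f = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f
```

Because every active frame of every held note is counted, frame metrics are sensitive to offset errors in a way note-onset metrics are not, which is one reason the MAPS gap shows up more strongly there.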