FSD50K Speech Model Fine-tuning Tutorial #201
Conversation
for more information, see https://pre-commit.ci
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@ Coverage Diff @@
##            main    #201   +/-   ##
=====================================
  Coverage     70%     70%
=====================================
  Files          2       2
  Lines        413     413
=====================================
  Hits         291     291
  Misses       122     122
hey @FlorentMeyer, mind checking the file you uploaded? It looks like it's too big and there might be some redundant content in it. Could you clean it up?
not sure what happened, but GH does not want to show me the diff :/
…yer/tutorials into fsd50K_speech_model_finetuning
Good evening, I should mention that the code in the converted notebook was exactly the same as in this Colab notebook (having removed the […]). My last commit therefore makes these changes to the linked Colab notebook:
Changes to the .yaml file:
I'm just not sure whether the Pandas dataframes with the audio players will get rendered.
for more information, see https://pre-commit.ci
# ## Compute metrics

# %% id="zlTooqqp8FWk"
mAP_micro = average_precision_score(
mind using torchmetrics functional metrics? https://torchmetrics.readthedocs.io/en/stable/classification/average_precision.html#functional-interface
At the time I wrote the code, torchmetrics.functional.average_precision's target took "integer labels" only, therefore not accepting multi-hot labels. Just let me check whether this was fixed and whether I get the same results as with scikit-learn!
OK, the new implementations of multilabel_average_precision give the same results as scikit-learn.
let's use that :)
# ## Compute metrics

# %% id="zlTooqqp8FWk"
mAP_micro = average_precision_score(
also, it looks like you have true values for preds; I'd recommend using test_step instead to show the metrics.
What do you mean? Something like this, with on_step=False, on_epoch=True to only log at the end of the epoch, according to https://pytorch-lightning.readthedocs.io/en/stable/extensions/logging.html#logging-from-a-lightningmodule: "The above config for validation applies for test hooks as well."
Suggested change:
- mAP_micro = average_precision_score(
+ # In __init__ (num_labels is required; num_classes here stands for the model's label count):
+ self.mAP = torchmetrics.classification.MultilabelAveragePrecision(num_labels=num_classes)
+ # In test_step:
+ self.mAP(preds, y)
+ self.log('mAP', self.mAP)
Then the class version of torchmetrics should be preferred to the functional one, I'd say?
okay, it's fine, let's use functional metrics here since you already have all the targets and predictions. Modular metrics are useful when you are aggregating the metrics, say at step level.
Co-authored-by: Rohit Gupta <[email protected]>
I see there are still problems with:
…yer/tutorials into fsd50K_speech_model_finetuning
for more information, see https://pre-commit.ci
OK, I also saw that there were bizarre things happening in the notebook: it looks like the pre-commit hooks are moving stuff around, causing duplication every time I pull them into my own code before being able to push again (example), and it's easy to miss things when reading a notebook as a .py file. Anyway, I read the whole file carefully and this should be fixed now. Also, all cells have an ID, so I'm not sure where the "cells are missing IDs" error comes from :/
Small bump!
# name: python3
# ---

# %% [markdown] id="CI0JECKA9AnY"
let's remove all the IDs
for more information, see https://pre-commit.ci
Force-pushed from d4acf6f to ac8f7ba
for more information, see https://pre-commit.ci
for more information, see https://pre-commit.ci
Seems the repo https://github.com/FlorentMeyer/fsd50k_speech_model_finetuning and this example have been silent since 2022, so let's close it for now, but feel free to reopen any time 🦩
Before submitting
What does this PR do?
Add FSD50K Speech Model Fine-tuning Tutorial.
PR review
Did you have fun?
A lot 🙃