[ADD] Optimise on custom metric #486

Open
Wants to merge 5 commits into base: development

Conversation

@ravinkohli (Contributor) commented Dec 19, 2022

This PR enables the user to add their own custom metric.

Types of changes

  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)

Note that a pull request should contain only one of the following: refactoring, new features, or documentation changes.
Please separate such changes and send us individual PRs for each.
For more information on how to create a good pull request, please refer to The anatomy of a perfect pull request.

Checklist:

  • My code follows the code style of this project.
  • My change requires a change to the documentation.
  • I have updated the documentation accordingly.
  • Have you checked to ensure there aren't other open Pull Requests for the same update/change?
  • Have you added an explanation of what your changes do and why you'd like us to include them?
  • Have you written new tests for your core changes, as applicable?
  • Have you successfully run tests with your changes locally?

Description

This PR adds a function, add_metric, which allows the user to pass any custom metric and make it available to AutoPyTorch for HPO. It also includes an example of how to use this functionality.
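
A rough sketch of how this might look from the user's side follows. The exact signature of add_metric is not shown in this PR description, so the import path, parameter names, and the registered-name handling below are assumptions, not the PR's confirmed API:

import numpy as np
from sklearn.datasets import make_classification

# Assumed import path, based on the file this PR touches
# (autoPyTorch/pipeline/components/training/metrics/utils.py);
# the real signature may differ.
from autoPyTorch.pipeline.components.training.metrics.utils import add_metric
from autoPyTorch.api.tabular_classification import TabularClassificationTask


def score_function(y_test, y_pred):
    # Fraction of exactly matching predictions (plain accuracy).
    return float(np.mean(y_test == y_pred))


# Hypothetical registration call: make the metric visible to AutoPyTorch's HPO.
add_metric(metric=score_function, name="custom_accuracy")

# Once registered, the metric can be selected as the optimisation target.
X, y = make_classification(n_samples=200, random_state=1)
api = TabularClassificationTask()
api.search(
    X_train=X,
    y_train=y,
    optimize_metric="custom_accuracy",  # hypothetical registered name
    total_walltime_limit=60,
)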

Motivation and Context

Fixes #477

How has this been tested?

I have added a test which checks that the given metric is available among the metrics for the current task type.
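
A sketch of what such a test might look like. get_metrics is the existing lookup helper in autoPyTorch/pipeline/components/training/metrics/utils.py, while add_metric is the function this PR introduces; its signature and the dataset_properties values below are assumptions:

import numpy as np

from autoPyTorch.pipeline.components.training.metrics.utils import (
    add_metric,
    get_metrics,
)


def test_custom_metric_is_available_for_task():
    def score_function(y_test, y_pred):
        return float(np.mean(y_test == y_pred))

    # Register the custom metric (hypothetical signature).
    add_metric(metric=score_function, name="custom_accuracy")

    # Hypothetical dataset properties identifying the current task type.
    dataset_properties = {
        "task_type": "tabular_classification",
        "output_type": "multiclass",
    }
    metrics = get_metrics(dataset_properties, names=["custom_accuracy"])
    assert any(metric.name == "custom_accuracy" for metric in metrics)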

@codecov codecov bot commented Dec 19, 2022

Codecov Report

Base: 85.28% // Head: 28.03% // Decreases project coverage by 57.25% ⚠️

Coverage data is based on head (1b1f07b) compared to base (17cdc09).
Patch coverage: 40.00% of modified lines in this pull request are covered.

Additional details and impacted files
@@               Coverage Diff                @@
##           development     #486       +/-   ##
================================================
- Coverage        85.28%   28.03%   -57.26%     
================================================
  Files              232      232               
  Lines            16456    16467       +11     
  Branches          3048     2733      -315     
================================================
- Hits             14034     4616     -9418     
- Misses            1573    11849    +10276     
+ Partials           849        2      -847     
Impacted Files Coverage Δ
...tup/traditional_ml/traditional_learner/learners.py 33.55% <0.00%> (-47.66%) ⬇️
...orch/pipeline/components/training/metrics/utils.py 11.11% <12.50%> (-76.89%) ⬇️
...ch/pipeline/components/training/metrics/metrics.py 76.47% <100.00%> (-12.77%) ⬇️
...tup/network_backbone/forecasting_backbone/cells.py 8.56% <0.00%> (-83.80%) ⬇️
...cessing/time_series_preprocessing/scaling/utils.py 8.91% <0.00%> (-83.17%) ⬇️
...mponents/setup/forecasting_target_scaling/utils.py 8.42% <0.00%> (-83.16%) ⬇️
...omponents/setup/network_backbone/ResNetBackbone.py 19.62% <0.00%> (-80.38%) ⬇️
...mponents/setup/network/forecasting_architecture.py 10.08% <0.00%> (-79.47%) ⬇️
...twork_head/forecasting_network_head/NBEATS_head.py 18.84% <0.00%> (-78.27%) ⬇️
autoPyTorch/datasets/time_series_dataset.py 12.89% <0.00%> (-77.74%) ⬇️
... and 209 more


###############################################################################
# Define and add custom score function
# ====================================
def score_function(y_test, y_pred):
@dengdifan (Contributor) commented Dec 19, 2022

For time series tasks, additional kwargs must be attached to the score function; see, for example:
https://github.com/sktime/sktime/blob/main/sktime/performance_metrics/forecasting/_functions.py#L179

Should we create a new example for this, or should we add another docstring here?
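
For illustration, a forecasting score function with such an extra kwarg could look like this (a sketch modelled on sktime's mean_absolute_scaled_error, not this PR's code):

import numpy as np


def mase_score(y_test, y_pred, y_train=None, sp=1):
    # Mean absolute scaled error: the forecast error is scaled by the
    # in-sample error of a seasonal naive forecast, which is why the
    # extra y_train kwarg is required.
    naive_mae = np.mean(np.abs(y_train[sp:] - y_train[:-sp]))
    return float(np.mean(np.abs(y_test - y_pred)) / naive_mae)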

@ravinkohli (Contributor, Author) replied:

I'll add it to the same example. We can divide the example into two parts: one for tabular tasks and one for time series tasks.
