-
Replies: 5 comments 3 replies
-
There is SMAPE, which is fairly similar. It is also a percentage accuracy metric but uses a slight adjustment to reduce the impact of outlier skew a bit. I'll likely be adding MASE in the next release, which can also serve a very similar purpose. Is there some particular advantage WAPE has over SMAPE for you?
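For readers comparing the two, here is a minimal sketch of the standard textbook definitions of WAPE and SMAPE (these are the common formulas, not necessarily AutoTS's exact implementation): WAPE scales total error by total actuals, while SMAPE scales each point's error by the average magnitude of actual and forecast at that point.

```python
import numpy as np

def wape(actual, forecast):
    # weighted absolute percentage error: total abs error / total abs actuals
    return np.sum(np.abs(actual - forecast)) / np.sum(np.abs(actual))

def smape(actual, forecast):
    # symmetric MAPE: per-point scaling by the mean magnitude of actual and forecast
    denom = (np.abs(actual) + np.abs(forecast)) / 2
    return np.mean(np.abs(actual - forecast) / denom)

actual = np.array([100.0, 10.0, 50.0])
forecast = np.array([110.0, 20.0, 45.0])
print(wape(actual, forecast))   # 0.15625
print(smape(actual, forecast))  # ≈ 0.289 -- the small-actual day dominates
```

Note how the day with a small actual (10 vs 20) inflates SMAPE far more than WAPE, which is the outlier-skew behavior being discussed.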
-
Thanks for your response, pretty interesting. I will have a look at the metrics you suggested.
BTW, I am working on a staffing use case. The client is used to the WAPE metric and uses it as a reference to compare results. Indeed, it is not the most appropriate.
However, there is still some flexibility to carry over the remaining workload (which is forecasted) from one day to the next.
I may try to explain to them that this is not the most appropriate metric ...
Kind regards
On Thu, Mar 28, 2024 at 15:07, Colin Catlin ***@***.***> wrote:
Fair enough, we have two cases:
1. Where we want the scaling to be specific to each individual data point, because we care about error in the perspective of the day it occurs (staffing might be a use case here, because a day with a small staff can be more easily strained by a small shortfall than a day with a larger staff, which would have more flexibility from an absolute perspective).
2. Where we want the scaling to be specific to the overall series (a common example here is inventory, where, since products are bought for the longer term, we care more about the total than about a few units on one day, even if that is a big percentage error for that day).
So you are right that SMAPE as written is more for Case 1 above.
Several metrics here are already scaled as per Case 2:
- uwmse
- wasserstein
- dwd
- dwae
- made
- matse
- spl
All currently use the same scaler except matse, which is slightly different.
I would recommend uwmse (a weighted and scaled version of mean squared error), wasserstein (a useful distance metric where some flexibility across time is present), and spl (especially for probabilistic forecasting) as the best metrics to start with.
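To make the two scaling cases concrete, here is a small numeric sketch (plain NumPy, not AutoTS internals). Case 1 scales each day's error by that day's actual; Case 2 scales the total error by the series total:

```python
import numpy as np

actual = np.array([10.0, 200.0])    # one small-staff day, one large-staff day
forecast = np.array([15.0, 205.0])  # both days are off by 5 in absolute terms

# Case 1: per-point scaling -- the small day looks twenty times worse
per_point = np.abs(actual - forecast) / actual
print(per_point)  # [0.5   0.025]

# Case 2: series-level scaling -- both errors weigh equally in the total
series_scaled = np.sum(np.abs(actual - forecast)) / np.sum(actual)
print(series_scaled)  # 10 / 210 ≈ 0.0476
```

The same absolute miss of 5 is severe for the small-staff day under Case 1, but contributes exactly as much as the large day's miss under Case 2.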
-
Thanks, I will try it.
Kind regards
On Thu, Mar 28, 2024 at 15:43, Colin Catlin ***@***.***> wrote:
You can perform model selection with one of the built-in metrics, then post-hoc calculate the metrics accordingly:

```python
model = AutoTS(...).fit(...)
# by default this reruns the chosen model on the used validation holdouts,
# but has options for other models by ID
model.plot_validations()
# or use the hidden function model._validation_forecasts() directly
# you can access the actual data from the validations now in the dictionary:
model.validation_forecasts
# ... now compare against actual history with your own metric function
```

It's not the best documented because it's fairly new and mostly an internal-use function, but it should allow you to see what the accuracy on the given holdouts would have been for the chosen model, using your own metric.
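As a sketch of that last post-hoc comparison step: the exact structure of `model.validation_forecasts` isn't documented in this thread beyond it being a dictionary, so the array extraction below is hypothetical; the custom metric function itself (WAPE, since that is the metric in question) is standard.

```python
import numpy as np

def wape(actual, forecast):
    # weighted absolute percentage error over the whole holdout
    return np.sum(np.abs(actual - forecast)) / np.sum(np.abs(actual))

# hypothetical: suppose we pulled aligned actual/forecast arrays for one
# validation holdout out of model.validation_forecasts and the history
actual = np.array([120.0, 95.0, 130.0, 80.0])
forecast = np.array([110.0, 100.0, 140.0, 70.0])
print(f"holdout WAPE: {wape(actual, forecast):.4f}")
```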
-
I realized that optimizing WAPE is equivalent to optimizing Mean Absolute Error, as the denominators of these two metrics are constant and their numerators are the same. EDIT: this is true in the case of a single time series (see answer below).
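This equivalence is easy to check numerically for a single series: WAPE = Σ|e| / Σ|y| = MAE / mean(y), so the two differ only by a constant factor of the actuals, and any forecast minimizing one minimizes the other. A quick sketch:

```python
import numpy as np

actual = np.array([50.0, 60.0, 40.0, 70.0])
forecast = np.array([55.0, 58.0, 45.0, 65.0])

errors = np.abs(actual - forecast)
mae = errors.mean()
wape = errors.sum() / actual.sum()

# WAPE is just MAE rescaled by a constant (the mean of the actuals),
# so for a single series minimizing one minimizes the other
assert np.isclose(wape, mae / actual.mean())
print(mae, wape)  # 4.25, ≈ 0.0773
```

With multiple series of different scales the denominators are no longer a single shared constant, which is why the equivalence breaks down in that case.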