Replies: 10 comments
-
Hi @mubali101 !
-
Hi! I'm one of the co-authors of this method. @aulemahal already did a good job of answering your questions. As he mentioned, the version of the code here in […]. A few relevant details:
EDIT: Another potential issue is that a multiplicative adjustment has been hard-coded, since the method was designed for precipitation. This is probably not adequate for temperatures (we usually use additive correction factors for those). Here too, you'll probably want to look at what the function is doing to make sure the results are realistic.
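To make the distinction concrete, here is a minimal NumPy sketch of the two kinds of correction factors. This is not the xclim internals, and the gamma-distributed sample data is purely illustrative:

```python
# Minimal sketch of multiplicative vs. additive quantile correction factors.
# Plain NumPy, not the actual xclim implementation; sample data is made up.
import numpy as np

rng = np.random.default_rng(0)
ref = rng.gamma(2.0, 3.0, size=10_000)   # stand-in for observed precipitation
hist = rng.gamma(2.0, 4.0, size=10_000)  # stand-in for the simulated series

q = np.linspace(0.01, 0.99, 99)
ref_q = np.quantile(ref, q)
hist_q = np.quantile(hist, q)

af_mult = ref_q / hist_q  # multiplicative: natural for precipitation (>= 0)
af_add = ref_q - hist_q   # additive: the usual choice for temperature

# A simulated value sitting at quantile i would then be corrected as
# x * af_mult[i] (multiplicative) or x + af_add[i] (additive).
```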
-
Hi @RondeauG @aulemahal, thanks for your explanations, really appreciate it.
-
Indeed, after many optimization attempts, the code is now a bit complex and hard to follow... In theory, we have a Gitter channel: https://app.gitter.im/#/room/#Ouranosinc_xclim:gitter.im. It's not very dynamic, but I think the core team will receive notifications.
-
Diving into the code a bit more, it is used once to subset data that is greater than the value (aka, remove drizzle with […]).
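For readers following along, that kind of threshold-based subsetting looks roughly like the sketch below; the variable names and the 1 mm/day cutoff are hypothetical, not taken from the xclim source:

```python
# Sketch of threshold-based subsetting ("drizzle" removal); names and the
# 1 mm/day cutoff are hypothetical, not taken from the xclim source.
import numpy as np
import xarray as xr

pr = xr.DataArray(
    np.random.default_rng(0).gamma(0.5, 2.0, size=365),
    dims="time",
    name="pr",  # daily precipitation, mm/day
)

thresh = 1.0  # hypothetical drizzle threshold, mm/day
wet_days = pr.where(pr > thresh, drop=True)  # keep only values above the threshold
```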
-
@RondeauG @aulemahal an update on this:
-
Hi! I can't really comment on […]
-
I didn't know about […]. I think it would mainly consist of wrapping the numpy functions of […].
-
@aulemahal @RondeauG many thanks for your replies. I tested […]. On […]:

```python
# using 10K quantiles
QM = sdba.EmpiricalQuantileMapping.train(
    ref=era5_data, hist=model_data, nquantiles=10000, group="time", kind="+"
)
# interp="cubic" doesn't differ for tails
qm_model_adjusted = QM.adjust(model_data, extrapolation="constant", interp="linear")
```

But it is not clear to me how exactly it calibrates the values that are more extreme than the reference data. Probably similar to this issue. Does […]?

I have added a figure: an exceedance probability plot of the calibrated data with different `nquantiles` values (100, 10K, 500K), with more extreme values on the lower left; the orange curve is the reference data (ERA-5). It shows that tail calibration is quite decent as long as `nquantiles` is not too high, else the extremes start clipping.
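In case it helps others reading this: one way to see what happens in the tails is to plot the trained adjustment factors directly. If I read the sdba docs correctly, the trained dataset is exposed on the adjustment object as `QM.ds` (with an `af` variable), so a sketch like this should work:

```python
# Sketch: inspect the trained adjustment factors; assumes the trained dataset
# is exposed as QM.ds with an "af" variable, as described in the sdba docs.
import matplotlib.pyplot as plt

QM.ds.af.plot(marker=".")  # one factor per trained quantile
plt.title("Adjustment factors per quantile")
plt.show()

# At very high nquantiles the first/last factors get noisy, and those are the
# ones reused verbatim under extrapolation="constant".
```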
-
Do you have access to pandas >= 2? With that version you may be able to have datetimes exceeding the usual 1677-2262 limitations. I don't recall the actual conversion method, but it has to do with the "unit" property of the Timestamp object. When set to 's', the time is stored as an int64 of seconds elapsed since 1970-01-01, which allows years after 2262. Xarray might not support having a datetime axis of that type, but a DataFrame should be usable. It still doesn't support non-standard calendars, though.

Indeed, the documentation needs an update. The extrapolation was moved to the factor interpolation function itself. In your case (no special grouping), that's here: lines 323 to 327 at commit 473a3e0. With "constant", it will simply apply the nearest (last or first) adjustment factor.

With this in mind, I am not surprised that you get stranger results as you increase the number of quantiles. When there are "too many" quantiles, the factors are no longer a smooth-ish function; they get very noisy. Then the last factor can be quite unstable.
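A toy NumPy illustration of that "constant" behaviour (not the xclim code path; `np.interp` happens to clamp to its endpoint values in exactly this way):

```python
# Toy illustration of "constant" extrapolation: points outside the trained
# range reuse the first/last factor, which is what np.interp does by default.
import numpy as np

hist_q = np.array([1.0, 2.0, 3.0])   # toy historical quantiles
af = np.array([0.5, 0.0, 1.5])       # toy adjustment factors

x = np.array([0.2, 2.5, 9.9])        # 0.2 and 9.9 lie outside the trained range
print(np.interp(x, hist_q, af))      # -> [0.5  0.75 1.5 ]
```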
-
Context

Edit: I'm trying to understand the usage of `xclim.sdba.adjustment.ExtremeValues` by using the `tasmax` from the tutorial dataset. Does `cluster_thresh` mean that the `q_thresh` percentile is calculated on the values exceeding `cluster_thresh`? And how do I set a different `cluster_thresh` for different locations? Thank you!
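For anyone landing here later, my (hedged) understanding of the call is sketched below; the threshold values are made up for the `tasmax` case, and the exact signature should be checked against the xclim version you run:

```python
# Hedged sketch of ExtremeValues usage; threshold values are made up and the
# signature should be double-checked against your xclim version.
from xclim.sdba.adjustment import ExtremeValues

EX = ExtremeValues.train(
    ref,                       # reference series (here, tasmax)
    hist,                      # simulated series over the same period
    cluster_thresh="30 degC",  # hypothetical threshold delimiting clusters
    q_thresh=0.95,             # quantile defining which values count as extreme
)

# scen: a series already adjusted by a first method (e.g. quantile mapping);
# the extremes adjustment is then blended into it.
scen_extremes = EX.adjust(scen, sim, frac=0.25)
```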