Exact inversion as described in section 3.2 of evensen2019 shows that, for a diagonal error covariance matrix, we only need to invert a matrix of dimension `ensemble_size` x `ensemble_size`.
The matrix is $C=S^TS + I_N$ and is assumed well-conditioned because ones are added to the diagonal of $S^TS$.
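A minimal sketch of this setup (shapes and names are assumptions, not ert's actual code): $C$ has dimension `ensemble_size` x `ensemble_size`, so the inversion cost is independent of the number of observations, and adding the identity bounds the smallest eigenvalue of $C$ from below by 1.

```python
import numpy as np

# Assumed setup: S is num_observations x ensemble_size (scaled anomalies).
rng = np.random.default_rng(42)
num_obs, ensemble_size = 100, 10
S = rng.standard_normal((num_obs, ensemble_size))

# C = S^T S + I_N is only ensemble_size x ensemble_size,
# regardless of how many observations we have.
C = S.T @ S + np.eye(ensemble_size)
C_inv = np.linalg.inv(C)

# Every eigenvalue of S^T S is >= 0, so adding the identity pushes the
# smallest eigenvalue of C to at least 1 -- the basis for the
# well-conditioned assumption.
assert np.linalg.eigvalsh(C).min() >= 0.99
```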
See https://github.com/equinor/iterative_ensemble_smoother/blob/main/tests/test_creation.py#L90 for an extreme example of when this is not true.
In this extreme example, the matrix is singular and inversion fails.
I suspect there are less extreme cases where $C$ is ill-conditioned and would benefit from a bit of truncation.
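As an illustrative sketch (toy data, not the linked test case): a single large outlier in the predicted responses inflates the largest singular value of $S$, and with it the condition number of $C = S^TS + I_N$, even though the smallest eigenvalue stays at or above 1.

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.standard_normal((100, 10))
S_outlier = S.copy()
S_outlier[0, 0] = 1e8  # one wildly mis-predicted response

# Adding the identity keeps C nonsingular in exact arithmetic, but the
# condition number still explodes with the squared outlier magnitude.
cond_clean = np.linalg.cond(S.T @ S + np.eye(10))
cond_outlier = np.linalg.cond(S_outlier.T @ S_outlier + np.eye(10))
```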
Appendix A ("Inversion and rescaling") of emerick2012 discusses how $C$ may be poorly scaled because it is constructed from data of different magnitudes, and introduces a scaling step before applying TSVD.
How much truncation should be applied? As little as possible, I think. emerick2012 retains 99.9%.
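A hedged sketch of what such a truncated-SVD inversion could look like (the function name and the energy criterion are assumptions; emerick2012's exact rescaling step is omitted): keep the leading singular values accounting for 99.9% of the total, and build the pseudo-inverse from those alone.

```python
import numpy as np

def tsvd_inv(C, energy=0.999):
    """Pseudo-inverse of symmetric C via truncated SVD.

    Keeps the smallest number of singular values whose cumulative sum
    reaches `energy` (99.9% by default) of the total.
    """
    U, s, Vt = np.linalg.svd(C, hermitian=True)
    keep = int(np.searchsorted(np.cumsum(s) / s.sum(), energy)) + 1
    # V diag(1/s) U^T restricted to the retained singular triplets.
    return (Vt[:keep].T / s[:keep]) @ U[:, :keep].T

rng = np.random.default_rng(0)
S = rng.standard_normal((50, 8))
C = S.T @ S + np.eye(8)
C_pinv = tsvd_inv(C)
```

For a well-conditioned $C$ like this one, no singular values are dropped and the result matches the ordinary inverse; truncation only activates when the trailing singular values carry a negligible fraction of the total.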
I see that in somewhat "extreme" cases caused by significant outliers in the ensemble prediction, we can have situations with poor conditioning of the C-matrix. Generally, these cases should be picked up and "eliminated" before analysis, but that is not always straightforward.
I admit that I should have used an SVD with truncation when implementing this method, as this would add a robustness layer to the algorithm. I already compute the inversion using an SVD, so adding a truncation at 99.9% of the variance will be simple to do, and it will prevent issues such as the one in the test problem above from crashing ert.
A printout of info to the log file whenever the truncation is active would be informative.
I agree that we need better outlier detection and have created an issue to address the problem.
But as you write, adding a bit of truncation increases robustness and is simple to implement, so we will do that.
As for the print-out, we have a milestone planned to provide users with much more information as to what happened during analysis and will definitely add information regarding truncation.
Thank you.
dafeda changed the title from "Consider using TSVD even for exact inversion" to "Use TSVD even for exact inversion" on May 7, 2023.