This repository contains a reproduction of the paper *Diagnosing Model Performance Under Distribution Shift*. Please note that this is not an official repository maintained by the original authors.
In this reproduction, we aim to recreate the experiments and results reported in the original paper, following its methodology as closely as possible and using similar datasets, parameters, and evaluation metrics. Because the data used in this code differs from the data used in the paper, the results may not be identical. As the official code has not been released yet, please report anything that looks incorrect.
The repository contains code for reproducing the experiments in the paper. All reproducible experiments are available in the form of Jupyter notebooks. Specifically, it includes:
- 4.1.1 Y | X shift: missing/unobserved covariates
- 4.1.2 X shift: selection bias in age
- Algorithm 1 (see main.py)
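To illustrate the kind of analysis the experiments above perform, here is a minimal, generic sketch of splitting a model's performance gap between a source and a target dataset into an X-shift (covariate) share and a Y | X-shift (conditional) share via importance weighting. This is not the paper's Algorithm 1; the synthetic data, the fixed linear predictor, and the density-ratio estimate via a domain classifier are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical synthetic setup: source and target differ both in the
# covariate distribution (X shift) and in the conditional Y | X.
n = 2000
x_s = rng.normal(0.0, 1.0, n)            # source covariates
x_t = rng.normal(0.5, 1.0, n)            # target covariates (mean-shifted)
y_s = 2.0 * x_s + rng.normal(0, 0.5, n)  # source conditional: E[Y|X] = 2x
y_t = 1.5 * x_t + rng.normal(0, 0.5, n)  # target conditional differs

predict = lambda x: 2.0 * x              # a fixed model "fit" on the source
loss_s = (y_s - predict(x_s)) ** 2       # per-example squared error
loss_t = (y_t - predict(x_t)) ** 2

# Estimate the density ratio w(x) = p_t(x) / p_s(x) with a domain
# classifier (probabilistic classification trick): the odds of the
# classifier approximate the density ratio at source points.
X = np.concatenate([x_s, x_t]).reshape(-1, 1)
d = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = source, 1 = target
clf = LogisticRegression().fit(X, d)
p = clf.predict_proba(x_s.reshape(-1, 1))[:, 1]
w = p / (1 - p)
w *= n / w.sum()                         # self-normalize so mean(w) = 1

# Two-term decomposition (exact by construction):
# total gap = (reweighted source loss - source loss)   [X shift]
#           + (target loss - reweighted source loss)   [Y|X shift]
total_gap = loss_t.mean() - loss_s.mean()
x_shift = (w * loss_s).mean() - loss_s.mean()
yx_shift = loss_t.mean() - (w * loss_s).mean()

print(f"gap {total_gap:.3f} = X shift {x_shift:.3f} + Y|X shift {yx_shift:.3f}")
```

The decomposition identity holds exactly for any weights; the quality of the attribution depends entirely on how well the density-ratio estimate captures the covariate shift.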
We welcome contributions to this repository, such as bug fixes or enhancements to the code. Please create a pull request with your changes, and we will review them as soon as possible.
If you have any questions or would like to use the code, please contact [email protected].
We would like to express our gratitude to the authors of the original research paper for their important contributions to the field.
The original research paper was authored by:
- Tiffany (Tianhui) Cai, Department of Statistics, Columbia University
- Hongseok Namkoong, Decision, Risk, and Operations Division, Columbia University
- Steve Yadlowsky, Brain Team, Google Research
There are similar studies on this topic in time-series (regression) tasks; if you're interested, please check them out:
- RevIN, ICLR 2022
- Dish-TS, AAAI 2023
- Stable learning, AAAI 2020
- TGN, ICLR 2023