diff --git a/README.md b/README.md
index 3342e5e..673c4ff 100644
--- a/README.md
+++ b/README.md
@@ -181,30 +181,31 @@ You can find all available model IDs in the table below (note that the full lead
| **1** | **Gowal2020Uncovering_70_16_extra** | *[Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples](https://arxiv.org/abs/2010.03593)* | 91.10% | 65.87% | WideResNet-70-16 | arXiv, Oct 2020 |
| **2** | **Gowal2020Uncovering_28_10_extra** | *[Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples](https://arxiv.org/abs/2010.03593)* | 89.48% | 62.76% | WideResNet-28-10 | arXiv, Oct 2020 |
| **3** | **Wu2020Adversarial_extra** | *[Adversarial Weight Perturbation Helps Robust Generalization](https://arxiv.org/abs/2004.05884)* | 88.25% | 60.04% | WideResNet-28-10 | NeurIPS 2020 |
-| **4** | **Carmon2019Unlabeled** | *[Unlabeled Data Improves Adversarial Robustness](https://arxiv.org/abs/1905.13736)* | 89.69% | 59.53% | WideResNet-28-10 | NeurIPS 2019 |
-| **5** | **Sehwag2021Proxy** | *[Improving Adversarial Robustness Using Proxy Distributions](https://arxiv.org/abs/2104.09425)* | 85.85% | 59.09% | WideResNet-34-10 | arXiv, Apr 2021 |
-| **6** | **Sehwag2020Hydra** | *[HYDRA: Pruning Adversarially Robust Neural Networks](https://arxiv.org/abs/2002.10509)* | 88.98% | 57.14% | WideResNet-28-10 | NeurIPS 2020 |
-| **7** | **Gowal2020Uncovering_70_16** | *[Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples](https://arxiv.org/abs/2010.03593)* | 85.29% | 57.14% | WideResNet-70-16 | arXiv, Oct 2020 |
-| **8** | **Gowal2020Uncovering_34_20** | *[Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples](https://arxiv.org/abs/2010.03593)* | 85.64% | 56.82% | WideResNet-34-20 | arXiv, Oct 2020 |
-| **9** | **Wang2020Improving** | *[Improving Adversarial Robustness Requires Revisiting Misclassified Examples](https://openreview.net/forum?id=rklOg6EFwS)* | 87.50% | 56.29% | WideResNet-28-10 | ICLR 2020 |
-| **10** | **Wu2020Adversarial** | *[Adversarial Weight Perturbation Helps Robust Generalization](https://arxiv.org/abs/2004.05884)* | 85.36% | 56.17% | WideResNet-34-10 | NeurIPS 2020 |
-| **11** | **Hendrycks2019Using** | *[Using Pre-Training Can Improve Model Robustness and Uncertainty](https://arxiv.org/abs/1901.09960)* | 87.11% | 54.92% | WideResNet-28-10 | ICML 2019 |
-| **12** | **Sehwag2021Proxy_R18** | *[Improving Adversarial Robustness Using Proxy Distributions](https://arxiv.org/abs/2104.09425)* | 84.38% | 54.43% | ResNet-18 | arXiv, Apr 2021 |
-| **13** | **Pang2020Boosting** | *[Boosting Adversarial Training with Hypersphere Embedding](https://arxiv.org/abs/2002.08619)* | 85.14% | 53.74% | WideResNet-34-20 | NeurIPS 2020 |
-| **14** | **Cui2020Learnable_34_20** | *[Learnable Boundary Guided Adversarial Training](https://arxiv.org/abs/2011.11164)* | 88.70% | 53.57% | WideResNet-34-20 | arXiv, Nov 2020 |
-| **15** | **Zhang2020Attacks** | *[Attacks Which Do Not Kill Training Make Adversarial Learning Stronger](https://arxiv.org/abs/2002.11242)* | 84.52% | 53.51% | WideResNet-34-10 | ICML 2020 |
-| **16** | **Rice2020Overfitting** | *[Overfitting in adversarially robust deep learning](https://arxiv.org/abs/2002.11569)* | 85.34% | 53.42% | WideResNet-34-20 | ICML 2020 |
-| **17** | **Huang2020Self** | *[Self-Adaptive Training: beyond Empirical Risk Minimization](https://arxiv.org/abs/2002.10319)* | 83.48% | 53.34% | WideResNet-34-10 | NeurIPS 2020 |
-| **18** | **Zhang2019Theoretically** | *[Theoretically Principled Trade-off between Robustness and Accuracy](https://arxiv.org/abs/1901.08573)* | 84.92% | 53.08% | WideResNet-34-10 | ICML 2019 |
-| **19** | **Cui2020Learnable_34_10** | *[Learnable Boundary Guided Adversarial Training](https://arxiv.org/abs/2011.11164)* | 88.22% | 52.86% | WideResNet-34-10 | arXiv, Nov 2020 |
-| **20** | **Chen2020Adversarial** | *[Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning](https://arxiv.org/abs/2003.12862)* | 86.04% | 51.56% | ResNet-50 <br/> (3x ensemble) | CVPR 2020 |
-| **21** | **Chen2020Efficient** | *[Efficient Robust Training via Backward Smoothing](https://arxiv.org/abs/2010.01278)* | 85.32% | 51.12% | WideResNet-34-10 | arXiv, Oct 2020 |
-| **22** | **Sitawarin2020Improving** | *[Improving Adversarial Robustness Through Progressive Hardening](https://arxiv.org/abs/2003.09347)* | 86.84% | 50.72% | WideResNet-34-10 | arXiv, Mar 2020 |
-| **23** | **Engstrom2019Robustness** | *[Robustness library](https://github.com/MadryLab/robustness)* | 87.03% | 49.25% | ResNet-50 | GitHub,<br/> Oct 2019 |
-| **24** | **Zhang2019You** | *[You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle](https://arxiv.org/abs/1905.00877)* | 87.20% | 44.83% | WideResNet-34-10 | NeurIPS 2019 |
-| **25** | **Wong2020Fast** | *[Fast is better than free: Revisiting adversarial training](https://arxiv.org/abs/2001.03994)* | 83.34% | 43.21% | ResNet-18 | ICLR 2020 |
-| **26** | **Ding2020MMA** | *[MMA Training: Direct Input Space Margin Maximization through Adversarial Training](https://openreview.net/forum?id=HkeryxBtPB)* | 84.36% | 41.44% | WideResNet-28-4 | ICLR 2020 |
-| **27** | **Standard** | *[Standardly trained model](https://github.com/RobustBench/robustbench/)* | 94.78% | 0.00% | WideResNet-28-10 | N/A |
+| **4** | **Zhang2020Geometry** | *[Geometry-aware Instance-reweighted Adversarial Training](https://arxiv.org/abs/2010.01736)* | 89.36% | 59.64% | WideResNet-28-10 | ICLR 2021 |
+| **5** | **Carmon2019Unlabeled** | *[Unlabeled Data Improves Adversarial Robustness](https://arxiv.org/abs/1905.13736)* | 89.69% | 59.53% | WideResNet-28-10 | NeurIPS 2019 |
+| **6** | **Sehwag2021Proxy** | *[Improving Adversarial Robustness Using Proxy Distributions](https://arxiv.org/abs/2104.09425)* | 85.85% | 59.09% | WideResNet-34-10 | arXiv, Apr 2021 |
+| **7** | **Sehwag2020Hydra** | *[HYDRA: Pruning Adversarially Robust Neural Networks](https://arxiv.org/abs/2002.10509)* | 88.98% | 57.14% | WideResNet-28-10 | NeurIPS 2020 |
+| **8** | **Gowal2020Uncovering_70_16** | *[Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples](https://arxiv.org/abs/2010.03593)* | 85.29% | 57.14% | WideResNet-70-16 | arXiv, Oct 2020 |
+| **9** | **Gowal2020Uncovering_34_20** | *[Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples](https://arxiv.org/abs/2010.03593)* | 85.64% | 56.82% | WideResNet-34-20 | arXiv, Oct 2020 |
+| **10** | **Wang2020Improving** | *[Improving Adversarial Robustness Requires Revisiting Misclassified Examples](https://openreview.net/forum?id=rklOg6EFwS)* | 87.50% | 56.29% | WideResNet-28-10 | ICLR 2020 |
+| **11** | **Wu2020Adversarial** | *[Adversarial Weight Perturbation Helps Robust Generalization](https://arxiv.org/abs/2004.05884)* | 85.36% | 56.17% | WideResNet-34-10 | NeurIPS 2020 |
+| **12** | **Hendrycks2019Using** | *[Using Pre-Training Can Improve Model Robustness and Uncertainty](https://arxiv.org/abs/1901.09960)* | 87.11% | 54.92% | WideResNet-28-10 | ICML 2019 |
+| **13** | **Sehwag2021Proxy_R18** | *[Improving Adversarial Robustness Using Proxy Distributions](https://arxiv.org/abs/2104.09425)* | 84.38% | 54.43% | ResNet-18 | arXiv, Apr 2021 |
+| **14** | **Pang2020Boosting** | *[Boosting Adversarial Training with Hypersphere Embedding](https://arxiv.org/abs/2002.08619)* | 85.14% | 53.74% | WideResNet-34-20 | NeurIPS 2020 |
+| **15** | **Cui2020Learnable_34_20** | *[Learnable Boundary Guided Adversarial Training](https://arxiv.org/abs/2011.11164)* | 88.70% | 53.57% | WideResNet-34-20 | arXiv, Nov 2020 |
+| **16** | **Zhang2020Attacks** | *[Attacks Which Do Not Kill Training Make Adversarial Learning Stronger](https://arxiv.org/abs/2002.11242)* | 84.52% | 53.51% | WideResNet-34-10 | ICML 2020 |
+| **17** | **Rice2020Overfitting** | *[Overfitting in adversarially robust deep learning](https://arxiv.org/abs/2002.11569)* | 85.34% | 53.42% | WideResNet-34-20 | ICML 2020 |
+| **18** | **Huang2020Self** | *[Self-Adaptive Training: beyond Empirical Risk Minimization](https://arxiv.org/abs/2002.10319)* | 83.48% | 53.34% | WideResNet-34-10 | NeurIPS 2020 |
+| **19** | **Zhang2019Theoretically** | *[Theoretically Principled Trade-off between Robustness and Accuracy](https://arxiv.org/abs/1901.08573)* | 84.92% | 53.08% | WideResNet-34-10 | ICML 2019 |
+| **20** | **Cui2020Learnable_34_10** | *[Learnable Boundary Guided Adversarial Training](https://arxiv.org/abs/2011.11164)* | 88.22% | 52.86% | WideResNet-34-10 | arXiv, Nov 2020 |
+| **21** | **Chen2020Adversarial** | *[Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning](https://arxiv.org/abs/2003.12862)* | 86.04% | 51.56% | ResNet-50 <br/> (3x ensemble) | CVPR 2020 |
+| **22** | **Chen2020Efficient** | *[Efficient Robust Training via Backward Smoothing](https://arxiv.org/abs/2010.01278)* | 85.32% | 51.12% | WideResNet-34-10 | arXiv, Oct 2020 |
+| **23** | **Sitawarin2020Improving** | *[Improving Adversarial Robustness Through Progressive Hardening](https://arxiv.org/abs/2003.09347)* | 86.84% | 50.72% | WideResNet-34-10 | arXiv, Mar 2020 |
+| **24** | **Engstrom2019Robustness** | *[Robustness library](https://github.com/MadryLab/robustness)* | 87.03% | 49.25% | ResNet-50 | GitHub,<br/> Oct 2019 |
+| **25** | **Zhang2019You** | *[You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle](https://arxiv.org/abs/1905.00877)* | 87.20% | 44.83% | WideResNet-34-10 | NeurIPS 2019 |
+| **26** | **Wong2020Fast** | *[Fast is better than free: Revisiting adversarial training](https://arxiv.org/abs/2001.03994)* | 83.34% | 43.21% | ResNet-18 | ICLR 2020 |
+| **27** | **Ding2020MMA** | *[MMA Training: Direct Input Space Margin Maximization through Adversarial Training](https://openreview.net/forum?id=HkeryxBtPB)* | 84.36% | 41.44% | WideResNet-28-4 | ICLR 2020 |
+| **28** | **Standard** | *[Standardly trained model](https://github.com/RobustBench/robustbench/)* | 94.78% | 0.00% | WideResNet-28-10 | N/A |
#### L2
diff --git a/model_info/cifar10/Linf/Zhang2020Geometry.json b/model_info/cifar10/Linf/Zhang2020Geometry.json
new file mode 100644
index 0000000..d0171bf
--- /dev/null
+++ b/model_info/cifar10/Linf/Zhang2020Geometry.json
@@ -0,0 +1,15 @@
+{
+ "link": "https://arxiv.org/abs/2010.01736",
+ "name": "Geometry-aware Instance-reweighted Adversarial Training",
+ "authors": "Jingfeng Zhang, Jianing Zhu, Gang Niu, Bo Han, Masashi Sugiyama, Mohan Kankanhalli",
+ "additional_data": true,
+ "number_forward_passes": 1,
+ "dataset": "cifar10",
+ "venue": "ICLR 2021",
+ "architecture": "WideResNet-28-10",
+ "eps": "8/255",
+ "clean_acc": "89.36",
+ "reported": "59.64",
+ "footnote": "Uses \\(\\ell_{\\infty} \\) = 0.031 \u2248 7.9/255 instead of 8/255.",
+ "autoattack_acc": "59.64"
+}
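
The model card above follows the same schema as the other JSON entries under `model_info/`. As a sanity check, a minimal sketch for reading the card is below; the file path and key names are taken directly from this diff, while the validation logic itself is only illustrative and not part of RobustBench:

```python
import json

# Path introduced by this diff (assumes it is run from the repository root).
CARD = "model_info/cifar10/Linf/Zhang2020Geometry.json"

with open(CARD) as f:
    card = json.load(f)

# Keys present in the card above; anything missing would indicate a malformed entry.
expected = {
    "link", "name", "authors", "additional_data", "number_forward_passes",
    "dataset", "venue", "architecture", "eps", "clean_acc", "reported",
    "autoattack_acc", "footnote",
}
missing = expected - set(card)
assert not missing, f"missing keys: {missing}"

# The footnote records that the checkpoint was trained with eps = 0.031,
# slightly below the benchmark's 8/255 threat model:
print(0.031 * 255)  # about 7.9, i.e. 0.031 is roughly 7.9/255 < 8/255
```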
diff --git a/robustbench/model_zoo/cifar10.py b/robustbench/model_zoo/cifar10.py
index 715e78e..b95b158 100644
--- a/robustbench/model_zoo/cifar10.py
+++ b/robustbench/model_zoo/cifar10.py
@@ -390,7 +390,11 @@ def forward(self, x):
('Cui2020Learnable_34_10', {
'model': lambda: WideResNet(depth=34, widen_factor=10, sub_block1=True),
'gdrive_id': '16s9pi_1QgMbFLISVvaVUiNfCzah6g2YV'
- })
+ }),
+ ('Zhang2020Geometry', {
+ 'model': lambda: WideResNet(depth=28, widen_factor=10, sub_block1=True),
+ 'gdrive_id': '1UoG1JhbAps1MdMc6PEFiZ2yVXl_Ii5Jk'
+ }),
])
l2 = OrderedDict([
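
With the entry above registered in the `linf` OrderedDict, the checkpoint becomes reachable through the standard RobustBench loader. A minimal usage sketch, assuming the `load_model` helper from `robustbench.utils`: the keyword naming the threat model is `norm` in releases contemporary with this change and `threat_model` in later ones, so adjust to the installed version. The random batch below is only a shape-level smoke test, not a robustness evaluation.

```python
import torch
from robustbench.utils import load_model

# Downloads the checkpoint referenced by the gdrive_id above on first use.
model = load_model(model_name="Zhang2020Geometry",
                   dataset="cifar10",
                   norm="Linf")  # or threat_model="Linf" on newer robustbench releases
model.eval()

# Shape-level smoke test on a random CIFAR-10-sized batch (values in [0, 1]).
x = torch.rand(4, 3, 32, 32)
with torch.no_grad():
    logits = model(x)
print(logits.shape)  # expected: torch.Size([4, 10])
```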