@@ -13,8 +13,7 @@ This repository is the official implementation of [Learning Diverse and Discrimi
- Augmentations is used in unsupervised and contrastive setting. Check [`augmentloader.py`](./augmentloader.py) for implementation details.
### Supervised Setting
- #### Training Options
- - Supervised Setting
+
```
usage: train_sup.py [-h] [--arch ARCH] [--fd FD] [--data DATA] [--epo EPO]
                    [--bs BS] [--lr LR] [--mom MOM] [--wd WD] [--gam1 GAM1]
@@ -61,7 +60,7 @@ $ python3 train_sup.py --arch resnet18stlsmall2 --data stl10_sup --fd 128 --epo
```
### Self-supervised Setting
- #### Training Options
+
```
usage: train_selfsup.py [-h] [--arch ARCH] [--fd FD] [--data DATA] [--epo EPO]
                        [--bs BS] [--aug AUG] [--lr LR] [--mom MOM] [--wd WD]
@@ -109,7 +108,9 @@ $ python3 train_selfsup.py --arch resnet18stlsmall --data stl10 --fd 128 --epo 5
## Evaluation
Testing methods available are: `svm`, `knn`, `nearsub`, `kmeans`, `ensc`. Each method also has options for testing hyperparameters, such as `--k` for the top `k` components in kNN. Methods can also be chained, and a checkpoint can be specified with the `--epoch` option. Please refer to [`evaluate.py`](./evaluate.py) and [`cluster.py`](./cluster.py) for more implementation details.
+
- Command Options
+
```
usage: evaluate.py [-h] [--model_dir MODEL_DIR] [--svm] [--knn] [--nearsub]
                   [--kmeans] [--ensc] [--epoch EPOCH] [--k K] [--n N]
@@ -134,18 +135,34 @@ optional arguments:
  --data_dir DATA_DIR   path to dataset
```
- An example for evaluation:
+
```
$ python3 evaluate.py --knn --nearsub --k 10 --model_dir saved_models/sup_resnet18+128_cifar10_epo500_bs1000_lr0.001_mom0.9_wd0.0005_gam11.0_gam210.0_eps0.5_lcr0
```
This runs kNN with the top 10 components and nearest subspace on the latest checkpoint in `model_dir`.
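Because the method flags chain, the same chained call can be repeated over several checkpoints of one model. A minimal shell sketch, assuming checkpoints exist at the chosen epochs (the epoch values 100/300/500 are illustrative, not from the repository); `echo` makes this a dry run that only prints the commands, so delete it to actually evaluate:

```shell
# Print the chained kNN + nearest-subspace evaluation for several checkpoints.
# `echo` is a dry-run guard: remove it to launch the evaluations for real.
MODEL_DIR=saved_models/sup_resnet18+128_cifar10_epo500_bs1000_lr0.001_mom0.9_wd0.0005_gam11.0_gam210.0_eps0.5_lcr0
for epoch in 100 300 500; do
    echo python3 evaluate.py --knn --nearsub --k 10 --epoch "$epoch" --model_dir "$MODEL_DIR"
done
```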
- ## Pretrain Models
+ ## Reproducing Results in the Paper
+
+ ### Commands for Supervised Learning Setting
+
+ ```
+ $ python3 train_sup.py --arch resnet18 --data cifar10 --fd 128 --epo 500 --bs 1000 --eps 0.5 --gam1 1 --gam2 1 --lr 0.01 --lcr 0.0
+ $ python3 train_sup.py --arch resnet18 --data cifar10 --fd 128 --epo 500 --bs 1000 --eps 0.5 --gam1 1 --gam2 1 --lr 0.01 --lcr 0.1
+ $ python3 train_sup.py --arch resnet18 --data cifar10 --fd 128 --epo 500 --bs 1000 --eps 0.5 --gam1 1 --gam2 1 --lr 0.01 --lcr 0.2
+ $ python3 train_sup.py --arch resnet18 --data cifar10 --fd 128 --epo 500 --bs 1000 --eps 0.5 --gam1 1 --gam2 1 --lr 0.01 --lcr 0.3
+ $ python3 train_sup.py --arch resnet18 --data cifar10 --fd 128 --epo 500 --bs 1000 --eps 0.5 --gam1 1 --gam2 1 --lr 0.01 --lcr 0.4
+ $ python3 train_sup.py --arch resnet18 --data cifar10 --fd 128 --epo 500 --bs 1000 --eps 0.5 --gam1 1 --gam2 1 --lr 0.01 --lcr 0.5
+ ```
+
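The six supervised runs above differ only in their `--lcr` value, so they can be driven by one small loop. A minimal shell sketch of the sweep; `echo` prints each command instead of launching it, so remove `echo` to actually train:

```shell
# Sweep --lcr from 0.0 to 0.5, all other hyperparameters fixed
# exactly as in the six commands above.
# `echo` makes this a dry run: delete it to launch the training jobs.
for lcr in 0.0 0.1 0.2 0.3 0.4 0.5; do
    echo python3 train_sup.py --arch resnet18 --data cifar10 --fd 128 --epo 500 \
        --bs 1000 --eps 0.5 --gam1 1 --gam2 1 --lr 0.01 --lcr "$lcr"
done
```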
+ ### Commands for Self-supervised Learning Setting
+
+ ```
+ $ python3 train_selfsup.py --arch resnet18selfsup --data cifar10 --fd 128 --epo 100 --bs 1000 --eps 0.5 --gam1 20 --gam2 0.05 --lr 0.1 --aug 50 --transform cifar
+ $ python3 train_selfsup.py --arch resnet18selfsup --data cifar100 --fd 128 --epo 100 --bs 1000 --eps 0.5 --gam1 20 --gam2 0.05 --lr 0.1 --aug 50 --transform cifar
+ $ python3 train_selfsup.py --arch resnet18stl --data stl10 --fd 128 --epo 100 --bs 1000 --eps 0.5 --gam1 20 --gam2 0.05 --lr 0.1 --aug 50 --transform stl10
+ ```
## License and Contributing
- This README is formatted based on [paperswithcode](https://github.com/paperswithcode/releasing-research-code).
- Feel free to post issues via GitHub.
-
-