Install the requirements. On Google Colab, only `jenkspy` and `unidip` need to be installed.

```
pip install -r requirements.txt
```
Run the download script to obtain the dataset used for the 18 classes we trained on.

```
sh download.sh
```
To train the ResNet, run the following command.

```
python train_resnet.py --lr 0.0001 --epochs 50 --batch 1 --name default-experiment --momentum 0.9 --weightdecay 5e-4 --incrlr False --upperlr 0.01
```
The flags correspond to the following:
- `lr` = learning rate
- `epochs` = number of epochs
- `batch` = batch size (warning: RAM usage quickly increases as batch size increases)
- `name` = what the model `.pth` file will be named
- `momentum` = gradient descent momentum
- `weightdecay` = gradient descent weight decay
- `incrlr` = if included, we increment the learning rate up to the upper limit `upperlr`; if omitted, we do not increment the learning rate (a sketch of one plausible schedule follows this list)
- `upperlr` = the upper limit learning rate we iteratively sum to
A similar command exists for training a population of WANNs with the NEAT algorithm.

```
python train_wann.py --epochs 50 --batch 24 --name default-experiment
```
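For background, the defining property of a WANN (weight agnostic neural network) is that all connections share a single weight value, and a topology's fitness is averaged over several shared weights, so NEAT's evolutionary search rewards architecture rather than weight tuning. Below is a minimal sketch of that evaluation idea; `forward` is a hypothetical placeholder, not a function from this repository:

```python
import numpy as np

# Candidate shared weights; WANN fitness averages over values like these.
SHARED_WEIGHTS = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]

def wann_fitness(genome, inputs, targets, forward):
    """Average classification accuracy of a topology over shared weights.

    forward(genome, weight, inputs) is a hypothetical stand-in for running
    the evolved topology with every connection set to the same weight.
    """
    scores = []
    for w in SHARED_WEIGHTS:
        preds = forward(genome, w, inputs)  # (n_samples, n_classes)
        scores.append(np.mean(np.argmax(preds, axis=1) == targets))
    return float(np.mean(scores))
```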
Model files are generated for both ResNet and WANN training. For an experiment named `experiment-name`, ResNet training produces the model files `experiment-name_detector.pth` and `experiment-name_best_detector.pth`; WANN training generates `experiment-name.json`.
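If you want to reuse a checkpoint outside the provided scripts, here is a minimal loading sketch. It assumes the `.pth` file is a plain PyTorch state dict for a torchvision `resnet18` with 18 output classes; verify both assumptions against `train_resnet.py` before relying on it:

```python
import torch
from torchvision.models import resnet18

# Assumption: the checkpoint is a state dict for a resnet18 whose final
# layer was resized to the 18 trained classes. Adjust to match train_resnet.py.
model = resnet18(num_classes=18)
state = torch.load("default-experiment_best_detector.pth", map_location="cpu")
model.load_state_dict(state)
model.eval()  # inference mode for evaluation
```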
The commands for testing are similar to those for training. To measure the accuracy of the ResNet and the WANN, run `test_resnet.py` and `test_wann.py` with Python.
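For example (assuming the test scripts accept the same `--name` flag as their training counterparts, which their `--help` output should confirm):

```
python test_resnet.py --name default-experiment
python test_wann.py --name default-experiment
```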
General sources:
Specific sources:
- Data
  - Google Quick, Draw!
  - TU-Berlin was not used due to the existence of ambiguous categories; the pruned dataset by Schneider was apparently never released.
- numpy NEAT algorithm - ?
- Coding: