Releases: 1e100/mnasnet_trainer
Update MNASNet 0.5 checkpoint
Same checkpoint as in 0.2, re-saved with the new backward compatibility data. Refer to pytorch/vision#1224 for details.
Bugfixes and new pretrained MNASNet 0.5
The previous release contained a bug: the first few layers were not scaled in accordance with the width multiplier. This release fixes that.
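For context, width-multiplier scaling means every layer's channel count is multiplied by the chosen factor (e.g. 0.5) and rounded to a hardware-friendly multiple, typically 8. The helper below is a minimal sketch of that convention; the exact rounding rule and function name are illustrative assumptions, not torchvision's verbatim code.

```python
def scale_channels(channels: int, width_mult: float, divisor: int = 8) -> int:
    """Scale a channel count by the width multiplier, rounding to the
    nearest multiple of `divisor` (illustrative sketch, not the exact
    torchvision internals)."""
    scaled = channels * width_mult
    rounded = max(divisor, int(scaled + divisor / 2) // divisor * divisor)
    # Common convention: never round down by more than 10%.
    if rounded < 0.9 * scaled:
        rounded += divisor
    return rounded
```

Under this rule the stem's 32 channels become 16 at a 0.5 multiplier; the earlier buggy behavior left the initial layers at their full-width sizes regardless of the multiplier.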
There is also a discrepancy between two of Google's own implementations: the one shipped with TFLite and the TPU reference implementation. The former has no ReLU after the first batch norm; the latter does.
Because the TPU reference implementation reports a higher top-1 accuracy, I have added the ReLU and retrained the 0.5 model to a slightly higher accuracy.
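The difference between the two variants can be sketched as a single optional activation in the stem. This is an illustrative reconstruction, not the exact torchvision code; the function name, flag, and channel rounding are assumptions.

```python
import torch
from torch import nn

def stem(width_mult: float = 0.5, relu_after_first_bn: bool = True) -> nn.Sequential:
    """Sketch of the MNASNet stem: 3x3 stride-2 conv from RGB, with the
    base 32 channels scaled by the width multiplier (illustrative names)."""
    out_ch = max(8, int(32 * width_mult))
    layers = [
        nn.Conv2d(3, out_ch, 3, stride=2, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
    ]
    if relu_after_first_bn:
        # TPU reference behavior; the TFLite variant omits this ReLU.
        layers.append(nn.ReLU(inplace=True))
    return nn.Sequential(*layers)
```

With `relu_after_first_bn=False` the stem matches the TFLite variant; with it set to `True` it matches the TPU reference that this release's checkpoint was trained against.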
These bugfixes aren't yet merged into TorchVision. In the meantime, please refer to my fork at https://github.com/1e100/vision
Pre-trained MNASNet 0.5 and 1.0
This release contains the following ImageNet classifiers, along with the training code:
- MNASNet 1.0, top-1 accuracy 73.512
- MNASNet 0.5, top-1 accuracy 67.592