
I've computed histograms of the ground truth and predicted scores #50

Open
hcl14 opened this issue Dec 17, 2018 · 5 comments
hcl14 commented Dec 17, 2018

Histograms of the ground truth and predicted scores from the article, p.7
[image: original]

And I did the same for two models here:

[image: titu1994]

It shows that MobileNet produces correct scores for very few ground-truth images with scores < 4 or > 7.

A better one is this implementation (MobileNet): https://github.com/idealo/image-quality-assessment

[image: mobilenet]

Actually, I'm having trouble myself trying to fit MobileNetV2; I'm getting something similar to your MobileNet image.

My histograms are built on a 0.1 subset of the entire set.

@titu1994 (Owner)

I don't have the resources to train these models so I won't be able to improve them.

I wonder whether the difference lies in the loss function or the amount of training.
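For reference, the NIMA paper trains with a squared earth mover's distance (EMD) loss over the 10 score buckets rather than plain cross-entropy, so the loss function is a plausible source of the gap. A minimal NumPy sketch of that loss (my own illustration, not code from either repo):

```python
import numpy as np

def emd_loss(y_true, y_pred, r=2):
    """Earth mover's distance between two score distributions.

    y_true, y_pred: arrays of shape (n_buckets,) that each sum to 1
    (the per-image distribution over the 10 AVA score buckets).
    r=2 gives the squared-EMD variant used in the NIMA paper.
    """
    cdf_true = np.cumsum(y_true)
    cdf_pred = np.cumsum(y_pred)
    return np.mean(np.abs(cdf_true - cdf_pred) ** r) ** (1.0 / r)

# Identical distributions have zero loss.
p = np.full(10, 0.1)
print(emd_loss(p, p))  # → 0.0
```

Because it compares CDFs, this loss penalizes predictions that put mass in the wrong *region* of the score scale, which matters for the tails (< 4 and > 7) discussed above.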


hcl14 commented Dec 17, 2018

I did not succeed in training with SGD with lr=1e-7 for the base net and 1e-6 for the last layer, as the authors did. It just does not converge. I am trying Adam with oversampling of underrepresented images with mean <4 and >7, but with no success; I just get a thin shifted spike for the variance and something like your picture for the mean.
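The paper's setup amounts to per-parameter-group learning rates (1e-7 for the base network, 1e-6 for the last layer). A toy NumPy illustration of one SGD step with two such groups (the parameter and gradient values here are made up; only the two rates come from the paper):

```python
import numpy as np

# Two parameter groups with the paper's learning rates.
params = {'base': np.ones(3), 'head': np.ones(3)}
lrs = {'base': 1e-7, 'head': 1e-6}
# Made-up gradients, just to show the update.
grads = {'base': np.full(3, 2.0), 'head': np.full(3, 2.0)}

# One plain SGD step, applied with a different lr per group.
for group, lr in lrs.items():
    params[group] -= lr * grads[group]

print(params['base'][0], params['head'][0])  # head moved 10x farther
```

With rates this small, the base network barely moves at all, which is consistent with the non-convergence described above.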

@titu1994 (Owner)

I kind of guessed that the learning rates provided in the paper were too low to be of any use, which is why I switched to Adam with higher learning rates.

I have to analyse the repository you posted to see what the difference is between my implementation and theirs.


hcl14 commented Dec 17, 2018

They seem to use Adam with the following parameters:

```json
  "batch_size": 96,
  "epochs_train_dense": 5,
  "learning_rate_dense": 0.001,
  "decay_dense": 0,
  "epochs_train_all": 9,
  "learning_rate_all": 0.00003,
  "decay_all": 0.000023,
```

I did not study it closely though. I will try to replicate this for my MobileNetV2.
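That config describes a two-phase schedule: train the new dense head with the base frozen, then fine-tune everything at a much lower rate. A rough Keras sketch under those assumptions (the model choice, dropout, `train_ds` dataset, and the cross-entropy placeholder loss are mine; only the rates and epoch counts come from the quoted config):

```python
import tensorflow as tf
from tensorflow.keras import applications, layers, models, optimizers

# weights=None keeps the sketch self-contained (no weight download);
# in practice you would start from ImageNet weights.
base = applications.MobileNetV2(include_top=False, pooling='avg',
                                weights=None, input_shape=(224, 224, 3))
x = layers.Dropout(0.75)(base.output)
out = layers.Dense(10, activation='softmax')(x)  # 10 AVA score buckets
model = models.Model(base.input, out)

# Phase 1: freeze the base, train only the dense head (5 epochs, lr 0.001).
base.trainable = False
model.compile(optimizer=optimizers.Adam(learning_rate=0.001),
              loss='categorical_crossentropy')  # placeholder; NIMA uses EMD
# model.fit(train_ds, epochs=5)

# Phase 2: unfreeze and fine-tune everything (9 epochs, lr 3e-5,
# with the config's decay of 2.3e-5 applied as lr decay).
base.trainable = True
model.compile(optimizer=optimizers.Adam(learning_rate=3e-5),
              loss='categorical_crossentropy')
# model.fit(train_ds, epochs=9)
```

The point of the split is that a randomly initialized head produces large gradients that would wreck the pretrained base if everything were trained at once.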


hcl14 commented Dec 18, 2018

This PyTorch implementation:
https://github.com/truskovskiyk/nima.pytorch
They adjust images with the ImageNet mean and variance, and use Adam with lr=1e-4.

[image: histogram]

The histogram is built on a 0.3 subset of the entire set.
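The ImageNet-statistics adjustment they apply is the standard per-channel standardization; a small sketch (the mean/std constants below are the usual torchvision values, assumed rather than checked against that repo):

```python
import numpy as np

# Conventional ImageNet channel statistics (RGB order).
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def normalize(img):
    """img: float array in [0, 1] with shape (H, W, 3);
    returns a per-channel standardized copy."""
    return (img - IMAGENET_MEAN) / IMAGENET_STD

# A pixel equal to the channel means maps exactly to zero.
x = np.broadcast_to(IMAGENET_MEAN, (2, 2, 3))
print(normalize(x)[0, 0])  # → [0. 0. 0.]
```

Skipping this step when using a backbone pretrained with it is a common cause of training that stalls or converges to a degenerate distribution.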
