Evaluation metrics and code #11

Open
kayleeliyx opened this issue Jul 11, 2021 · 1 comment

kayleeliyx commented Jul 11, 2021

I was trying to evaluate a model after training, and I noticed that the ground-truth labels for the test dataset haven't been released.

In the evaluation code provided at https://mmcheng.net/videosal/, I found this comment: "if the ground truth cannot be found, e.g. testing data, the central gaussian will be taken as ground truth automatically."

However, the actual code is:

if exist(saliency_path, 'file')
    % ground truth exists: load it and score the prediction against it
    I = double(imread(saliency_path)) / 255;
    allMetrics(i) = fh(result, I);
else
    % no ground truth (e.g. test frames): mark this frame's metric as missing
    allMetrics(i) = nan;
end

Then, at the end:

allMetrics(isnan(allMetrics)) = [];
meanMetric = mean(allMetrics);
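
(As an aside, I think dropping the NaN entries and then averaging should be equivalent to MATLAB's 'omitnan' option, assuming a reasonably recent MATLAB version:)

meanMetric = mean(allMetrics, 'omitnan');   % same result, without deleting entries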

I'm wondering, for the test set without ground truth, how the "central gaussian" is supposed to be generated.
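
For example, I imagine a central Gaussian matched to the prediction size could be built roughly like this (just my own sketch; the sigma and the normalization are my assumptions, not taken from the original evaluation code):

% Hypothetical center-prior map at the prediction's resolution
[h, w] = size(result);
[x, y] = meshgrid(1:w, 1:h);
sigma = min(h, w) / 4;                                    % assumed spread
I = exp(-((x - w/2).^2 + (y - h/2).^2) / (2 * sigma^2));
I = I / max(I(:));                                        % normalize to [0, 1]
allMetrics(i) = fh(result, I);                            % score against the Gaussian instead of nan

Is this roughly what the comment intends?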

Another question: are the numbers listed on the board at https://mmcheng.net/videosal/ evaluated on the validation set or the test set?

Thanks a lot for your help!

@chhanganivarun

The evaluation code shared in this repository wasn't used; only the original Matlab files from the link you posted were used. Furthermore, the test results are reported as-is by the challenge organizers.
