
Source of validation accuracy in zero-cost case #133

Open

jr2021 opened this issue Sep 17, 2022 · 2 comments

jr2021 (Collaborator) commented Sep 17, 2022

In the zero-cost branch, the optimizers Npenas and Bananas query the validation accuracy of architectures from the zero-cost benchmark as follows:

model.accuracy = self.zc_api[str(model.arch_hash)]['val_accuracy']

The question is whether this supports the case where the user wants to use the ZeroCost predictor because their dataset or search space is not covered by the zero-cost benchmark.

If this is a case that we want to support, one option would be to introduce a parameter use_zc_api and use it as follows:

if self.use_zc_api:
    # Look up the pre-computed validation accuracy in the benchmark
    model.accuracy = self.zc_api[str(model.arch_hash)]['val_accuracy']
else:
    # Query the accuracy live through the dataset API
    model.accuracy = model.arch.query(
        self.performance_metric, self.dataset, dataset_api=self.dataset_api
    )
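
For illustration, here is a minimal, self-contained sketch of how such a flag could be wired through an optimizer, including a fallback when an architecture is missing from the benchmark. The config handling, the membership check, and the helper method name are assumptions for illustration, not the existing NASLib implementation:

# Hypothetical sketch: the `use_zc_api` flag, the fallback logic, and the
# method name `_query_accuracy` are assumptions, not current NASLib code.
class Optimizer:
    def __init__(self, config):
        # Default to live queries when the flag is absent from the config.
        self.use_zc_api = getattr(config, "use_zc_api", False)
        self.performance_metric = config.performance_metric
        self.dataset = config.dataset
        self.zc_api = None       # set later, e.g. when the benchmark is loaded
        self.dataset_api = None  # set later, e.g. when the search space is adapted

    def _query_accuracy(self, model):
        key = str(model.arch_hash)
        if self.use_zc_api and self.zc_api is not None and key in self.zc_api:
            # Cheap benchmark lookup; only covers pre-computed
            # search-space/dataset combinations.
            model.accuracy = self.zc_api[key]['val_accuracy']
        else:
            # Live query; works for any search space/dataset that the
            # dataset API supports.
            model.accuracy = model.arch.query(
                self.performance_metric, self.dataset, dataset_api=self.dataset_api
            )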
jr2021 added the "zero cost merge" (Merge of zerocost with Develop into Develop_copy) label on Sep 17, 2022
Neonkraft (Collaborator) commented Sep 21, 2022

The code was written this way for the Zero-Cost NAS paper, where we consumed only search spaces for which the values were available in the zc_api. It would make more sense to give users the option to choose whether or not to query the zc_api, as you suggest.

jr2021 (Collaborator, Author) commented Sep 21, 2022

Got it. Another sub-issue that came up is when to call query_zc_scores. The question is whether this function should only be called under the following condition:

if self.zc and len(self.train_data) <= self.max_zerocost:
    ...

Or, is there a case where the zero-cost scores can be calculated after the self.max_zerocost parameter has been exceeded? We assume that this parameter refers to the maximum number of zero-cost evaluations, so presumably the answer is no. What do you think?
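
For concreteness, here is a minimal sketch of that interpretation, where zero-cost scoring is gated on the evaluation budget. Only the condition itself comes from the snippet above; the surrounding method, the `model.zc_scores` attribute, and the `query_zc_scores` call site are assumptions:

# Hypothetical sketch of the gating discussed above; only the condition
# `self.zc and len(self.train_data) <= self.max_zerocost` comes from the
# original snippet, the surrounding method is an assumption.
def _maybe_compute_zc_scores(self, model):
    if self.zc and len(self.train_data) <= self.max_zerocost:
        # Within budget: compute zero-cost proxy scores for this model.
        model.zc_scores = query_zc_scores(model.arch)
    # Once len(self.train_data) exceeds self.max_zerocost, newly sampled
    # architectures are no longer scored with zero-cost proxies.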
