Questions about uncertainty based implementation #11
Comments
The code selects samples with large entropy. For example, there are two samples with predicted probability …
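The comment above is truncated, but the kind of two-sample example it gestures at is easy to reconstruct. Below is a hypothetical sketch (the probability values and the three-class setup are my own, not from the original comment) showing how the repository's scoring expression ranks a confident sample above an uncertain one:

```python
import numpy as np

# Hypothetical two-sample, three-class example (values are illustrative only).
preds = np.array([
    [0.98, 0.01, 0.01],  # sample A: confident prediction, low entropy
    [0.34, 0.33, 0.33],  # sample B: uncertain prediction, high entropy
])

# The scoring expression quoted in this issue: sum of p * log(p),
# which is the NEGATIVE Shannon entropy.
scores = (np.log(preds + 1e-6) * preds).sum(axis=1)
print(scores)            # approximately [-0.11, -1.10]

# A descending argsort therefore ranks the confident sample A first,
# so the head of the ranking holds the LEAST uncertain samples.
print(np.argsort(scores)[::-1])  # [0 1]
```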
Thanks for your reply. I think there is some misunderstanding about the function …
Hi, may I ask if this issue has been resolved? I agree with @Data-reindeer. It seems we are selecting "more confident" samples instead of "less confident" ones as mentioned in the report.
…503) We had our own version of PatrickZH/DeepCore#11 because our version of their implementation confused where the inversion is placed. I thought it through and believe we don't need any inversion. I added some comments explaining the reasoning. Note that this does not address PatrickZH/DeepCore#13!
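The point about inversion placement can be made concrete. The sketch below is my own illustration, not the repository's code (the function names are made up): there are two equivalent ways to select the most uncertain samples, and applying both inversions at once cancels them out and selects the most confident samples instead.

```python
import numpy as np

def entropy_scores(preds: np.ndarray) -> np.ndarray:
    """Shannon entropy per sample; higher means more uncertain."""
    return -(preds * np.log(preds + 1e-6)).sum(axis=1)

# Two equivalent ways to pick the k MOST uncertain samples:
def top_k_uncertain_v1(preds, k):
    # sort entropy descending, take the head
    return np.argsort(entropy_scores(preds))[::-1][:k]

def top_k_uncertain_v2(preds, k):
    # equivalently: negate the score once and sort ascending
    return np.argsort(-entropy_scores(preds))[:k]

# The mix-up described above applies BOTH inversions at once
# (negated score AND reversed sort), which cancels out and
# returns the k most CONFIDENT samples instead:
def buggy_top_k(preds, k):
    return np.argsort(-entropy_scores(preds))[::-1][:k]
```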
Hi, Chengcheng Guo and Bo Zhao:
Thanks for your thorough research and clean code. However, I have some questions about the uncertainty-based implementation.
As mentioned in the DeepCore paper, samples with lower confidence may have a greater impact on model optimization than those with higher confidence, and should therefore be included in the coreset. But the implementation here actually computes the inverse of the uncertainty scores.
Take entropy as an example: `np.log(preds + 1e-6) * preds` is the negative of the entropy, so `np.argsort(scores)[::-1][:self.coreset_size]` selects the samples with low entropy (low uncertainty). This confused me a lot, since the implementation is inconsistent with the statement in the paper. Is there a bug in the implementation?
@Data-reindeer
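To make the question concrete, here is a self-contained sketch of the behaviour being described and of one possible correction, assuming the intent is to keep the least confident samples (`preds` and `coreset_size` below are hypothetical stand-ins for the values the class actually uses):

```python
import numpy as np

# Stand-in inputs (hypothetical): softmax outputs for 100 samples, 10 classes.
rng = np.random.default_rng(0)
preds = rng.dirichlet(np.ones(10), size=100)
coreset_size = 10

# Behaviour as quoted: scores are the NEGATIVE entropy, and the descending
# sort then keeps the lowest-entropy, i.e. most confident, samples.
scores = (np.log(preds + 1e-6) * preds).sum(axis=1)
selected = np.argsort(scores)[::-1][:coreset_size]

# One possible fix: flip the sign so the score is the entropy itself ...
entropy = -(np.log(preds + 1e-6) * preds).sum(axis=1)
selected_uncertain = np.argsort(entropy)[::-1][:coreset_size]

# ... or, equivalently, keep the original score and drop the [::-1] reversal.
selected_uncertain_alt = np.argsort(scores)[:coreset_size]

# Both variants keep the same (most uncertain) samples.
assert set(selected_uncertain) == set(selected_uncertain_alt)
```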