About loss function #5
Hi, I found that the loss used in this repo is a cross-entropy loss between prediction and mask:

loss = F.binary_cross_entropy_with_logits(pred, mask)

But the loss mentioned in the paper is a contrastive loss between visual and textual features.

Comments
Hello! I wrote the contrastive learning part by following the instructions in the paper. However, when training the model with only the contrastive loss, the training IoU doesn't seem to improve. Below, I am attaching the code snippet and the training IoU and precision curves. The training is done for only 1 epoch; the brown plots are for cross-entropy loss and the blue plots are for contrastive loss. I would be grateful if you could let me know what I am doing wrong, and also whether the contrastive loss is supposed to be used in addition to the cross-entropy loss.
Please follow our implementation (lines 47 to 84 in 0df39f0).
Hello Derrick, I had seen this implementation. In your paper, you mention equations 9 and 10 as the contrastive loss between the pixel embeddings and the text features. I am not able to understand how they are handled in the code snippet above.
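For reference, my reading of equations 9 and 10 is a per-pixel binary contrastive term between the text embedding and each pixel embedding, averaged over the foreground pixels P and background pixels N. The notation follows the paper, but this transcription is from my own reading and may differ in minor details:

```latex
L_{con}^{i}\left(z_t, z_v^i\right) =
\begin{cases}
  -\log \sigma\left(z_t \cdot z_v^i\right), & i \in P \\
  -\log \left(1 - \sigma\left(z_t \cdot z_v^i\right)\right), & i \in N
\end{cases}
\qquad
L_{con}\left(z_t, z_v\right) = \frac{1}{|P \cup N|} \sum_{i \in P \cup N} L_{con}^{i}\left(z_t, z_v^i\right)
```

where $\sigma$ is the sigmoid function.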
I have the same query. Can the authors please clarify?
No follow-up? It looks like plain supervised learning in the code. I assume something is missing.
@DerrickWang005 Could you please release the code snippet of the contrastive learning loss?
Actually, the implementation is in line with the description in the paper. However, it is not standard contrastive learning.
If you take a deeper look at the code the author referenced above, you'll find that the conv2d effectively performs a dot product between the text feature and each pixel feature, which can be regarded as equations 9 and 10; see the sketch below.
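Here is a minimal sketch of that equivalence, assuming a PyTorch setup with hypothetical tensor names and shapes (`txt`, `vis`, and `mask` are my own placeholders, not the repo's variables): a 1x1 convolution whose kernel is the text feature computes the per-pixel dot product, and binary cross-entropy with logits on those scores matches the two cases of equations 9 and 10.

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes: batch B, embedding dim C, feature map H x W.
B, C, H, W = 2, 512, 26, 26
txt = torch.randn(B, C)                            # text embedding z_t, one per sample
vis = torch.randn(B, C, H, W)                      # pixel embeddings z_v
mask = torch.randint(0, 2, (B, 1, H, W)).float()   # ground truth (1 = foreground P, 0 = background N)

# A 1x1 conv whose kernel is the text feature computes, at each location i,
# the dot product z_t . z_v^i; groups=B applies each sample's text feature
# to its own feature map.
kernel = txt.reshape(B, C, 1, 1)
score = F.conv2d(vis.reshape(1, B * C, H, W), kernel, groups=B).reshape(B, 1, H, W)

# The same scores via an explicit per-pixel dot product:
score_ref = torch.einsum('bc,bchw->bhw', txt, vis).unsqueeze(1)
assert torch.allclose(score, score_ref, atol=1e-4)

# BCE with logits on these scores is equations 9 and 10:
# -log(sigmoid(score)) on foreground pixels and
# -log(1 - sigmoid(score)) on background pixels, averaged over all pixels.
loss = F.binary_cross_entropy_with_logits(score, mask)
```

On this reading, training with binary cross-entropy on text-conditioned per-pixel scores is the paper's loss itself, rather than a separate InfoNCE-style term added on top.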
I have the same question. Could the authors release the latest version of the code? @DerrickWang005
I think this article can answer your question to some extent. @lyu-yx
I have the same question. I couldn't find the code for the contrastive loss.
Me too...