Dear authors,

First of all, this is very inspiring and novel work for the Graph LLM community. However, I have several questions about the details of the paper, especially the first stage of model training, which I hope you can clarify.
What is the [DEC] token in this case? Does it represent the overall token sequence of the textual input, or is it just a self-designed token placed at the start of the sequence to signal the decoding task?
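For reference, here is a minimal sketch of my current guess (the tokenizer choice and the input text are purely illustrative assumptions, not your actual code): [DEC] would be a single special token that replaces [CLS] at the start of the decoder input to mark the decoding task.

```python
# A minimal sketch of my current guess, not the repo's actual code:
# [DEC] is one extra special token that replaces [CLS] at the start
# of the decoder input to signal the decoding (generation) task.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
tokenizer.add_special_tokens({"additional_special_tokens": ["[DEC]"]})
dec_token_id = tokenizer.convert_tokens_to_ids("[DEC]")

inputs = tokenizer("a caption describing the input graph", return_tensors="pt")
# Overwrite the leading [CLS] with [DEC]; the rest of the text
# sequence is unchanged, so [DEC] is not the whole sequence itself.
inputs["input_ids"][:, 0] = dec_token_id
```

Is this roughly what happens, or does [DEC] play a different role in your model?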
The third objective in the first-stage training computes, through a classifier, whether the query representation matches the textual representation. But how do you define whether the two representations match? And does the output score of the classifier really reflect the ground truth of whether they match? (My current guess is sketched below.)
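Concretely, my current picture of this objective, assuming it mirrors BLIP's image-text matching head, is the sketch below (all names here are hypothetical). The "ground truth" would then be defined by construction: a (graph, text) pair taken from the dataset counts as a match, and a pair formed by swapping in another sample's text counts as a non-match.

```python
# A minimal sketch of how I picture the matching objective (names are
# hypothetical): a binary head over the fused query representation,
# with labels defined by how the pairs were constructed.
import torch
import torch.nn as nn
import torch.nn.functional as F

hidden_dim = 768
match_head = nn.Linear(hidden_dim, 2)  # 2-way: match vs. no-match

def matching_loss(fused_pos, fused_neg):
    """fused_pos / fused_neg: [batch, hidden_dim] query representations
    for ground-truth pairs and deliberately mismatched pairs."""
    logits = match_head(torch.cat([fused_pos, fused_neg], dim=0))
    labels = torch.cat([
        torch.ones(fused_pos.size(0), dtype=torch.long),   # paired in dataset
        torch.zeros(fused_neg.size(0), dtype=torch.long),  # shuffled pairing
    ])
    return F.cross_entropy(logits, labels)
```

If that is right, the classifier score is only as good as this constructed labeling, which is exactly why I am asking whether it reflects the true semantic match.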
I also tried to find the answers to the above questions in the original BLIP paper, but it does not provide a clear explanation either.
It would be greatly appreciated if you could help me understand these questions. Thank you!