Several questions regarding the paper #11

Open
JunyiChE opened this issue May 25, 2024 · 0 comments

Dear authors,

First of all, very inspiring and novel work to the Graph LLM community. Yet, I have several questions regarding the paper details, especially the first stage of model training, which I hope can be clarified.

  1. What is the [DEC] token in this case? Does it represent the overall token sequence of the textual input, or is it just a custom token placed at the start of the sequence to signal the decoding task?
  2. The third objective in the first-stage training computes, through a classifier, whether the query representation matches the textual representation. But how do you define whether the two representations match? Does the classifier's output score actually reflect the ground truth of whether they match?
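For context on question 2, my current understanding (which may be wrong) is that in BLIP-style matching objectives the "match" label comes from the data itself rather than from the classifier: a (query, text) pair drawn from the same example is a positive, and negatives are built by pairing a query with text from a different example. A minimal stdlib sketch of that assumption (all names here are illustrative, not from the paper):

```python
# Hedged sketch: match labels defined by the data, not the classifier.
# A pair from the same example is labeled 1; a shuffled pair is labeled 0.
import random

def build_matching_batch(pairs, seed=0):
    """pairs: list of aligned (query_repr, text_repr) examples.
    Returns (query, text, label) triples: positives plus shuffled negatives."""
    rng = random.Random(seed)
    batch = [(q, t, 1) for q, t in pairs]           # aligned pairs -> label 1
    texts = [t for _, t in pairs]
    for i, (q, _) in enumerate(pairs):
        j = rng.choice([k for k in range(len(pairs)) if k != i])
        batch.append((q, texts[j], 0))              # mismatched pairs -> label 0
    return batch

demo = [("g0", "t0"), ("g1", "t1"), ("g2", "t2")]
print(build_matching_batch(demo))
```

Under this reading, the classifier is trained to predict these data-derived labels, so its score is an estimate of the match probability rather than the ground truth itself.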

I also tried to find answers to the above questions in the original BLIP paper, but it does not provide a clear explanation either.

It would be greatly appreciated if you could help me understand these questions. Thank you!
