Hello, I have a question about reproducing the training of the SigLIP-384 vision encoder mentioned in your paper.

siglip-SO400M-patch14-384 uses a patch size of 14, and 384 // 14 = 27 (floor division), so its embedding layer covers a 27 × 27 = 729 grid (i.e., 729 image tokens). The resulting hidden_state therefore has shape batch_size × 729 × 1152 (1152 being the hidden_size).

Then, when discretizing with the codebook, the features need to be reshaped via hidden_states.reshape(B, int(L**0.5), int(L**0.5), C),

which gives shape batch_size × 27 × 27 × 1152. Since the RQ-VAE decoder upsamples by 16×, decoding back to pixel space yields an image whose side length is 27 × 16 = 432.

The input is 384 × 384 but the output becomes 432 × 432, so computing the reconstruction loss is problematic. Could you explain how you resolved this mismatch?
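The shape arithmetic described above can be checked with a short sketch (a minimal illustration, not the authors' actual code; the variable names and the 16× decoder factor are taken from the description in this issue):

```python
import math

# SigLIP-SO400M-patch14-384: 384x384 input, patch size 14
img_size, patch = 384, 14
grid = img_size // patch           # floor division -> 27
num_tokens = grid * grid           # 27 * 27 = 729 image tokens
hidden_size = 1152                 # hidden_state: (B, 729, 1152)

# Reshape to a square feature map for the codebook:
# hidden_states.reshape(B, int(L**0.5), int(L**0.5), C) -> (B, 27, 27, 1152)
side = int(math.isqrt(num_tokens))  # 27

# An RQ-VAE-style decoder with 16x upsampling maps the grid back to pixels
recon_size = side * 16              # 27 * 16 = 432, but the input was 384
```

Running this confirms the mismatch the question points out: the decoder would reconstruct a 432 × 432 image from a 384 × 384 input, so the reconstruction loss cannot be computed pixel-to-pixel without resizing one side or changing the downsampling factor.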