What exactly does the size of lora_rank affect?
LoRA decomposes the update to the attention weight matrices in the transformer into two low-rank matrices, which reduces GPU memory usage during training.
The --lora_rank argument in the code sets the rank of these decomposed matrices. You can find the corresponding explanation in the source code, and more detail in the paper.
This means you can adjust this parameter to control how many resources training consumes (though it may have some impact on the final result).
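For intuition, here is a minimal sketch (not code from this repo) of how the rank translates into extra trainable parameters. The ChatGLM-6B shapes used below (hidden size 4096, fused query_key_value output 12288, 28 layers) are assumptions for illustration, not values read from the source:

```python
# Sketch: for a weight matrix of shape (d_out, d_in), LoRA adds two matrices
# A (r, d_in) and B (d_out, r), so each adapted matrix gains r * (d_in + d_out)
# trainable parameters. Memory for the adapter and its optimizer states grows
# roughly linearly with the rank r.
def lora_extra_params(d_in: int, d_out: int, rank: int) -> int:
    return rank * (d_in + d_out)

# Assumed ChatGLM-6B shapes: query_key_value maps 4096 -> 12288, 28 layers.
for r in (4, 8, 16, 64):
    per_layer = lora_extra_params(4096, 12288, r)
    total = per_layer * 28
    print(f"rank={r:>2}: {per_layer:,} extra params per layer, ~{total:,} total")
```

A smaller rank keeps the adapter (and the associated optimizer state) cheap, while a rank that is too small can limit how much the adapter can learn, which is the trade-off mentioned above.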
If I want full-parameter fine-tuning instead, which part do I need to change?