
Fine-tuning a model a second time #1339

Open
lanlandawang opened this issue Dec 30, 2024 · 1 comment
Comments

@lanlandawang

After LoRA fine-tuning Qwen-7B-Chat and merging the adapter into the base model, I obtained a new model. However, when I try to fine-tune this new model a second time, I get a GPU out-of-memory error. What could be causing this? Is something wrong in a configuration file? Fine-tuning Qwen-7B-Chat itself does not exceed the GPU memory limit.
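For reference, below is a minimal sketch (not taken from this issue) of the merge step described above, using the PEFT `merge_and_unload` API; the adapter path and output directory are hypothetical. One thing worth checking, though not confirmed here, is the dtype the merged checkpoint was saved in: if the base model is loaded in full precision before merging, the saved weights can be twice the size of the original bf16/fp16 checkpoint, which would make the second fine-tuning run need noticeably more GPU memory.

```python
# Sketch: merge a LoRA adapter into Qwen-7B-Chat and save it for further fine-tuning.
# Paths below are placeholders; torch_dtype is the assumption to verify against your setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen-7B-Chat",
    torch_dtype=torch.bfloat16,   # keep bf16 so the merged weights are not widened to fp32
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")  # hypothetical adapter path
merged = model.merge_and_unload()  # fold the LoRA deltas into the base weights

merged.save_pretrained("qwen-7b-chat-merged")
AutoTokenizer.from_pretrained(
    "Qwen/Qwen-7B-Chat", trust_remote_code=True
).save_pretrained("qwen-7b-chat-merged")
```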


This issue has been automatically marked as inactive due to lack of recent activity. Should you believe it remains unresolved and warrants attention, kindly leave a comment on this thread.

1 participant