problem with - Loading model... Killed #84

Open
ecliipt opened this issue Dec 26, 2023 · 0 comments
ecliipt commented Dec 26, 2023

So I've been trying to test TinyChatEngine on my Win10 laptop with 7 GB of RAM and an i3 (11th gen), running the latest WSL Debian. I'm not sure whether this is a bug or just a lack of specs, but here's what I've encountered:

  • Every time I try to chat with LLaMA2_7B_chat_awq_int4 --QM QM_x86 (I followed the tutorial in the README):
(venv) user@LAPTOP:~/TinyChatEngine/llm$ ./chat
TinyChatEngine by MIT HAN Lab: https://github.com/mit-han-lab/TinyChatEngine
Using model: LLaMA2_7B_chat
Using AWQ for 4bit quantization: https://github.com/mit-han-lab/llm-awq
Loading model... Killed
  • When I try to use OPT models like the 125M or the 1.3B (fp32):
(venv) user@LAPTOP:~/TinyChatEngine/llm$ ./chat OPT_125m
TinyChatEngine by MIT HAN Lab: https://github.com/mit-han-lab/TinyChatEngine
Using model: OPT_125m
Loading model... No such file or directory: INT4/models/OPT_125m/decoder/embed_tokens/weight.bin
terminate called after throwing an instance of 'char const*'
Aborted
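For context on the first failure: "Killed" during model load usually means the Linux OOM killer terminated the process, which fits the memory math. A rough back-of-the-envelope sketch (the parameter count and the WSL2 default are assumptions for illustration, not figures from TinyChatEngine):

```python
# Rough memory estimate for loading a 7B-parameter model with 4-bit (AWQ)
# quantized weights. Numbers are ballpark assumptions, not measurements.

PARAMS = 7_000_000_000   # ~7B parameters in LLaMA2_7B_chat
BITS_PER_WEIGHT = 4      # AWQ int4 quantization

weights_gib = PARAMS * BITS_PER_WEIGHT / 8 / 1024**3
print(f"int4 weights alone: ~{weights_gib:.1f} GiB")
```

That is roughly 3.3 GiB for the weights before any activations or KV cache. WSL2 by default caps the VM at about half of the host's RAM, so on a 7 GB laptop the Debian guest may only see ~3.5 GB, which would explain the OOM kill even though the model could fit in 7 GB natively.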

I'm not really going to use the OPT models, but I thought it would be good to note (the second error looks like missing weight files rather than a memory problem).
Is there anything I can do to "fix" this, or does it come down to the laptop's specs? Thanks & happy holidays
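One thing worth trying (a sketch, assuming the "Killed" is the WSL2 memory cap rather than a TinyChatEngine bug): you can confirm an OOM kill with `dmesg | grep -i "killed process"` inside WSL, and raise the VM's memory limit with a `.wslconfig` file in your Windows user profile (`%UserProfile%\.wslconfig`), then restart WSL with `wsl --shutdown`. The values below are illustrative and should be tuned to the machine:

```ini
; %UserProfile%\.wslconfig — raises the WSL2 VM memory cap
; (default is roughly half of host RAM)
[wsl2]
memory=6GB      ; let the guest use most of the 7 GB host RAM
swap=4GB        ; extra headroom for model loading spikes
```

Even with this, 7 GB total is tight for a 7B model plus the OS, so closing other applications before loading may also be necessary.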
