Regarding the LLaVAGenerate issue, I just commented out those lines in chat.cc; since I am using LLaMA and not LLaVA, it should not matter.
That way I am able to run "make chat -j". However, when running "./chat" it gets stuck at "loading model ..." and the process ends with "Killed" printed on the screen. I am unsure what the problem is; I assume it is the "int4LlamaForCausalLM model" declaration in "chat.cc", since the program never prints "Finished!".
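In case it is useful, here is a minimal sketch of fencing the LLaVA-only lines off behind a preprocessor flag instead of commenting them out. USE_LLAVA and llava_generate() are placeholder names for illustration, not the actual symbols in chat.cc:

```cpp
// Sketch: compile the LLaVA-only path conditionally instead of deleting it.
// USE_LLAVA and llava_generate() are hypothetical placeholders.
#include <cstdio>

#ifdef USE_LLAVA
static void llava_generate() { std::printf("LLaVA (vision) path\n"); }
#endif

int main() {
#ifdef USE_LLAVA
    llava_generate();                       // built only with -DUSE_LLAVA
#else
    std::printf("Text-only LLaMA path\n");  // default build
#endif
    return 0;
}
```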
Architecture:
Jetson Nano Orin Developer Kit 8GB
Model: LLaMA2_7B_chat_awq_int4 for CUDA device
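For what it's worth, one thing to check is whether the "Killed" message is simply the 8 GB board running out of RAM while the 7B weights load; looking at MemAvailable right before launching ./chat would show that. This is only a guess at the cause, and the snippet below is a standalone diagnostic sketch, not part of chat.cc:

```cpp
// Diagnostic sketch: print MemAvailable from /proc/meminfo.
// If this is near zero while "loading model ..." is shown, the kernel
// OOM killer is a plausible reason the process ends with "Killed".
#include <cstdio>
#include <fstream>
#include <string>

int main() {
    std::ifstream meminfo("/proc/meminfo");
    std::string line;
    while (std::getline(meminfo, line)) {
        if (line.rfind("MemAvailable:", 0) == 0) {  // line starts with the key
            std::printf("%s\n", line.c_str());
            break;
        }
    }
    return 0;
}
```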
On Jetson Orin Nano 8G, when running
make chat
it seems that there is no
src/nn_modules/cuda/LLaVAGenerate.cu
By the way, src/ops/Gelu.cc needs #include <math.h> (or <cmath>) for tanhf and expf.
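For context, tanhf and expf are declared in <math.h> / <cmath>, so a kernel like the tanh-approximation GELU below will not compile without that include. This is just an illustrative sketch, not the actual body of src/ops/Gelu.cc:

```cpp
// Illustrative tanh-approximation GELU; tanhf comes from <cmath> / <math.h>,
// which is the include the build error in src/ops/Gelu.cc points at.
#include <cmath>
#include <cstdio>

static inline float gelu_tanh(float x) {
    // 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
    const float k = 0.7978845608028654f;  // sqrt(2 / pi)
    return 0.5f * x * (1.0f + tanhf(k * (x + 0.044715f * x * x * x)));
}

int main() {
    std::printf("gelu(1.0) = %f\n", gelu_tanh(1.0f));
    return 0;
}
```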
git commit hash is