Unable to run llama.cpp on A770, missing .so file #12782
Comments
Hi @KczBen, you may install Ollama through ipex-llm in the meantime (a sketch of those steps follows below). We will fix this issue today, and you may also try the latest version tomorrow.
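The exact link in this comment was lost in extraction. Below is a minimal sketch of the ipex-llm Ollama install, assuming the `ipex-llm[cpp]` pip package and its `init-ollama` helper described in the ipex-llm Ollama quickstart; names and versions may have changed in newer releases:

```bash
# Sketch only: assumes the ipex-llm[cpp] package and its init-ollama
# helper from the ipex-llm Ollama quickstart.
conda create -n llm-cpp python=3.11
conda activate llm-cpp
pip install --pre --upgrade ipex-llm[cpp]

# Symlink the ipex-llm Ollama binary into the current directory.
mkdir -p ollama-ipex && cd ollama-ipex
init-ollama

# Make sure the oneAPI runtime libraries are on the loader path first.
source /opt/intel/oneapi/setvars.sh
./ollama serve
```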
Thank you, that works! Now I have Ollama running, though I'm also encountering the same issue as #12761 with deepseek-r1.
Could you please provide some detailed logs? And in which round of the conversation did the error begin to occur?
Hi @KczBen, for the deepseek-r1 garbage-output issue, you may refer to my response in #12761 (comment).
I always got the issue on the second response from r1:14b. Using today's Ollama build with IPEX-LLM and oneAPI 2025, I have gotten five coherent messages so far in the same chat without specifying a longer context, using the same prompts that previously produced garbage output. I have oneAPI 2025 installed now because I was experimenting with upstream llama.cpp's SYCL backend, which requires 2025 to compile. Is installing two separate versions of oneAPI supported? If so, I could install 2024.0 and 2025.0 side by side and see whether that affects the behaviour. I have 2025 installed through apt.
Hi @KczBen, yes, you can install two versions simultaneously.
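A hedged sketch of selecting between two side-by-side oneAPI installs per shell, assuming the versioned directory layout that recent oneAPI releases use under `/opt/intel/oneapi` (the exact paths on your machine may differ):

```bash
# Assumption: both toolkits live under /opt/intel/oneapi in versioned
# directories, as recent oneAPI releases lay them out.

# Activate the 2024.0 environment for the ipex-llm llama.cpp/Ollama build:
source /opt/intel/oneapi/2024.0/oneapi-vars.sh

# ...or, in a fresh shell, activate 2025.0 for upstream llama.cpp SYCL:
source /opt/intel/oneapi/2025.0/oneapi-vars.sh

# Note: the top-level setvars.sh activates the newest install by default:
source /opt/intel/oneapi/setvars.sh
```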
Original issue
Hello,
I have a fresh install of Ubuntu Server 24.04 on my machine. After following the guides at
https://github.com/intel/ipex-llm/blob/main/docs/mddocs/Quickstart/install_linux_gpu.md
https://github.com/intel/ipex-llm/blob/main/docs/mddocs/Quickstart/llama_cpp_quickstart.md
I cannot get llama.cpp running; it fails with an error about a missing .so file.
These are the steps I took to install the drivers:
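The poster's exact commands were lost in extraction. Below is a hedged reconstruction of the driver steps from the install_linux_gpu.md quickstart for Ubuntu 24.04; the repository URL, suite name, and package names follow Intel's client GPU instructions and may have changed:

```bash
# Sketch, not the poster's exact commands: add Intel's client GPU
# repository for Ubuntu 24.04 (noble) and install the compute runtime.
wget -qO - https://repositories.intel.com/gpu/intel-graphics.key | \
  sudo gpg --yes --dearmor --output /usr/share/keyrings/intel-graphics.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/intel-graphics.gpg] https://repositories.intel.com/gpu/ubuntu noble client" | \
  sudo tee /etc/apt/sources.list.d/intel-gpu-noble.list
sudo apt update
sudo apt install -y intel-opencl-icd intel-level-zero-gpu level-zero clinfo

# Allow the current user to access the GPU device nodes.
sudo gpasswd -a ${USER} render
newgrp render
```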
Installing oneAPI:
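Again the commands did not survive extraction; here is a sketch of a oneAPI install from Intel's apt repository. The quickstart pins specific 2024.0 component versions, which are omitted here where I am unsure of them:

```bash
# Sketch: add Intel's oneAPI apt repository and install the base toolkit.
wget -O- https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB | \
  gpg --dearmor | sudo tee /usr/share/keyrings/oneapi-archive-keyring.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/oneapi-archive-keyring.gpg] https://apt.repos.intel.com/oneapi all main" | \
  sudo tee /etc/apt/sources.list.d/oneAPI.list
sudo apt update
sudo apt install -y intel-basekit

# Put the oneAPI compilers and runtime libraries on PATH/LD_LIBRARY_PATH.
source /opt/intel/oneapi/setvars.sh
```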
With the above setup, I was able to run a modified version of the demo (it seems the demo listed on the page is broken?).
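Not part of the original post, but a quick sanity check at this point is to confirm the SYCL runtime actually sees the A770 from the sourced oneAPI environment:

```bash
# With the oneAPI environment sourced, list SYCL backends and devices;
# the Arc A770 should appear as a Level Zero GPU device.
source /opt/intel/oneapi/setvars.sh
sycl-ls
```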
Installing llama.cpp:
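The install commands were also stripped; a sketch following the llama_cpp_quickstart.md flow, assuming the `ipex-llm[cpp]` package and its `init-llama-cpp` helper from that guide:

```bash
# Sketch: install the ipex-llm[cpp] package in a conda environment and
# symlink its prebuilt llama.cpp binaries into a working directory.
conda create -n llm-cpp python=3.11
conda activate llm-cpp
pip install --pre --upgrade ipex-llm[cpp]

mkdir -p llama-cpp && cd llama-cpp
init-llama-cpp
```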
Attempting to run the llama.cpp demo:
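The failing command itself is missing from the extracted text; a hedged sketch of the kind of invocation the quickstart uses, with a hypothetical model path and the binary name used by recent llama.cpp builds:

```bash
# Sketch: the model file path is hypothetical; -ngl 99 offloads all
# layers to the GPU as in the quickstart example.
source /opt/intel/oneapi/setvars.sh
export SYCL_CACHE_PERSISTENT=1
./llama-cli -m ./mistral-7b-instruct-v0.2.Q4_K_M.gguf \
  -p "Once upon a time" -n 32 -ngl 99
```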
I'm somewhat confused as to what could've gone wrong. Did I miss a step, or do I have the wrong version of some package? I also get a similar missing .so file error when trying to run ollama.
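Not from the original post, but a generic way to see exactly which shared library the loader cannot resolve, and to check whether sourcing the oneAPI environment fixes it (a common cause of missing-.so errors with these builds is running without that environment sourced):

```bash
# List unresolved shared-library dependencies of the llama.cpp binary.
ldd ./llama-cli | grep "not found"

# Source the oneAPI environment and check again.
source /opt/intel/oneapi/setvars.sh
ldd ./llama-cli | grep "not found"
```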