bug: can't load model that could previously be loaded in 0.5.13 #4781

Open · 1 of 3 tasks · oleole39 opened this issue Mar 4, 2025 · 1 comment
Labels: type: bug (Something isn't working)

oleole39 commented Mar 4, 2025

Jan version

0.5.15

Describe the Bug

Hello,

The model displayed at the top of the list in the Hub screen of Jan 0.5.15 (amd64 deb version), Llama-3.2-1B-Instruct-Q8_0, shows 2 tags: "1.23GB" and "Not enough RAM".

If I click the "Use" button, the following error pops up in the Thread screen (where there is only 1 thread):

Thread created failed. To avoid piling up empty threads, please reuse previous one before creating new.

And nothing more happens.

That's weird, since neither the model nor the machine has changed (well, apart from some regular system updates, but for instance the GPU & CUDA drivers are still the same) since the following issue, at the end of which the model would eventually run: #4417

But one significant point is that the interface in 0.5.15 seems slightly different. In particular, I can't find the Thread screen's model configuration menu, in which I could previously fiddle with the GPU layers and context size parameters to eventually get the model to load (as described in the related issue mentioned above), because no model is listed there anymore ("Select a model" shows an empty list). It is as if some new feature were blocking the workaround I used before: could 0.5.15 be removing from the list of available models all those that don't meet the RAM requirements defined by the Jan developers?
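
To make that question concrete, here is a purely speculative TypeScript sketch of the kind of gate I imagine (all names and numbers here are hypothetical; this is not Jan's actual code):

```ts
import * as os from "os";

// Hypothetical model entry; not Jan's actual data structure.
interface ModelEntry {
  id: string;        // e.g. "Llama-3.2-1B-Instruct-Q8_0"
  sizeBytes: number; // e.g. ~1.23 GB
}

// A naive gate like this would hide a model whenever free RAM is below its
// file size, even though lowering GPU layers or the context size (my old
// workaround) might still let it load.
function isModelSelectable(model: ModelEntry): boolean {
  return model.sizeBytes <= os.freemem();
}
```

If something along those lines was introduced, it would explain both the "Not enough RAM" tag and the empty "Select a model" list.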

I've tried importing the model again in Jan (choosing the symlink option) via the Hub screen, but it doesn't seem to change anything.

Steps to Reproduce

No response

Screenshots / Logs

When starting Jan, app.log shows the usual system description as well as the following lines:

2025-03-04T21:48:24.228Z [APP]::{"errno":-17,"code":"EEXIST","syscall":"symlink","path":"/opt/Jan/resources/app.asar.unpacked/shared/engines/cortex.llamacpp/linux-amd64-avx/v0.1.49","dest":"/home/username/jan/engines/cortex.llamacpp/linux-amd64-avx/v0.1.49"}
2025-03-04T21:48:24.229Z [APP]::{"errno":-17,"code":"EEXIST","syscall":"symlink","path":"/opt/Jan/resources/app.asar.unpacked/shared/engines/cortex.llamacpp/linux-amd64-avx2/v0.1.49","dest":"/home/username/jan/engines/cortex.llamacpp/linux-amd64-avx2/v0.1.49"}
2025-03-04T21:48:24.230Z [APP]::{"errno":-17,"code":"EEXIST","syscall":"symlink","path":"/opt/Jan/resources/app.asar.unpacked/shared/engines/cortex.llamacpp/linux-amd64-avx2-cuda-11-7/v0.1.49","dest":"/home/username/jan/engines/cortex.llamacpp/linux-amd64-avx2-cuda-11-7/v0.1.49"}
2025-03-04T21:48:24.230Z [APP]::{"errno":-17,"code":"EEXIST","syscall":"symlink","path":"/opt/Jan/resources/app.asar.unpacked/shared/engines/cortex.llamacpp/linux-amd64-avx2-cuda-12-0/v0.1.49","dest":"/home/username/jan/engines/cortex.llamacpp/linux-amd64-avx2-cuda-12-0/v0.1.49"}
2025-03-04T21:48:24.231Z [APP]::{"errno":-17,"code":"EEXIST","syscall":"symlink","path":"/opt/Jan/resources/app.asar.unpacked/shared/engines/cortex.llamacpp/linux-amd64-avx512/v0.1.49","dest":"/home/username/jan/engines/cortex.llamacpp/linux-amd64-avx512/v0.1.49"}
2025-03-04T21:48:24.231Z [APP]::{"errno":-17,"code":"EEXIST","syscall":"symlink","path":"/opt/Jan/resources/app.asar.unpacked/shared/engines/cortex.llamacpp/linux-amd64-noavx/v0.1.49","dest":"/home/username/jan/engines/cortex.llamacpp/linux-amd64-noavx/v0.1.49"}
2025-03-04T21:48:24.232Z [APP]::{"errno":-17,"code":"EEXIST","syscall":"symlink","path":"/opt/Jan/resources/app.asar.unpacked/shared/engines/cortex.llamacpp/linux-amd64-noavx-cuda-11-7/v0.1.49","dest":"/home/username/jan/engines/cortex.llamacpp/linux-amd64-noavx-cuda-11-7/v0.1.49"}
2025-03-04T21:48:24.232Z [APP]::{"errno":-17,"code":"EEXIST","syscall":"symlink","path":"/opt/Jan/resources/app.asar.unpacked/shared/engines/cortex.llamacpp/linux-amd64-noavx-cuda-12-0/v0.1.49","dest":"/home/username/jan/engines/cortex.llamacpp/linux-amd64-noavx-cuda-12-0/v0.1.49"}
2025-03-04T21:48:24.233Z [APP]::{"errno":-17,"code":"EEXIST","syscall":"symlink","path":"/opt/Jan/resources/app.asar.unpacked/shared/engines/cortex.llamacpp/linux-amd64-vulkan/v0.1.49","dest":"/home/username/jan/engines/cortex.llamacpp/linux-amd64-vulkan/v0.1.49"}

cortex.log is empty.

What is your OS?

- [ ] MacOS
- [ ] Windows
- [x] Linux
oleole39 added the "type: bug" label on Mar 4, 2025
github-project-automation bot moved this to Investigating in Menlo on Mar 4, 2025
oleole39 changed the title from "bug: can't load model in Jan 0.5.14 and Jan 0.5.15 that could previously be loaded in 0.5.13" to "bug: can't load model that could previously be loaded in 0.5.13" on Mar 4, 2025
jeremyckahn commented

Regarding the EEXIST symlink log entries quoted above:

I am seeing similar error messages whenever I try to access a remote Ollama server via Jan. I am on Ubuntu 24.04. These errors prevent me from using any remote Ollama models. Local models still work, however.
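
For what it's worth, EEXIST (errno -17) on a symlink call just means the destination path already exists, so it should normally be safe to tolerate. Here's a minimal Node sketch of the idempotent behavior I'd expect (using fs/promises; ensureSymlink is a hypothetical helper of mine, not Jan's actual code):

```ts
import { symlink, lstat, rm } from "fs/promises";

// Create a symlink, treating "already exists" as success and refreshing
// existing links so an upgrade can repoint them.
async function ensureSymlink(target: string, dest: string): Promise<void> {
  try {
    await symlink(target, dest);
  } catch (err) {
    if ((err as NodeJS.ErrnoException).code !== "EEXIST") throw err;
    const stats = await lstat(dest);
    if (stats.isSymbolicLink()) {
      await rm(dest);              // drop the existing link...
      await symlink(target, dest); // ...and recreate it
    }
  }
}
```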
