bug: can't load model that could previously be loaded in 0.5.13 #4781
Labels: type: bug
Jan version
0.5.15
Describe the Bug
Hello,
The model displayed at the top of the list in the Hub screen of Jan 0.5.15 (amd64 deb version), Llama-3.2-1B-Instruct-Q8_0, shows two tags: "1.23GB" and "Not enough RAM".
If I click the "Use" button, the following error pops up in the Thread screen (where there is only one thread):
[error popup screenshot]
And nothing more happens.
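For what it's worth, a 1.23 GB Q8_0 model should fit comfortably in RAM on this machine, which is why the tag looks wrong. Below is a minimal sketch of the kind of size-versus-available-memory check that could produce such a tag; this is a guess at the heuristic, not Jan's actual code, and the path and overhead factor are assumptions:

```python
# Hypothetical sketch of a "Not enough RAM" check: compare the GGUF
# file size (plus a fudge factor for KV cache and runtime buffers)
# against the system memory that is free right now. Not Jan's code.
import os
import psutil  # pip install psutil

MODEL_PATH = os.path.expanduser(
    "~/jan/models/llama3.2-1b-instruct/Llama-3.2-1B-Instruct-Q8_0.gguf"
)  # hypothetical location

def fits_in_ram(model_path: str, overhead: float = 1.25) -> bool:
    """Return True if the model file, scaled by an assumed overhead
    factor, fits in currently available RAM."""
    model_bytes = os.path.getsize(model_path)
    available = psutil.virtual_memory().available
    return model_bytes * overhead <= available

print("fits:", fits_in_ram(MODEL_PATH))
```

If a check like this compares against currently *available* memory rather than total installed memory, a busy desktop session could trip it even for a small model, which would be consistent with the tag appearing on a machine that ran this model before.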
That's weird, since neither the model nor the machine has changed (well, apart from some regular system updates, but the GPU and CUDA drivers, for instance, are still the same) since the following issue, where the model would run in the end: #4417
But one significant point is that the interface in 0.5.15 seems slightly different. In particular, I can't find the thread screen's model configuration menu where I could fiddle with the GPU layers and context length parameters to eventually get the model to load (as described in the related issue above), because no model is listed there anymore ("Select a model" shows an empty list). It is as if some new feature were blocking the workaround I used before: does 0.5.15 perhaps remove from the list of available models all those that don't meet the RAM requirements defined by the Jan developers?
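To confirm that the model itself still loads with those two knobs turned down, here is a sketch that drives the same GGUF through llama-cpp-python (Jan's engine is llama.cpp-based) outside of Jan; the path and values are illustrative:

```python
# Reproduce the old workaround outside Jan: load the GGUF directly
# with llama-cpp-python, explicitly reducing the GPU layer count and
# the context length, the two knobs the old Jan UI exposed.
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="Llama-3.2-1B-Instruct-Q8_0.gguf",  # adjust to your path
    n_gpu_layers=10,  # offload fewer layers -> less VRAM needed
    n_ctx=2048,       # smaller context -> smaller KV cache
)
out = llm("Say hello in one word.", max_tokens=8)
print(out["choices"][0]["text"])
```

If this loads and generates, the GGUF and the drivers are fine and the regression is in Jan's new model gating rather than in the model or the machine.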
I've tried importing the model again into Jan (choosing the symlink option) via the Hub screen, but it doesn't seem to change anything.
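A quick way to verify what the symlink import should have left behind; the models-folder layout below is an assumption about Jan's data directory, not a documented path:

```python
# Check that the "symlink" import produced a link inside Jan's models
# folder pointing at the original GGUF, and that the link resolves.
import os

src = os.path.expanduser("~/models/Llama-3.2-1B-Instruct-Q8_0.gguf")  # original file
dst = os.path.expanduser(
    "~/jan/models/llama3.2-1b-instruct/Llama-3.2-1B-Instruct-Q8_0.gguf"
)  # assumed import destination

if os.path.islink(dst):
    print("link target:", os.readlink(dst))
    print("target resolves:", os.path.exists(dst))  # follows the link
else:
    print("not a symlink (or missing):", dst)
print("source exists:", os.path.exists(src))
```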
Steps to Reproduce
No response
Screenshots / Logs
When starting Jan, app.log shows the usual system description as well as the following lines:
[app.log excerpt]
cortex.log is empty.

What is your OS?
Comments
Regarding these logs: I am seeing similar error messages whenever I try to access a remote Ollama server via Jan. I am on Ubuntu 24.04. These errors prevent me from using any remote Ollama models; local models still work, however.