
Can't select model in Obsidian Plugin dropdown #53

Open
danraymond opened this issue Feb 16, 2024 · 18 comments

Comments

@danraymond

I am running a Windows instance of Ollama and it works with other Ollama Obsidian plugins. With BMO, though, it won't show me any models to select in the dropdown list; it's empty. The interface appears in Obsidian in the right pane, but it's not responding at all.

I am assuming it's because I can't select a model? What can I do?

@keriati
Contributor

keriati commented Feb 16, 2024

@danraymond at the moment Ollama is missing support for model listing (at least when using the OpenAI-compatible connection).
A pull request is already open for it here: ollama/ollama#2476
If the maintainers accept this PR, Ollama will also start to work with the BMO plugin over the normal API connection.
This fix might then also be relevant for BMO: #51

@danraymond
Author

Is there a way to get it to work in Obsidian until the next update?

@twalderman

I had a similar problem. I removed the plugin folder after I uninstalled, then reinstalled, and it worked afterwards. I also found that if I had filled in a REST API entry (not OAI), it would make the other models disappear.

@longy2k
Owner

longy2k commented Feb 26, 2024

v1.8.8

@danraymond - Let me know if keriati's PR resolves this issue.

Thanks

@danraymond
Author

The latest update responds with a "model not found" in the chat box. However, I still can't set the model in the plugin parameters; is that the issue? How then do I set the model? Ollama is happily running for other plugins.

@longy2k
Owner

longy2k commented Feb 26, 2024

@danraymond

Hmm, I'm not sure if fetching via the Windows instance of Ollama is different from the Linux/macOS version.

I will set up Ollama on a Windows machine to see if I can replicate the same issue you are facing.

@danraymond
Author

Hi longy2k,
How have you progressed, please? With the latest update of the plugin, I still can't load a model with the Ollama Windows instance.

@longy2k
Owner

longy2k commented Mar 11, 2024

Hi @danraymond,

I just did a fresh install for Obsidian/Ollama on Windows 11 and was able to get it working.

There are two potential issues that I can think of:

The first is that data.json does not have the right data structure, which may cause an error. In this case, try creating a new Obsidian vault, downloading BMO Chatbot, and inserting http://localhost:11434 into the OLLAMA REST API URL field after running the Ollama server. If BMO Chatbot works with the new vault, I would recommend deleting data.json from your original vault and restarting the plugin (see the sketch after the steps below):

  1. Go to Community Settings and click on the folder's icon:

     (screenshot: the Community plugins settings with the folder icon highlighted)
  2. Close Obsidian completely.

  3. Find the bmo-chatbot folder and delete data.json.

  4. Restart Obsidian.
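
Equivalently, from a terminal (a minimal sketch; close Obsidian first, and substitute your own vault path for the placeholder):

    # BMO Chatbot stores its settings in the vault's plugin folder;
    # deleting data.json lets the plugin regenerate default settings on the next launch.
    rm "<path-to-your-vault>/.obsidian/plugins/bmo-chatbot/data.json"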

The second potential issue is that the Ollama server is not set up properly.

I know you mentioned that your Ollama server is working with other plugins, but these are the only two issues I can think of at the moment. Please let me know what you have tried and maybe we can find other ways to resolve this :)
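
Either way, a quick sanity check (assuming the default Ollama port 11434) is to ask the server for its model list directly; if this returns an empty list or no response, the problem is on the Ollama side rather than in BMO:

    # Lists the models the local Ollama server knows about
    # (curl ships with Windows 10/11 as well as macOS/Linux)
    curl http://localhost:11434/api/tags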

@longy2k
Owner

longy2k commented May 23, 2024

v2.1.0

I also ran into the same issue of not being able to select a model in the dropdown at times.

I have added a reload button next to the dropdown under 'General > Model'; let me know if this resolves your issue.

Thanks

@Rvnd0m

Rvnd0m commented May 26, 2024

Hi Longy2k,
First of all, thanks for your time.
I've pressed the reload button to try to show the models (currently I run Mistral, Llama3, and Llama3:70b on my Ollama), but nothing changes... there isn't any model in the dropdown list...

@longy2k
Owner

longy2k commented May 26, 2024

@Rvnd0m

Please update to v2.1.1 and make sure your Ollama server handles the CORS restriction. You can follow these instructions here: https://github.com/longy2k/obsidian-bmo-chatbot/wiki/How-to-setup-with-Ollama

Let me know if you got it working, thanks!

@Rvnd0m

Rvnd0m commented May 26, 2024

Hi! I am running Ollama on Pop!_OS (Ubuntu-based), and I read that your tutorial is for macOS CORS. Is it the same for Linux? (I wouldn't want to break something...)

Regards,

@longy2k
Owner

longy2k commented May 26, 2024

@Rvnd0m

Option 2 might be the easiest:

  1. Open a terminal and run OLLAMA_ORIGINS="app://obsidian.md*" OLLAMA_HOST="127.0.0.1:11435" ollama serve. This will start a new server that bypasses the CORS restriction (see the sketch after these steps).
  2. Go to 'BMO Chatbot Settings > Ollama Connection > OLLAMA REST API URL' and insert Ollama's URL: http://localhost:11435.
  3. Press the reload button next to the model dropdown in 'General > Model' if necessary to reload all known models.
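
The same steps as shell commands, for copy-pasting (a sketch; 11435 is just the example port from above):

    # Start a second Ollama server that accepts requests from Obsidian
    OLLAMA_ORIGINS="app://obsidian.md*" OLLAMA_HOST="127.0.0.1:11435" ollama serve

    # Then, in Obsidian:
    # BMO Chatbot Settings > Ollama Connection > OLLAMA REST API URL = http://localhost:11435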

If you run into port conflicts (e.g. "Address already in use"):

  1. Open a terminal and run sudo lsof -i :PORT_NUMBER, where :PORT_NUMBER can be :11435 or any other port that is causing the conflict.
  2. Read the table and locate the "PID" column.
  3. Run sudo kill -9 PID, where PID is the number listed under that column (see the sketch after these steps).
  4. Then repeat the steps in option 2. (If all else fails, just restart your computer and do option 2 again.)
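
Put together (a sketch; the PID shown is a placeholder for whatever lsof reports on your machine):

    # Find whatever process is holding the port
    sudo lsof -i :11435

    # Kill it by PID (replace 12345 with the PID printed by lsof)
    sudo kill -9 12345

    # Then start the server again as in option 2
    OLLAMA_ORIGINS="app://obsidian.md*" OLLAMA_HOST="127.0.0.1:11435" ollama serve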

@Rvnd0m

Rvnd0m commented May 27, 2024

Hi longy2k.
I did Option 2 and the PID kill, but it still doesn't show any model; the "models reloaded" message appears in the corner, yet the dropdown is empty. I also rebooted the machine, by the way.

@longy2k
Owner

longy2k commented May 27, 2024

For option 2, is the server successfully running?

Also check if you are on the latest Ollama version (v0.1.38).

If you are on the latest Ollama version, the Ollama server is running successfully, and no models are loaded, try uninstalling and re-installing the plugin.

@Rvnd0m

Rvnd0m commented May 29, 2024

Hi Longy2k.

The server is running properly, as I can chat with LLMs via Open WebUI.

My version was .37, but I've just updated it to the latest (.39).

I have also uninstalled and reinstalled the plugin, but it still doesn't work...

I'm sorry for the time you are spending on this... I don't know why it doesn't show the models in the dropdown despite the "Models reloaded" message...

Regards,

@Philosophist

Philosophist commented Jun 2, 2024

I'm having the same problem, using the WSL versions of Stable Diffusion and Ollama.

@GitHubGenericUserName588

I'm having what looks to be the same issue. I inserted the CORS line on the remote (in-network) server where Ollama is running. I know (or think I know) that the server is accessible, since I can reach Open WebUI from my Windows machine. Nevertheless, when I plug in the Ollama REST API URL (http://ollama.server.IP.address:11434), I get no loaded models. This is regardless of whether Ollama has a model loaded at that moment or not.

I also followed the troubleshooting instructions here and here, with no luck.
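
One way to confirm that the Ollama API itself (and not just Open WebUI) is reachable from the Windows machine is to query its model list directly (a sketch; the hostname is the placeholder from above, and whether an Access-Control-Allow-Origin header appears in the -v output depends on the server's OLLAMA_ORIGINS setting):

    # Should return a JSON list of models; -v shows the response headers
    curl -v -H "Origin: app://obsidian.md" http://ollama.server.IP.address:11434/api/tags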
