FR: Support local server for embeddings #645
I'd love this; the embedded WASM models don't seem to saturate the CPU/GPU, so it takes ages...
Makes sense. Thanks for the feature request 😊🌴
@daaain @ArtificialAmateur While this isn't an ideal solution, I did manage to set it up. What I've done is essentially eliminate the checks in `main.js` so the embeddings can be served locally. This is a quick and dirty fix for those who'd rather handle the embeddings locally, and it's far from ideal, but it works really well for my use case. Enjoy!

Instructions

On `main.js`, before:

After:
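The patched request presumably ends up looking something like the sketch below. This is a hypothetical illustration, not the exact edit from the comment above: the base URL, port, and model name are assumptions based on LM Studio's defaults.

```javascript
// Hypothetical sketch of a local embedding call in the OpenAI wire format.
// The port (LM Studio's default) and the model name are assumptions,
// not the actual edit described in the comment above.

// Build an OpenAI-format embeddings request for a local server.
function buildEmbeddingRequest(baseUrl, model, inputs) {
  return {
    url: `${baseUrl.replace(/\/$/, "")}/v1/embeddings`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model, input: inputs }),
    },
  };
}

// Send the request and return one embedding vector per input string.
async function embedLocally(texts) {
  const { url, options } = buildEmbeddingRequest(
    "http://localhost:1234",        // LM Studio's default port (assumption)
    "local-embedding-model",        // placeholder model identifier
    texts
  );
  const res = await fetch(url, options);
  if (!res.ok) throw new Error(`Embedding server returned ${res.status}`);
  const json = await res.json();
  return json.data.map((d) => d.embedding);
}
```

With a server running, `await embedLocally(["some note chunk"])` should return an array containing one vector.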
@jagai thanks for sharing this 😊

PS: It will be easier to configure something like this without code in the future. 🌴
I have tried your code; however, during the embedding process, LM Studio shows an error.
I'll need a little more info on this, if possible. Could you share which embedding model you tried, along with the version of Smart Connections? I'll do my best to help.
@jagai I am using smart-connections version 2.2.79 (2.2.80 literally just got pushed, but it doesn't affect our discussion). My model is loaded in LM Studio, the Obsidian settings are configured, and main.js was edited exactly as you documented. I changed the tokens to 2048 in the object and the JSON, thought maybe it would help, but it didn't.

The embedding error is happening on certain files, but it's hard to figure out the issue, as I have lots of files and I couldn't reach a point where I got 0 errors yet.

EDIT: I have renamed the files, removed metadata, and cleaned the texts, removing everything that breaks JSON (,./*? etc.), and I'm still getting the same issue. So the issue is not due to the content of the notes.
@usernotnull that's cool, thanks for sharing 🌴
Could you try switching on LM Studio to |
@jagai unfortunately, same issue. I also notice the issue with any local embedding model; I went ahead and tested it on a sandbox vault, same issue.
@usernotnull I couldn't reproduce the errors on my end. I'm not entirely sure it's related to Obsidian or Smart Connections. It could be something to do with LM Studio, but I'm really not sure.
Which OS are you on? |
I'm on macOS Sequoia, using Obsidian on my MacBook Air M1... It would be even more difficult for me to help, as I've never tried running Obsidian or LM Studio on Windows, to be honest.
@usernotnull A long shot, but since you're on Windows, perhaps giving the mixedbread-ai/mxbai-embed-large-v1 model a shot might yield better results? |
@usernotnull I've managed to narrow this down to LM Studio 0.3.3. For some reason it causes the models to fail embedding. Tested on LM Studio 0.3.2 and Smart Connections 2.2.81. You can find LM Studio 0.3.2 at the bottom of the download page (https://lmstudio.ai/download). Let me know if this works 😃
You did it 🏆 Any idea if LM Studio is aware of this issue? |
Glad it works! Woohoo 🥳 |
The LM Studio issue has been resolved. |
Would anyone who has this working in the latest SmartPlugins and LM Studio care to share their settings, please?
@davedawkins I haven't tested LM Studio in a while; it might need a custom adapter if it doesn't strictly follow the OpenAI API format. I'd expect the API key could be left empty if you haven't configured any API key in LM Studio. If it's not working, screenshot any errors that appear in the developer console logs and I'll give them a look 🌴
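The shape of such a custom adapter is small: translate whatever a non-OpenAI server returns into the OpenAI response shape. A hypothetical sketch follows; the function name is illustrative and this is not Smart Connections' actual adapter API. The native shape assumed here (`{ embedding: [...] }` per request) is what Ollama's native `POST /api/embeddings` endpoint returns, to the best of my knowledge.

```javascript
// Hypothetical adapter sketch: normalize per-request native embedding
// responses (assumed shape: { embedding: [...] }, as from Ollama's native
// /api/embeddings) into the OpenAI embeddings response format.
// Not Smart Connections' actual adapter API.
function toOpenAIFormat(nativeResponses, model) {
  return {
    object: "list",
    model,
    data: nativeResponses.map((resp, index) => ({
      object: "embedding",
      index,
      embedding: resp.embedding,
    })),
  };
}
```

A caller that batched several native requests could then hand the normalized object to any code path that already understands the OpenAI format.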
Thanks @brianpetro - to that end I have just logged an issue related to a "Model not set" error. My guess is that the OpenAI configuration is being unnecessarily validated (and failing) but I'm not sure. |
@davedawkins are you on the latest version? I thought I already fixed that issue in the last release. If it's still happening on the latest release, a screenshot of the error would be helpful 🌴 |
@davedawkins never mind, I just saw the other issue #931 🌴 |
Jumping off of #302: like the local server options for Smart Chat, similar work can be done for embeddings.

The OpenAI-format endpoint (which LM Studio and Ollama both support) is `/v1/embeddings`.
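For reference, the exchange on that endpoint looks roughly like the sketch below. The port numbers and model name are illustrative assumptions, not values from this thread.

```javascript
// Sketch of the OpenAI-format embeddings exchange a local server would
// speak (LM Studio commonly listens on port 1234, Ollama on 11434; both
// ports and the model name here are illustrative assumptions).

// Request body for POST http://localhost:1234/v1/embeddings
const request = {
  model: "some-local-embedding-model", // whatever model the server has loaded
  input: ["first note chunk", "second note chunk"],
};

// Response shape: one vector per input, in order.
const exampleResponse = {
  object: "list",
  data: [
    { object: "embedding", index: 0, embedding: [0.01, -0.02 /* ... */] },
    { object: "embedding", index: 1, embedding: [0.03, 0.04 /* ... */] },
  ],
  model: "some-local-embedding-model",
  usage: { prompt_tokens: 8, total_tokens: 8 },
};
```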