feat(LLM/providers): Adding oLlama as LLM Provider #968
This pull request introduces several changes to support multiple LLM providers, specifically OpenAI and Ollama, and to strengthen error handling and schema normalization. The most important changes are the addition of new environment variables, the renaming of functions and imports to be provider-agnostic, and the implementation of Ollama-specific embedding and completion functions.
**Support for multiple LLM providers:**

- `apps/api/.env.example`: Added new environment variables `LLM_PROVIDER`, `OLLAMA_URL`, `MODEL_NAME`, and `OLLAMA_EMBEDDING_MODEL` to configure the LLM provider and model (see the example configuration under "Example using Ollama" below).
- `apps/api/src/lib/ranker.ts`: Introduced the `LLMProvider` type and the functions `getOpenAIEmbedding` and `getOllamaEmbedding` to handle embeddings for each provider. Updated `getEmbedding` to switch between providers based on the environment variable, as sketched below.
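A minimal sketch of what that switch could look like (function names follow the PR description; the bodies, signatures, and the OpenAI embedding model are assumptions, not the actual diff):

```typescript
import OpenAI from "openai";

type LLMProvider = "openai" | "ollama";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function getOpenAIEmbedding(text: string): Promise<number[]> {
  // Model name is an assumption for illustration.
  const res = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: text,
  });
  return res.data[0].embedding;
}

async function getOllamaEmbedding(text: string): Promise<number[]> {
  // Ollama serves embeddings at /api/embeddings on the local server.
  const res = await fetch(`${process.env.OLLAMA_URL}/api/embeddings`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: process.env.OLLAMA_EMBEDDING_MODEL,
      prompt: text,
    }),
  });
  const data = await res.json();
  return data.embedding;
}

async function getEmbedding(text: string): Promise<number[]> {
  // Dispatch on the new LLM_PROVIDER environment variable.
  const provider = (process.env.LLM_PROVIDER ?? "openai") as LLMProvider;
  return provider === "ollama"
    ? getOllamaEmbedding(text)
    : getOpenAIEmbedding(text);
}
```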
**Refactoring and renaming for LLM provider generalization:**

- `apps/api/src/controllers/v1/extract.ts`: Replaced `generateOpenAICompletions` with `generateLLMCompletions` and updated the relevant function calls to reflect the new name.
- `apps/api/src/scraper/scrapeURL/transformers/llmExtract.ts`: Renamed `generateOpenAICompletions` to `generateLLMCompletions` and added `generateOllamaCompletion` to handle completions for Ollama. Updated the completion process to switch between LLM providers, as sketched below.
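A rough sketch of the Ollama completion path and the provider dispatch (hypothetical: `generateOpenAICompletion` stands in for the existing OpenAI path, and the exact prompt and option handling in the PR will differ):

```typescript
// Stand-in declaration for the existing OpenAI completion path
// (not the real implementation from this PR).
declare function generateOpenAICompletion(
  systemPrompt: string,
  userPrompt: string,
): Promise<string>;

async function generateOllamaCompletion(
  systemPrompt: string,
  userPrompt: string,
): Promise<string> {
  const res = await fetch(`${process.env.OLLAMA_URL}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: process.env.MODEL_NAME,
      messages: [
        { role: "system", content: systemPrompt },
        { role: "user", content: userPrompt },
      ],
      stream: false, // return one JSON object instead of a stream
      format: "json", // ask Ollama for JSON-formatted output
    }),
  });
  if (!res.ok) {
    throw new Error(`Ollama request failed: ${res.status} ${res.statusText}`);
  }
  const data = await res.json();
  return data.message.content;
}

async function generateLLMCompletions(
  systemPrompt: string,
  userPrompt: string,
): Promise<string> {
  if (process.env.LLM_PROVIDER === "ollama") {
    return generateOllamaCompletion(systemPrompt, userPrompt);
  }
  return generateOpenAICompletion(systemPrompt, userPrompt);
}
```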
**Enhanced error handling and schema normalization:**

- `apps/api/src/scraper/scrapeURL/transformers/llmExtract.ts`: Improved schema normalization to handle `null` and `undefined` values more robustly. Added detailed error handling for LLM provider responses and parsing (see the sketch below).

These changes collectively enhance the flexibility of the application to support multiple LLM providers and improve the robustness of the embedding and completion processes.
Example using Ollama:
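A hypothetical end-to-end setup, assuming a local Ollama server on its default port (11434) and a self-hosted API instance on port 3002 exposing the `/v1/extract` route touched by this PR; model names, ports, and the request body are illustrative, not taken from the diff:

```bash
# .env — values are illustrative
LLM_PROVIDER=ollama
OLLAMA_URL=http://localhost:11434
MODEL_NAME=llama3.1
OLLAMA_EMBEDDING_MODEL=nomic-embed-text

# Hypothetical extraction request against the self-hosted API
curl -X POST http://localhost:3002/v1/extract \
  -H 'Content-Type: application/json' \
  -d '{
    "urls": ["https://example.com"],
    "prompt": "Extract the company name and a one-sentence summary."
  }'
```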
Output:
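Hypothetically, a successful response might take a shape like the following (illustrative only, not captured from a real run):

```json
{
  "success": true,
  "data": {
    "company_name": "Example Corp",
    "summary": "Example Corp builds example products."
  }
}
```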
⚠️ **Warning:** So far I have only confirmed that Ollama functions correctly on my local setup.