This is great and looks super powerful, but the cost of so many tokens at scale is going to be huge for heavy use. One great way to cut costs is to use local models + local search (I'll save search for another time), but how can I use my local Ollama models with this?
Hi! I have a simpler version below that uses Ollama. The main differences are that 1) it skips the planning phase and 2) it doesn't perform section writing in parallel. https://github.com/langchain-ai/ollama-deep-researcher
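In case it helps while you look at that repo: since Ollama models expose the standard LangChain chat interface, you can generally swap one into a graph node wherever a hosted model is used. Here's a minimal sketch, assuming `langchain-ollama` is installed and you've already pulled a model locally (the model name `llama3.1` and the prompt are just examples, not defaults from this repo):

```python
# pip install langchain-ollama
from langchain_ollama import ChatOllama

# Point the chat model at a locally running Ollama server.
# Model name is an assumption -- use whatever you've pulled, e.g. `ollama pull llama3.1`.
llm = ChatOllama(
    model="llama3.1",
    base_url="http://localhost:11434",  # Ollama's default local endpoint
    temperature=0,
)

# Same interface as any other LangChain chat model, so it can be dropped
# into a node in place of a hosted model.
response = llm.invoke("Summarize the trade-offs of local vs. hosted LLMs.")
print(response.content)
```

Whether the full pipeline works well will depend on how reliably your local model handles structured output for the planning step, so results may vary by model.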