XAI support, Hugging Face embeddings, and multi-language support #258
base: main
Conversation
Summary of Changes

Hello @jeffreychuuu, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances the system's flexibility by integrating xAI as a new Large Language Model provider, allowing users to leverage Grok models for various agent functionalities. A key aspect of this integration is the introduction of a robust local embedding mechanism using Hugging Face's SentenceTransformer models.
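The local embedding path can be illustrated with a brief sketch. This is a minimal example, assuming only the `sentence-transformers` package and the `all-MiniLM-L6-v2` model named in the diff below; the sample text is hypothetical:

```python
# Minimal sketch of the local Hugging Face embedding path this PR adds.
# The model name comes from the diff below; the sample text is made up.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# encode() returns a numpy array; .tolist() converts it to the plain
# Python list the rest of the memory code works with.
vector = model.encode("AAPL earnings beat expectations").tolist()
print(len(vector))  # all-MiniLM-L6-v2 produces 384-dimensional vectors
```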
Code Review
This pull request introduces XAI support and leverages Hugging Face for embeddings. It modifies .gitignore, cli/utils.py, requirements.txt, tradingagents/agents/utils/memory.py, and tradingagents/graph/trading_graph.py. The primary changes involve adding XAI as a provider option, incorporating Hugging Face's SentenceTransformer for local embeddings when using XAI, and updating dependencies.
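For context, wiring in xAI as an OpenAI-compatible provider typically looks like the sketch below. The base URL and model name here are assumptions for illustration, not values taken from this diff:

```python
from openai import OpenAI

# Hypothetical xAI client setup: xAI exposes an OpenAI-compatible API,
# so the same OpenAI client class works with a different base_url.
client = OpenAI(
    base_url="https://api.x.ai/v1",  # assumed xAI endpoint
    api_key="YOUR_XAI_API_KEY",      # placeholder; the PR reads this from config
)

response = client.chat.completions.create(
    model="grok-beta",  # hypothetical model name
    messages=[{"role": "user", "content": "Summarize today's market moves."}],
)
print(response.choices[0].message.content)
```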
tradingagents/agents/utils/memory.py
Outdated
```python
def __init__(self, name, config):
    if config["backend_url"] == "http://localhost:11434/v1":
        self.client = OpenAI(base_url=config["backend_url"], api_key=config.get("api_key"))
```
It's crucial to handle the absence of api_key gracefully. If api_key is not provided in the config, the code should either raise an exception or default to a secure alternative to prevent unexpected behavior or security vulnerabilities.
Consider raising a ValueError if the api_key is missing and required, or using a default key with a warning message.
Suggested change:

```python
def __init__(self, name, config):
    if config["backend_url"] == "http://localhost:11434/v1":
        if not config.get("api_key"):  # Check if api_key exists
            raise ValueError("API key is required but not provided in the configuration.")
        self.client = OpenAI(base_url=config["backend_url"], api_key=config.get("api_key"))
```
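For illustration, here is how a caller would hit that guard. This is a hypothetical example: the class name `FinancialSituationMemory` is assumed from the PR's memory.py, and the config is made up:

```python
# Hypothetical usage: constructing the memory without an api_key should
# now fail fast with the suggested ValueError.
from tradingagents.agents.utils.memory import FinancialSituationMemory  # assumed class name

config = {"backend_url": "http://localhost:11434/v1"}  # "api_key" deliberately omitted

try:
    memory = FinancialSituationMemory("trader_memory", config)
except ValueError as e:
    print(e)  # API key is required but not provided in the configuration.
```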
tradingagents/agents/utils/memory.py
Outdated
```python
# Decide the embedding strategy based on the backend
if config["llm_provider"] == "ollama":
    self.embedding = "nomic-embed-text"
    self.embedding_client = OpenAI(base_url="http://localhost:11434/v1")
    self.use_local_embedding = False
elif config["llm_provider"] == "xai":
    # Grok - use a local Hugging Face model
    self.embedding_model = SentenceTransformer('all-MiniLM-L6-v2')
    self.use_local_embedding = True
else:
    self.embedding = "text-embedding-3-small"
    self.client = OpenAI(base_url=config["backend_url"])
    self.embedding_client = self.client
    self.use_local_embedding = False
```
The logic for determining the embedding strategy based on the llm_provider could be simplified and made more readable. Consider using a dictionary to map providers to their corresponding embedding configurations. This approach can reduce code duplication and improve maintainability.
Also, the use_local_embedding variable is set to False in the ollama case, but it seems like it should be True if you intend to use local embeddings for Ollama as well.
```python
embedding_strategies = {
    "ollama": {
        "embedding": "nomic-embed-text",
        "use_local_embedding": True,  # Corrected to True
    },
    "xai": {
        "embedding_model": SentenceTransformer('all-MiniLM-L6-v2'),
        "use_local_embedding": True,
    },
    "default": {
        "embedding": "text-embedding-3-small",
        "use_local_embedding": False,
    },
}

provider = config["llm_provider"].lower()
strategy = embedding_strategies.get(provider, embedding_strategies["default"])

if provider == "xai":
    self.embedding_model = strategy["embedding_model"]
else:
    self.embedding = strategy["embedding"]
    self.embedding_client = OpenAI(base_url="http://localhost:11434/v1") if provider == "ollama" else self.client
```
```python
self.use_local_embedding = strategy["use_local_embedding"]
```

tradingagents/agents/utils/memory.py

```python
if self.use_local_embedding:
    # Use a local Hugging Face model
    return self.embedding_model.encode(text).tolist()
```
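Putting the branch together, the resulting helper plausibly looks like the sketch below. The local branch matches the diff; the remote branch is an assumption based on the pre-existing OpenAI embedding client, not code shown in this PR:

```python
def get_embedding(self, text):
    """Return an embedding vector for text, locally or via the API."""
    if self.use_local_embedding:
        # Local path: Hugging Face model (the xAI/Grok configuration in this PR)
        return self.embedding_model.encode(text).tolist()
    # Remote path (assumed): OpenAI-compatible embeddings endpoint
    response = self.embedding_client.embeddings.create(
        model=self.embedding, input=text
    )
    return response.data[0].embedding
```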
Consider adding a try-except block around the self.embedding_model.encode(text) call to handle potential exceptions during the encoding process. This will prevent the application from crashing if the encoding fails and allow for more graceful error handling.
Suggested change:

```python
if self.use_local_embedding:
    # Use a local Hugging Face model
    try:
        return self.embedding_model.encode(text).tolist()
    except Exception as e:
        print(f"Error encoding text: {e}")
        return None  # Or some other appropriate error handling
```
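Note that returning None shifts the burden to callers, which must then tolerate a missing vector. A hypothetical caller-side guard (the names here are illustrative, not taken from the diff):

```python
# Hypothetical caller-side handling if get_embedding may now return None.
vector = memory.get_embedding(situation_text)
if vector is None:
    matches = []  # degrade gracefully: skip memory recall this round
else:
    matches = memory.get_memories(situation_text, n_matches=2)  # assumed helper
```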
…OpenAI when using other LLM providers
This reverts commit 4a0b59d.