adding mistral llm for text tasks #80
base: main
Conversation
```python
def _format_messages(self, messages: list):
    formatted_messages = []
    # if messages[0]["role"] == RoleTypes.system:
```
Please remove commented code.
```python
    # return formatted_tools
```

```python
def chat_completions(
    self, messages: list, tools: list = [], stop=None, response_format=None
```
Please add response_format support.
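One way to honor this request is to forward `response_format` to the underlying API call only when it is set. A minimal sketch — the helper name and payload shape are hypothetical (modeled on the JSON-mode field used by OpenAI-style chat APIs), not the PR's actual code:

```python
def build_chat_request(model: str, messages: list, response_format: dict = None) -> dict:
    """Assemble a chat-completion request payload.

    `response_format` is forwarded only when provided, e.g.
    {"type": "json_object"} to request JSON output.
    """
    payload = {"model": model, "messages": messages}
    if response_format is not None:
        payload["response_format"] = response_format
    return payload
```

Declaring the parameter without threading it through (as in the current diff) silently ignores the caller's request, which is the reviewer's concern.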
```python
config = MistralAIConfig()
super().__init__(config=config)
try:
    import mistralai
```
Please add the mistralai library to the list of dependencies in the requirements.txt file.
Hi, this is a great value add for testing with Mistral. @sankn123 @ankit-v2-3 can you also test all the agents with the latest Mistral model? Let's use this thread to document the experience of the different agents with Mistral models.
Walkthrough

The pull request introduces support for the Mistral AI model in the backend system. This involves adding a new LLM type to the constants, updating the default LLM selection logic, and creating a new implementation file for Mistral AI. The changes extend the existing LLM framework to include Mistral as a configurable and usable language model, with proper configuration, message formatting, and API interaction capabilities.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant GetDefaultLLM
    participant MistralAI
    User->>GetDefaultLLM: Request default LLM
    GetDefaultLLM->>GetDefaultLLM: Check environment variables
    alt Mistral selected
        GetDefaultLLM->>MistralAI: Initialize Mistral
        MistralAI-->>GetDefaultLLM: Return Mistral instance
    else Other LLM selected
        GetDefaultLLM->>GetDefaultLLM: Select alternative LLM
    end
    GetDefaultLLM-->>User: Return configured LLM
```
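The selection flow in the diagram can be sketched in plain Python. The function and return values below are illustrative only — the real project returns configured LLM instances, not strings:

```python
import os

def get_default_llm_name() -> str:
    """Pick an LLM backend from whichever API key is present.

    Hypothetical sketch of the env-var check shown in the diagram.
    """
    if os.getenv("MISTRAL_API_KEY"):
        return "mistral"
    if os.getenv("ANTHROPIC_API_KEY"):
        return "anthropic"
    return "openai"  # fallback default
```

Checking keys in a fixed order keeps the selection deterministic when more than one key is set.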
Actionable comments posted: 0
🧹 Nitpick comments (3)
backend/director/llm/mistral.py (2)
58-58: Raise original exception with additional context
In the except block, consider using `raise e from None` or `raise e from err` to help distinguish errors triggered by the original exception from errors that occur while handling it.

🧰 Tools
🪛 Ruff (0.8.2)
58-58: Within an `except` clause, raise exceptions with `raise ... from err` or `raise ... from None` to distinguish them from errors in exception handling (B904)
162-163: Avoid using mutable default arguments
Using a list as a default argument can lead to unexpected behavior from mutable objects. Replace `tools: list = []` with `tools: Optional[list] = None` and initialize `tools` within the function.

🧰 Tools
🪛 Ruff (0.8.2)
162-162: Do not use mutable data structures for argument defaults. Replace with `None`; initialize within function (B006)
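The hazard and the fix, on a stripped-down example (names are illustrative, not the PR's actual signature):

```python
from typing import Optional

def chat_bad(message: str, tools: list = []) -> list:
    # The single default list is created once and shared across calls,
    # so it keeps accumulating entries.
    tools.append(message)
    return tools

def chat_good(message: str, tools: Optional[list] = None) -> list:
    # A fresh list is created on each call when the caller passes nothing.
    if tools is None:
        tools = []
    tools.append(message)
    return tools
```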
backend/director/llm/__init__.py (1)
16-16: Use direct type conversion instead of ternary
Use `mistral = bool(os.getenv("MISTRAL_API_KEY"))` instead of `True if os.getenv("MISTRAL_API_KEY") else False` for conciseness and clarity.

```diff
- mistral = True if os.getenv("MISTRAL_API_KEY") else False
+ mistral = bool(os.getenv("MISTRAL_API_KEY"))
```

🧰 Tools
🪛 Ruff (0.8.2)
16-16: Use `bool(...)` instead of `True if ... else False` (SIM210)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- backend/director/constants.py (2 hunks)
- backend/director/llm/__init__.py (2 hunks)
- backend/director/llm/mistral.py (1 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
backend/director/llm/__init__.py
16-16: Use `bool(...)` instead of `True if ... else False`. Replace with `bool(...)` (SIM210)
backend/director/llm/mistral.py
58-58: Within an `except` clause, raise exceptions with `raise ... from err` or `raise ... from None` to distinguish them from errors in exception handling (B904)
162-162: Do not use mutable data structures for argument defaults. Replace with `None`; initialize within function (B006)
🔇 Additional comments (7)
backend/director/llm/mistral.py (3)
56-56: Add the 'mistralai' dependency into requirements.txt
This comment has already been raised, but it remains relevant for this commit. Please ensure that the 'mistralai' library is listed in requirements.txt so it can be installed as a project dependency.
65-67: Remove or clarify commented-out code
As previously requested, please remove or provide justification for the commented-out lines. Keeping removed logic in comments makes the code less maintainable.
162-162: Implement 'response_format' usage
A prior review comment requested adding 'response_format' support. Currently, the parameter is declared but not used. If you intend to return the response in different formats, be sure to incorporate 'response_format' logic.

🧰 Tools
🪛 Ruff (0.8.2)
162-162: Do not use mutable data structures for argument defaults. Replace with `None`; initialize within function (B006)
backend/director/constants.py (2)
22-22: Enum addition looks good
The new 'MISTRAL' member for 'LLMType' aligns with the Mistral integration.

31-31: Enum addition looks good
The new 'MISTRAL_' environment prefix is consistent with convention and matches the usage in the config class.
backend/director/llm/__init__.py (2)
24-25: Mistral LLM integration
This conditional branching correctly returns an instance of MistralAI. Ensure coverage in your unit tests to confirm the default environment logic.

7-7: Validate that the 'mistralai' import does not cause circular dependencies
While importing 'MistralAI' works here, ensure that importing this module doesn't create circular dependencies in other parts of the codebase.
✅ Verification successful
Let me analyze the imports further to ensure there are no circular dependencies.
No circular dependencies detected with the MistralAI import
The import structure is clean and hierarchical:
- backend/director/llm/mistral.py only imports from base modules and constants
- backend/director/llm/__init__.py imports the concrete implementation MistralAI from the mistral module
- No circular references are found between these modules
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Attempt to find import loops that might involve 'mistralAI'
# We'll look for import statements referencing 'mistral' or 'mistralai'
ast-grep --pattern $'import mistral$_'
rg -A 5 mistral
```

Length of output: 2432

Script:

```shell
#!/bin/bash
# Check for any imports of director.llm or director.llm.* in mistral.py
rg -l "from director\.llm" backend/director/llm/mistral.py
# Check the content of mistral.py to see all its imports
cat backend/director/llm/mistral.py
```

Length of output: 7118
This pull request introduces support for integrating an open-source free LLM by Mistral AI into the project. The integration ensures that users can leverage the capabilities of the director without relying on paid API keys from providers like OpenAI, Anthropic, or other commercial models.
TL;DR:
Added support for Mistral AI LLM.
Summary by CodeRabbit
- New Features
- Improvements
- Bug Fixes