
Fix typo model_weak
KennyDizi committed Dec 8, 2024
1 parent 88a93bd commit e3d779c
Showing 4 changed files with 14 additions and 14 deletions.
22 changes: 11 additions & 11 deletions docs/docs/usage-guide/changing_a_model.md
@@ -5,7 +5,7 @@ To use a different model than the default (GPT-4), you need to edit in the [conf
```
[config]
model = "..."
model_week = "..."
model_weak = "..."
fallback_models = ["..."]
```

@@ -28,7 +28,7 @@ and set in your configuration file:
```
[config]
model="" # the OpenAI model you've deployed on Azure (e.g. gpt-3.5-turbo)
model_week="" # the OpenAI model you've deployed on Azure (e.g. gpt-3.5-turbo)
model_weak="" # the OpenAI model you've deployed on Azure (e.g. gpt-3.5-turbo)
fallback_models=["..."] # the OpenAI model you've deployed on Azure (e.g. gpt-3.5-turbo)
```

@@ -52,7 +52,7 @@ MAX_TOKENS={
[config] # in configuration.toml
model = "ollama/llama2"
model_week = "ollama/llama2"
model_weak = "ollama/llama2"
fallback_models=["ollama/llama2"]
[ollama] # in .secrets.toml
@@ -76,7 +76,7 @@ MAX_TOKENS={
}
[config] # in configuration.toml
model = "huggingface/meta-llama/Llama-2-7b-chat-hf"
model_week = "huggingface/meta-llama/Llama-2-7b-chat-hf"
model_weak = "huggingface/meta-llama/Llama-2-7b-chat-hf"
fallback_models=["huggingface/meta-llama/Llama-2-7b-chat-hf"]
[huggingface] # in .secrets.toml
@@ -91,7 +91,7 @@ To use Llama2 model with Replicate, for example, set:
```
[config] # in configuration.toml
model = "replicate/llama-2-70b-chat:2c1608e18606fad2812020dc541930f2d0495ce32eee50074220b87300bc16e1"
model_week = "replicate/llama-2-70b-chat:2c1608e18606fad2812020dc541930f2d0495ce32eee50074220b87300bc16e1"
model_weak = "replicate/llama-2-70b-chat:2c1608e18606fad2812020dc541930f2d0495ce32eee50074220b87300bc16e1"
fallback_models=["replicate/llama-2-70b-chat:2c1608e18606fad2812020dc541930f2d0495ce32eee50074220b87300bc16e1"]
[replicate] # in .secrets.toml
key = ...
@@ -107,7 +107,7 @@ To use Llama3 model with Groq, for example, set:
```
[config] # in configuration.toml
model = "llama3-70b-8192"
model_week = "llama3-70b-8192"
model_weak = "llama3-70b-8192"
fallback_models = ["groq/llama3-70b-8192"]
[groq] # in .secrets.toml
key = ... # your Groq api key
@@ -121,7 +121,7 @@ To use Google's Vertex AI platform and its associated models (chat-bison/codecha
```
[config] # in configuration.toml
model = "vertex_ai/codechat-bison"
model_week = "vertex_ai/codechat-bison"
model_weak = "vertex_ai/codechat-bison"
fallback_models="vertex_ai/codechat-bison"
[vertexai] # in .secrets.toml
@@ -140,7 +140,7 @@ To use [Google AI Studio](https://aistudio.google.com/) models, set the relevant
```toml
[config] # in configuration.toml
model="google_ai_studio/gemini-1.5-flash"
model_week="google_ai_studio/gemini-1.5-flash"
model_weak="google_ai_studio/gemini-1.5-flash"
fallback_models=["google_ai_studio/gemini-1.5-flash"]

[google_ai_studio] # in .secrets.toml
@@ -156,7 +156,7 @@ To use Anthropic models, set the relevant models in the configuration section of
```
[config]
model="anthropic/claude-3-opus-20240229"
model_week="anthropic/claude-3-opus-20240229"
model_weak="anthropic/claude-3-opus-20240229"
fallback_models=["anthropic/claude-3-opus-20240229"]
```

@@ -173,7 +173,7 @@ To use Amazon Bedrock and its foundational models, add the below configuration:
```
[config] # in configuration.toml
model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0"
model_week="bedrock/anthropic.claude-3-sonnet-20240229-v1:0"
model_weak="bedrock/anthropic.claude-3-sonnet-20240229-v1:0"
fallback_models=["bedrock/anthropic.claude-v2:1"]
```

@@ -195,7 +195,7 @@ If the relevant model doesn't appear [here](https://github.com/Codium-ai/pr-agen
```
[config]
model="custom_model_name"
model_week="custom_model_name"
model_weak="custom_model_name"
fallback_models=["custom_model_name"]
```
(2) Set the maximal tokens for the model:
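The docs hunk above ends at step (2) of wiring up a custom model. As a hedged illustration only (the `MAX_TOKENS` mapping name comes from the hunk contexts earlier in this file; the entry value and where the mapping lives are assumptions, not part of this commit):

```python
# Hypothetical sketch of step (2): register a token limit for the custom model
# in the MAX_TOKENS mapping referenced above. The value 4096 is illustrative.
MAX_TOKENS = {
    "custom_model_name": 4096,  # must match the name used for model / model_weak
}
```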
2 changes: 1 addition & 1 deletion pr_agent/algo/pr_processing.py
@@ -355,7 +355,7 @@ async def retry_with_fallback_models(f: Callable, model_type: ModelType = ModelT

def _get_all_models(model_type: ModelType = ModelType.WEAK) -> List[str]:
    if model_type == ModelType.WEAK:
-        model = get_settings().config.model_week
+        model = get_settings().config.model_weak
    else:
        model = get_settings().config.model
    fallback_models = get_settings().config.fallback_models
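For context on the hunk above: the renamed key is what `_get_all_models` reads when a tool asks for the weak model. Below is a minimal, self-contained sketch of that selection logic; only the lines visible in the hunk are verbatim, while the `ModelType` member values, the `_Config` stand-in, and the string-to-list coercion of `fallback_models` are assumptions for illustration.

```python
from enum import Enum
from typing import List


class ModelType(Enum):
    # assumed members; only WEAK is visible in the diff
    REGULAR = "regular"
    WEAK = "weak"


class _Config:
    # hypothetical stand-in for get_settings().config, mirroring the
    # defaults in configuration.toml below
    model = "gpt-4o-2024-11-20"
    model_weak = "gpt-4o-mini-2024-07-18"  # renamed from model_week
    fallback_models = ["gpt-4o-2024-08-06"]


def get_all_models(config: _Config, model_type: ModelType = ModelType.WEAK) -> List[str]:
    # Pick the weak model for lightweight tasks, the full model otherwise,
    # then append the configured fallbacks.
    model = config.model_weak if model_type == ModelType.WEAK else config.model
    fallbacks = config.fallback_models
    if isinstance(fallbacks, str):  # assumed: tolerate a single-string setting
        fallbacks = [fallbacks]
    return [model] + fallbacks


print(get_all_models(_Config()))  # ['gpt-4o-mini-2024-07-18', 'gpt-4o-2024-08-06']
```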
2 changes: 1 addition & 1 deletion pr_agent/git_providers/utils.py
@@ -99,5 +99,5 @@ def set_claude_model():
"""
model_claude = "bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0"
get_settings().set('config.model', model_claude)
get_settings().set('config.model_week', model_claude)
get_settings().set('config.model_weak', model_claude)
get_settings().set('config.fallback_models', [model_claude])
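A short usage sketch of the updated helper; the `get_settings` import path is assumed from the repository layout, and the assertions simply restate the three `set` calls above.

```python
from pr_agent.config_loader import get_settings  # assumed import path
from pr_agent.git_providers.utils import set_claude_model

set_claude_model()
# After the call, all three settings point at the Bedrock Claude model,
# including the corrected weak-model key.
claude = "bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0"
assert get_settings().config.model == claude
assert get_settings().config.model_weak == claude
assert get_settings().config.fallback_models == [claude]
```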
2 changes: 1 addition & 1 deletion pr_agent/settings/configuration.toml
@@ -1,6 +1,6 @@
[config]
# models
model_week="gpt-4o-mini-2024-07-18"
model_weak="gpt-4o-mini-2024-07-18"
model="gpt-4o-2024-11-20"
fallback_models=["gpt-4o-2024-08-06"]
# CLI
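Because the default key itself is renamed here, a local configuration override still using the old spelling will be silently ignored. A small migration check, as a sketch (the file path is an assumption; `tomllib` is in the standard library from Python 3.11):

```python
import tomllib

# Look for the old misspelled key in a local override file.
with open("configuration.toml", "rb") as f:  # path is an assumption
    cfg = tomllib.load(f).get("config", {})

if "model_week" in cfg:
    print("rename model_week -> model_weak to keep this override active")
print("weak model:", cfg.get("model_weak"))
```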
