36 changes: 36 additions & 0 deletions docs/cody/capabilities/chat.mdx
@@ -41,6 +41,42 @@ When you have both a repository and files @-mentioned, Cody will search the repo

You can add new custom context by adding `@-mention` context chips to the chat. At any point, you can use `@-mention` a repository, file, line range, or symbol, to ask questions about your codebase. Cody will use this new context to generate contextually relevant code.

## Image upload

Cody supports uploading images to chat when using models with vision capabilities. This allows you to get help with visual content like screenshots, diagrams, UI mockups, or error messages.

### Models with vision support

To use image upload, you need to select a model that supports vision capabilities. Models with vision support include:

- Claude 3.7 Sonnet
- Claude Sonnet 4
- Claude Opus 4 and 4.1
- GPT-4o and GPT-4o-mini

<Callout type="note">While Gemini models support vision capabilities, Cody clients do not currently support image uploads to Gemini models.</Callout>

See the full list of [supported models](/cody/capabilities/supported-models) and their capabilities.

### Using image upload

You can upload images to Cody chat by:

- Dragging and dropping an image file into the chat input (in VS Code, you may need to hold the Shift key while dragging)
- Using the attach button to select an image from your file system
- Pasting an image from your clipboard

Once uploaded, you can ask Cody questions about the image, such as:

- "What's wrong with this error message?"
- "Explain what this architecture diagram shows"
- "Help me recreate this UI component"
- "What does this code screenshot do?"

### Availability

Image upload support varies by client. Check the [feature parity reference](/cody/clients/feature-reference#chat) to see which clients support image upload.

## LLM selection

Cody allows you to select the LLM you want to use for your chat, which is optimized for speed versus accuracy. Enterprise users with the new [model configuration](/cody/clients/model-configuration) can use the LLM selection dropdown to choose a chat model.
58 changes: 31 additions & 27 deletions docs/cody/capabilities/supported-models.mdx
@@ -6,36 +6,40 @@ Cody supports a variety of cutting-edge large language models for use in chat an

<Callout type="note">Starting with Sourcegraph Enterprise v5.6, it is even easier to add support for new models and providers. See [Model Configuration](/cody/enterprise/model-configuration) for more information.</Callout>

| **Provider** | **Model** | **Status** |
| :----------- | :-------------------------------------------------------------------------------------------------------------------------------------------- | :--------------- |
| OpenAI | [GPT-5](https://platform.openai.com/docs/models/gpt-5) | ✅ |
| OpenAI | [GPT-5-mini](https://platform.openai.com/docs/models/gpt-5-mini) | ✅ |
| OpenAI | [GPT-5-nano](https://platform.openai.com/docs/models/gpt-5-nano) | ✅ |
| OpenAI | [GPT-4-Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo#:~:text=TRAINING%20DATA-,gpt%2D4%2D0125%2Dpreview,-New%20GPT%2D4) | ✅ |
| OpenAI | [GPT-4o](https://platform.openai.com/docs/models#gpt-4o) | ✅ |
| OpenAI | [GPT-4o-mini](https://platform.openai.com/docs/models#gpt-4o-mini) | ✅ |
| OpenAI | [o3-mini-medium](https://openai.com/index/openai-o3-mini/) | ✅ (experimental) |
| OpenAI | [o3-mini-high](https://openai.com/index/openai-o3-mini/) | ✅ (experimental) |
| OpenAI | [o3](https://platform.openai.com/docs/models#o3) | ✅ |
| OpenAI | [o4-mini](https://platform.openai.com/docs/models/o4-mini) | ✅ |
| OpenAI | [GPT-4.1](https://platform.openai.com/docs/models/gpt-4.1) | ✅ |
| OpenAI | [GPT-4.1-mini](https://platform.openai.com/docs/models/gpt-4o-mini) | ✅ |
| OpenAI | [GPT-4.1-nano](https://platform.openai.com/docs/models/gpt-4.1-nano) | ✅ |
| Anthropic | [Claude 3.5 Haiku](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ |
| Anthropic | [Claude 3.7 Sonnet](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ |
| Anthropic | [Claude Sonnet 4](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ |
| Anthropic | [Claude Sonnet 4 w/ Thinking](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ |
| Anthropic | [Claude Opus 4.1](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ |
| Anthropic | [Claude Opus 4](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ |
| Anthropic | [Claude Opus 4 w/ Thinking](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ |
| Google | [Gemini 1.5 Pro](https://deepmind.google/technologies/gemini/pro/) | ✅ (beta) |
| Google | [Gemini 2.0 Flash](https://deepmind.google/technologies/gemini/flash/) | ✅ |
| Google | [Gemini 2.5 Pro](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-5-pro) | ✅ |
| Google | [Gemini 2.5 Flash](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-5-flash) | ✅ |
| **Provider** | **Model** | **Status** | **Vision Support** |
| :----------- | :-------------------------------------------------------------------------------------------------------------------------------------------- | :--------------- | :----------------- |
| OpenAI | [GPT-5](https://platform.openai.com/docs/models/gpt-5) | ✅ | ✅ |
| OpenAI | [GPT-5-mini](https://platform.openai.com/docs/models/gpt-5-mini) | ✅ | ✅ |
| OpenAI | [GPT-5-nano](https://platform.openai.com/docs/models/gpt-5-nano) | ✅ | ✅ |
| OpenAI | [GPT-4-Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo#:~:text=TRAINING%20DATA-,gpt%2D4%2D0125%2Dpreview,-New%20GPT%2D4) | ✅ | ❌ |
| OpenAI | [GPT-4o](https://platform.openai.com/docs/models#gpt-4o) | ✅ | ✅ |
| OpenAI | [GPT-4o-mini](https://platform.openai.com/docs/models#gpt-4o-mini) | ✅ | ✅ |
| OpenAI | [o3-mini-medium](https://openai.com/index/openai-o3-mini/) | ✅ (experimental) | ❌ |
| OpenAI | [o3-mini-high](https://openai.com/index/openai-o3-mini/) | ✅ (experimental) | ❌ |
| OpenAI | [o3](https://platform.openai.com/docs/models#o3) | ✅ | ❌ |
| OpenAI | [o4-mini](https://platform.openai.com/docs/models/o4-mini) | ✅ | ❌ |
| OpenAI | [GPT-4.1](https://platform.openai.com/docs/models/gpt-4.1) | ✅ | ✅ |
| OpenAI | [GPT-4.1-mini](https://platform.openai.com/docs/models/gpt-4o-mini) | ✅ | ✅ |
| OpenAI | [GPT-4.1-nano](https://platform.openai.com/docs/models/gpt-4.1-nano) | ✅ | ✅ |
| Anthropic | [Claude 3.5 Haiku](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ❌ |
| Anthropic | [Claude 3.7 Sonnet](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ✅ |
| Anthropic | [Claude Sonnet 4](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ | ✅ |
| Anthropic | [Claude Sonnet 4 w/ Thinking](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ | ✅ |
| Anthropic | [Claude Opus 4.1](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ | ✅ |
| Anthropic | [Claude Opus 4](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ | ✅ |
| Anthropic | [Claude Opus 4 w/ Thinking](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ | ✅ |
| Google | [Gemini 1.5 Pro](https://deepmind.google/technologies/gemini/pro/) | ✅ (beta) | ✅* |
| Google | [Gemini 2.0 Flash](https://deepmind.google/technologies/gemini/flash/) | ✅ | ✅* |
| Google | [Gemini 2.5 Pro](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-5-pro) | ✅ | ✅* |
| Google | [Gemini 2.5 Flash](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-5-flash) | ✅ | ✅* |

<Callout type="note">* While Gemini models support vision capabilities, Cody clients do not currently support image uploads to Gemini models.</Callout>

<Callout type="note">To use Claude 3 Sonnet models with Cody Enterprise, make sure you've upgraded your Sourcegraph instance to the latest version.</Callout>

<Callout type="note">Site admins can configure vision support using the [`chatVision` setting](/admin/config/site_config) in site configuration and by adding the `vision` capability to model configurations. See [Model Configuration](/cody/enterprise/model-configuration) for more details.</Callout>
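As an illustrative sketch only: the snippet below shows how the `chatVision` setting and a `vision` capability entry might look in site configuration. The exact field placement, model reference string, and schema are assumptions based on the settings named above; consult the [Model Configuration](/cody/enterprise/model-configuration) docs for the authoritative schema.

```json
{
  // Hypothetical sketch: enable image upload in Cody chat (exact key placement may differ)
  "cody.chatVision": true,
  "modelConfiguration": {
    "modelOverrides": [
      {
        // Example model reference; substitute a model your instance actually serves
        "modelRef": "anthropic::2024-10-22::claude-sonnet-4-latest",
        // Adding "vision" marks the model as accepting image input
        "capabilities": ["chat", "vision"]
      }
    ]
  }
}
```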

### Claude 3.7 and 4 Sonnet

Claude 3.7 and 4 Sonnet each have two variants: the base version, and the **extended thinking** version, which supports deep reasoning alongside fast, responsive edit workflows. Cody enables both and lets users pick between them in the model dropdown selector, so they can choose whether to use extended thinking depending on the task.
1 change: 1 addition & 0 deletions docs/cody/clients/feature-reference.mdx
@@ -15,6 +15,7 @@
| @-file | ✅ | ✅ | ✅ | ✅ | ❌ |
| @-symbol | ✅ | ❌ | ✅ | ✅ | ❌ |
| @-directories | ✅ | ✅ | ✅ | ✅ | ❌ |
| Image upload | ✅ | ✅ | ❌ | ✅ | ❌ |
| LLM Selection | ✅ | ✅ | ✅ | ✅ | ❌ |
| Admin LLM Selection | ✅ | ✅ | ✅ | ✅ | ❌ |
| Agentic Context Fetching | ✅ | ✅ | ✅ | ✅ | ✅ |