diff --git a/docs/cody/capabilities/chat.mdx b/docs/cody/capabilities/chat.mdx
index c07d6ce65..0e3c60d3c 100644
--- a/docs/cody/capabilities/chat.mdx
+++ b/docs/cody/capabilities/chat.mdx
@@ -41,6 +41,42 @@ When you have both a repository and files @-mentioned, Cody will search the repo
You can add new custom context by adding `@-mention` context chips to the chat. At any point, you can `@-mention` a repository, file, line range, or symbol to ask questions about your codebase. Cody will use this new context to generate contextually relevant code.
+## Image upload
+
+Cody supports uploading images to chat when using models with vision capabilities. This allows you to get help with visual content like screenshots, diagrams, UI mockups, or error messages.
+
+### Models with vision support
+
+To use image upload, you need to select a model that supports vision capabilities. Models with vision support include:
+
+- Claude 3.7 Sonnet
+- Claude Sonnet 4
+- Claude Opus 4 and 4.1
+- GPT-4o and GPT-4o-mini
+
+While Gemini models support vision capabilities, Cody clients do not currently support image uploads to Gemini models.
+
+See the full list of [supported models](/cody/capabilities/supported-models) and their capabilities.
+
+### Using image upload
+
+You can upload images to Cody chat by:
+
+- Dragging and dropping an image file into the chat input (in VS Code, you may need to hold the Shift key while dragging)
+- Using the attach button to select an image from your file system
+- Pasting an image from your clipboard
+
+Once uploaded, you can ask Cody questions about the image, such as:
+
+- "What's wrong with this error message?"
+- "Explain what this architecture diagram shows"
+- "Help me recreate this UI component"
+- "What does this code screenshot do?"
+
+### Availability
+
+Image upload support varies by client. Check the [feature parity reference](/cody/clients/feature-reference#chat) to see which clients support image upload.
+
## LLM selection
Cody allows you to select the LLM you want to use for your chat, letting you optimize for speed or accuracy. Enterprise users with the new [model configuration](/cody/clients/model-configuration) can use the LLM selection dropdown to choose a chat model.
diff --git a/docs/cody/capabilities/supported-models.mdx b/docs/cody/capabilities/supported-models.mdx
index 043584f9d..d4a7ede4e 100644
--- a/docs/cody/capabilities/supported-models.mdx
+++ b/docs/cody/capabilities/supported-models.mdx
@@ -6,36 +6,40 @@ Cody supports a variety of cutting-edge large language models for use in chat an
Starting from v5.6, newer versions of Sourcegraph Enterprise make it even easier to add support for new models and providers. See [Model Configuration](/cody/enterprise/model-configuration) for more information.
-| **Provider** | **Model** | **Status** |
-| :----------- | :-------------------------------------------------------------------------------------------------------------------------------------------- | :--------------- |
-| OpenAI | [GPT-5](https://platform.openai.com/docs/models/gpt-5) | ✅ |
-| OpenAI | [GPT-5-mini](https://platform.openai.com/docs/models/gpt-5-mini) | ✅ |
-| OpenAI | [GPT-5-nano](https://platform.openai.com/docs/models/gpt-5-nano) | ✅ |
-| OpenAI | [GPT-4-Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo#:~:text=TRAINING%20DATA-,gpt%2D4%2D0125%2Dpreview,-New%20GPT%2D4) | ✅ |
-| OpenAI | [GPT-4o](https://platform.openai.com/docs/models#gpt-4o) | ✅ |
-| OpenAI | [GPT-4o-mini](https://platform.openai.com/docs/models#gpt-4o-mini) | ✅ |
-| OpenAI | [o3-mini-medium](https://openai.com/index/openai-o3-mini/) | ✅ (experimental) |
-| OpenAI | [o3-mini-high](https://openai.com/index/openai-o3-mini/) | ✅ (experimental) |
-| OpenAI | [o3](https://platform.openai.com/docs/models#o3) | ✅ |
-| OpenAI | [o4-mini](https://platform.openai.com/docs/models/o4-mini) | ✅ |
-| OpenAI | [GPT-4.1](https://platform.openai.com/docs/models/gpt-4.1) | ✅ |
-| OpenAI | [GPT-4.1-mini](https://platform.openai.com/docs/models/gpt-4o-mini) | ✅ |
-| OpenAI | [GPT-4.1-nano](https://platform.openai.com/docs/models/gpt-4.1-nano) | ✅ |
-| Anthropic | [Claude 3.5 Haiku](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ |
-| Anthropic | [Claude 3.7 Sonnet](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ |
-| Anthropic | [Claude Sonnet 4](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ |
-| Anthropic | [Claude Sonnet 4 w/ Thinking](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ |
-| Anthropic | [Claude Opus 4.1](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ |
-| Anthropic | [Claude Opus 4](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ |
-| Anthropic | [Claude Opus 4 w/ Thinking](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ |
-| Google | [Gemini 1.5 Pro](https://deepmind.google/technologies/gemini/pro/) | ✅ (beta) |
-| Google | [Gemini 2.0 Flash](https://deepmind.google/technologies/gemini/flash/) | ✅ |
-| Google | [Gemini 2.0 Flash](https://deepmind.google/technologies/gemini/flash/) | ✅ |
-| Google | [Gemini 2.5 Pro](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-5-pro) | ✅ |
-| Google | [Gemini 2.5 Flash](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-5-flash) | ✅ |
+| **Provider** | **Model** | **Status** | **Vision Support** |
+| :----------- | :-------------------------------------------------------------------------------------------------------------------------------------------- | :--------------- | :----------------- |
+| OpenAI | [GPT-5](https://platform.openai.com/docs/models/gpt-5) | ✅ | ✅ |
+| OpenAI | [GPT-5-mini](https://platform.openai.com/docs/models/gpt-5-mini) | ✅ | ✅ |
+| OpenAI | [GPT-5-nano](https://platform.openai.com/docs/models/gpt-5-nano) | ✅ | ✅ |
+| OpenAI | [GPT-4-Turbo](https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo#:~:text=TRAINING%20DATA-,gpt%2D4%2D0125%2Dpreview,-New%20GPT%2D4) | ✅ | ❌ |
+| OpenAI | [GPT-4o](https://platform.openai.com/docs/models#gpt-4o) | ✅ | ✅ |
+| OpenAI | [GPT-4o-mini](https://platform.openai.com/docs/models#gpt-4o-mini) | ✅ | ✅ |
+| OpenAI | [o3-mini-medium](https://openai.com/index/openai-o3-mini/) | ✅ (experimental) | ❌ |
+| OpenAI | [o3-mini-high](https://openai.com/index/openai-o3-mini/) | ✅ (experimental) | ❌ |
+| OpenAI | [o3](https://platform.openai.com/docs/models#o3) | ✅ | ❌ |
+| OpenAI | [o4-mini](https://platform.openai.com/docs/models/o4-mini) | ✅ | ❌ |
+| OpenAI | [GPT-4.1](https://platform.openai.com/docs/models/gpt-4.1) | ✅ | ✅ |
+| OpenAI       | [GPT-4.1-mini](https://platform.openai.com/docs/models/gpt-4.1-mini)                                                                           | ✅               | ✅                 |
+| OpenAI | [GPT-4.1-nano](https://platform.openai.com/docs/models/gpt-4.1-nano) | ✅ | ✅ |
+| Anthropic | [Claude 3.5 Haiku](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ❌ |
+| Anthropic | [Claude 3.7 Sonnet](https://docs.anthropic.com/claude/docs/models-overview#model-comparison) | ✅ | ✅ |
+| Anthropic | [Claude Sonnet 4](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ | ✅ |
+| Anthropic | [Claude Sonnet 4 w/ Thinking](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ | ✅ |
+| Anthropic | [Claude Opus 4.1](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ | ✅ |
+| Anthropic | [Claude Opus 4](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ | ✅ |
+| Anthropic | [Claude Opus 4 w/ Thinking](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ | ✅ |
+| Google | [Gemini 1.5 Pro](https://deepmind.google/technologies/gemini/pro/) | ✅ (beta) | ✅* |
+| Google | [Gemini 2.0 Flash](https://deepmind.google/technologies/gemini/flash/) | ✅ | ✅* |
+| Google | [Gemini 2.5 Pro](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-5-pro) | ✅ | ✅* |
+| Google | [Gemini 2.5 Flash](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-5-flash) | ✅ | ✅* |
+
+\* While Gemini models support vision capabilities, Cody clients do not currently support image uploads to Gemini models.
+
To use Claude 3 Sonnet models with Cody Enterprise, make sure you've upgraded your Sourcegraph instance to the latest version.
+
+Site admins can enable vision support with the [`chatVision` setting](/admin/config/site_config) in site configuration and by adding the `vision` capability to model configurations. See [Model Configuration](/cody/enterprise/model-configuration) for more details.
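+
+For illustration, enabling vision might look like the following site configuration sketch. The `chatVision` setting and the `vision` capability are described above; the `modelRef` value is a hypothetical example, and the exact schema is defined in [Model Configuration](/cody/enterprise/model-configuration).
+
+```json
+{
+  // Hypothetical sketch; consult the Model Configuration docs for the exact schema.
+  "chatVision": true,
+  "modelConfiguration": {
+    "modelOverrides": [
+      {
+        // Example modelRef; use the identifier for the model deployed on your instance.
+        "modelRef": "anthropic::2024-10-22::claude-3-7-sonnet",
+        "capabilities": ["chat", "vision"]
+      }
+    ]
+  }
+}
+```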
+
### Claude 3.7 and 4 Sonnet
Claude 3.7 and 4 Sonnet have two variants: the base version and the **extended thinking** version, which supports deep reasoning and fast, responsive edit workflows. Cody supports both and lets users pick a variant from the model dropdown selector, so they can decide whether to use extended thinking depending on their task.
diff --git a/docs/cody/clients/feature-reference.mdx b/docs/cody/clients/feature-reference.mdx
index a513b66c7..09a7be8c6 100644
--- a/docs/cody/clients/feature-reference.mdx
+++ b/docs/cody/clients/feature-reference.mdx
@@ -15,6 +15,7 @@
| @-file | ✅ | ✅ | ✅ | ✅ | ❌ |
| @-symbol | ✅ | ❌ | ✅ | ✅ | ❌ |
| @-directories | ✅ | ✅ | ✅ | ✅ | ❌ |
+| Image upload | ✅ | ✅ | ❌ | ✅ | ❌ |
| LLM Selection | ✅ | ✅ | ✅ | ✅ | ❌ |
| Admin LLM Selection | ✅ | ✅ | ✅ | ✅ | ❌ |
| Agentic Context Fetching | ✅ | ✅ | ✅ | ✅ | ✅ |