Skip to content

Commit 6570a05

Browse files
pandora-s-git authored and github-actions[bot] committed
chore(docs): auto-update llms.txt & llms-full.txt
1 parent 616a568 commit 6570a05

2 files changed: +101, -176 lines


static/llms-full.txt

Lines changed: 52 additions & 127 deletions
@@ -3117,7 +3117,7 @@ Now that we have our document library agent ready, we can search them on demand
 
 ```py
 response = client.beta.conversations.start(
-    agent_id=image_agent.id,
+    agent_id=library_agent.id,
     inputs="How does the vision encoder for pixtral 12b work"
 )
 ```
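
For context, the snippet above comes from the document-library agent walkthrough; a minimal end-to-end sketch might look like the following, assuming the `mistralai` Python client, an API key in the environment, and an agent id created earlier in that guide (the `outputs` attribute on the response is an assumption about the SDK's response shape):

```py
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.beta.conversations.start(
    agent_id="<library-agent-id>",  # placeholder: id of the library agent created earlier
    inputs="How does the vision encoder for pixtral 12b work",
)

# Assumption: the conversation response exposes its entries via an `outputs` list.
print(response.outputs[-1].content)
```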
@@ -11711,11 +11711,11 @@ Currently we have two reasoning models:
 - `magistral-medium-latest`: Our more powerful reasoning model balancing performance and cost.
 
 :::info
-Currently, `-latest` points to `-2507`, our most recent version of our reasoning models. If you were previously using `-2506`, a **migration** regarding the thinking chunks is required.
-- `-2507` **(new)**: Uses tokenized thinking chunks via control tokens, providing the thinking traces in different types of content chunks.
+Currently, `-latest` points to `-2509`, our most recent version of our reasoning models. If you were previously using `-2506`, a **migration** regarding the thinking chunks is required.
+- `-2507` & `-2509` **(new)**: Use tokenized thinking chunks via control tokens, providing the thinking traces in different types of content chunks.
 - `-2506` **(old)**: Used `<think>\n` and `\n</think>\n` tags as strings to encapsulate the thinking traces for input and output within the same content type.
 <Tabs groupId="version">
-  <TabItem value="2507" label="2507 (new)" default>
+  <TabItem value="2509" label="2507/2509 (new)" default>
 ```json
 [
   {
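
To illustrate the migration the note above describes, here is a small, SDK-free sketch that converts a `-2506`-style string (thinking wrapped in `<think>\n` ... `\n</think>\n` tags) into the chunked layout used by `-2507`/`-2509`; the chunk field names follow the JSON examples in this section, but the exact schema your client expects may differ:

```py
import re

def split_think_tags(content: str) -> list[dict]:
    """Turn a 2506-style string with <think>...</think> tags into
    2507/2509-style content chunks: a `thinking` chunk for the trace
    and `text` chunks for everything else."""
    pattern = re.compile(r"<think>\n(.*?)\n</think>\n", re.DOTALL)
    chunks, cursor = [], 0
    for match in pattern.finditer(content):
        before = content[cursor:match.start()]
        if before.strip():
            chunks.append({"type": "text", "text": before})
        chunks.append({
            "type": "thinking",
            "thinking": [{"type": "text", "text": match.group(1)}],
        })
        cursor = match.end()
    tail = content[cursor:]
    if tail.strip():
        chunks.append({"type": "text", "text": tail})
    return chunks

old_style = "<think>\nWorking through the problem on scratch paper...\n</think>\nThe answer is 42."
print(split_think_tags(old_style))
# -> [{'type': 'thinking', 'thinking': [{'type': 'text', 'text': 'Working through the problem on scratch paper...'}]},
#     {'type': 'text', 'text': 'The answer is 42.'}]
```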
@@ -11794,7 +11794,33 @@ To have the best performance out of our models, we recommend having the followin
 <summary><b>System Prompt</b></summary>
 
 <Tabs groupId="version">
-  <TabItem value="2507" label="2507 (new)" default>
+  <TabItem value="2509" label="2509 (new)" default>
+```json
+{
+    "role": "system",
+    "content": [
+        {
+            "type": "text",
+            "text": "# HOW YOU SHOULD THINK AND ANSWER\n\nFirst draft your thinking process (inner monologue) until you arrive at a response. Format your response using Markdown, and use LaTeX for any mathematical equations. Write both your thoughts and the response in the same language as the input.\n\nYour thinking process must follow the template below:"
+        },
+        {
+            "type": "thinking",
+            "thinking": [
+                {
+                    "type": "text",
+                    "text": "Your thoughts or/and draft, like working through an exercise on scratch paper. Be as casual and as long as you want until you are confident to generate the response to the user."
+                }
+            ]
+        },
+        {
+            "type": "text",
+            "text": "Here, provide a self-contained response."
+        }
+    ]
+}
+```
+  </TabItem>
+  <TabItem value="2507" label="2507">
 ```json
 {
     "role": "system",
@@ -11915,7 +11941,7 @@ curl --location "https://api.mistral.ai/v1/chat/completions" \
 </Tabs>
 
 <Tabs groupId="version">
-  <TabItem value="2507" label="2507 (new)" default>
+  <TabItem value="2509" label="2507/2509 (new)" default>
 The output of the model will include different chunks of content, but mostly a `thinking` type with the reasoning traces and a `text` type with the answer like so:
 ```json
 "content": [
@@ -13101,10 +13127,12 @@ Vision capabilities enable models to analyze images and provide insights based o
 For more specific use cases regarding document parsing and data extraction we recommend taking a look at our Document AI stack [here](../document_ai/document_ai_overview).
 
 ## Models with Vision Capabilities:
+- Mistral Medium 3.1 2508 (`mistral-medium-latest`)
+- Mistral Small 3.2 2506 (`mistral-small-latest`)
+- Magistral Small 1.2 2509 (`magistral-small-latest`)
+- Magistral Medium 1.2 2509 (`magistral-medium-latest`)
 - Pixtral 12B (`pixtral-12b-latest`)
 - Pixtral Large 2411 (`pixtral-large-latest`)
-- Mistral Medium 2505 (`mistral-medium-latest`)
-- Mistral Small 2503 (`mistral-small-latest`)
 
 ## Passing an Image URL
 If the image is hosted online, you can simply provide the URL of the image in the request. This method is straightforward and does not require any encoding.
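
The hunks below only swap the example image URL; for reference, a complete request along these lines with the Python client might look like this (a sketch assuming the `mistralai` v1 SDK and the `pixtral-12b-latest` alias listed above):

```py
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# Pass the hosted image by URL; no base64 encoding is required.
response = client.chat.complete(
    model="pixtral-12b-latest",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": "https://docs.mistral.ai/img/eiffel-tower-paris.jpg",
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```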
@@ -13136,7 +13164,7 @@ messages = [
             },
             {
                 "type": "image_url",
-                "image_url": "https://tripfixers.com/wp-content/uploads/2019/11/eiffel-tower-with-snow.jpeg"
+                "image_url": "https://docs.mistral.ai/img/eiffel-tower-paris.jpg"
             }
         ]
     }
@@ -13171,7 +13199,7 @@ const chatResponse = await client.chat.complete({
         { type: "text", text: "What's in this image?" },
         {
           type: "image_url",
-          imageUrl: "https://tripfixers.com/wp-content/uploads/2019/11/eiffel-tower-with-snow.jpeg",
+          imageUrl: "https://docs.mistral.ai/img/eiffel-tower-paris.jpg",
         },
       ],
     },
@@ -13200,7 +13228,7 @@ curl https://api.mistral.ai/v1/chat/completions \
           },
           {
             "type": "image_url",
-            "image_url": "https://tripfixers.com/wp-content/uploads/2019/11/eiffel-tower-with-snow.jpeg"
+            "image_url": "https://docs.mistral.ai/img/eiffel-tower-paris.jpg"
           }
         ]
       }
@@ -13394,51 +13422,6 @@ The chart is a bar chart titled 'France's Social Divide,' comparing socio-econom
 
 </details>
 
-<details>
-<summary><b>Compare images</b></summary>
-
-![](https://tripfixers.com/wp-content/uploads/2019/11/eiffel-tower-with-snow.jpeg)
-
-![](https://assets.visitorscoverage.com/production/wp-content/uploads/2024/04/AdobeStock_626542468-min-1024x683.jpeg)
-
-```bash
-curl https://api.mistral.ai/v1/chat/completions \
-  -H "Content-Type: application/json" \
-  -H "Authorization: Bearer $MISTRAL_API_KEY" \
-  -d '{
-    "model": "pixtral-12b-2409",
-    "messages": [
-      {
-        "role": "user",
-        "content": [
-          {
-            "type": "text",
-            "text": "what are the differences between two images?"
-          },
-          {
-            "type": "image_url",
-            "image_url": "https://tripfixers.com/wp-content/uploads/2019/11/eiffel-tower-with-snow.jpeg"
-          },
-          {
-            "type": "image_url",
-            "image_url": {
-              "url": "https://assets.visitorscoverage.com/production/wp-content/uploads/2024/04/AdobeStock_626542468-min-1024x683.jpeg"
-            }
-          }
-        ]
-      }
-    ],
-    "max_tokens": 300
-  }'
-```
-
-Model output:
-```
-The first image features the Eiffel Tower surrounded by snow-covered trees and pathways, with a clear view of the tower's intricate iron lattice structure. The second image shows the Eiffel Tower in the background of a large, outdoor stadium filled with spectators, with a red tennis court in the center. The most notable differences are the setting - one is a winter scene with snow, while the other is a summer scene with a crowd at a sporting event. The mood of the first image is serene and quiet, whereas the second image conveys a lively and energetic atmosphere. These differences highlight the versatility of the Eiffel Tower as a landmark that can be enjoyed in various contexts and seasons.
-```
-
-</details>
-
 <details>
 <summary><b>Transcribe receipts</b></summary>
 
@@ -13514,68 +13497,6 @@ Model output:
 
 </details>
 
-<details>
-<summary><b>OCR with structured output</b></summary>
-
-![](https://i.imghippo.com/files/kgXi81726851246.jpg)
-
-```bash
-curl https://api.mistral.ai/v1/chat/completions \
-  -H "Content-Type: application/json" \
-  -H "Authorization: Bearer $MISTRAL_API_KEY" \
-  -d '{
-    "model": "pixtral-12b-2409",
-    "messages": [
-      {
-        "role": "system",
-        "content": [
-          {"type": "text",
-           "text" : "Extract the text elements described by the user from the picture, and return the result formatted as a json in the following format : {name_of_element : [value]}"
-          }
-        ]
-      },
-      {
-        "role": "user",
-        "content": [
-          {
-            "type": "text",
-            "text": "From this restaurant bill, extract the bill number, item names and associated prices, and total price and return it as a string in a Json object"
-          },
-          {
-            "type": "image_url",
-            "image_url": "https://i.imghippo.com/files/kgXi81726851246.jpg"
-          }
-        ]
-      }
-    ],
-    "response_format":
-      {
-        "type": "json_object"
-      }
-  }'
-
-```
-
-Model output:
-```json
-{'bill_number': '566548',
- 'items': [{'item_name': 'BURGER - MED RARE', 'price': 10},
-  {'item_name': 'WH/SUB POUTINE', 'price': 2},
-  {'item_name': 'BURGER - MED RARE', 'price': 10},
-  {'item_name': 'WH/SUB BSL - MUSH', 'price': 4},
-  {'item_name': 'BURGER - MED WELL', 'price': 10},
-  {'item_name': 'WH BREAD/NO ONION', 'price': 2},
-  {'item_name': 'SUB POUTINE - MUSH', 'price': 2},
-  {'item_name': 'CHK PESTO/BR', 'price': 9},
-  {'item_name': 'SUB POUTINE', 'price': 2},
-  {'item_name': 'SPEC OMELET/BR', 'price': 9},
-  {'item_name': 'SUB POUTINE', 'price': 2},
-  {'item_name': 'BSL', 'price': 8}],
- 'total_price': 68}
-```
-
-</details>
-
 ## FAQ
 
 - **What is the price per image?**
@@ -13588,6 +13509,8 @@ Model output:
 
 | Model | Max Resolution | ≈ Formula | ≈ N Max Tokens |
 | - | - | - | - |
+| Magistral Medium 1.2 | 1540x1540 | `≈ (ResolutionX * ResolutionY) / 784` | ≈ 3025 |
+| Magistral Small 1.2 | 1540x1540 | `≈ (ResolutionX * ResolutionY) / 784` | ≈ 3025 |
 | Mistral Small 3.2 | 1540x1540 | `≈ (ResolutionX * ResolutionY) / 784` | ≈ 3025 |
 | Mistral Medium 3 | 1540x1540 | `≈ (ResolutionX * ResolutionY) / 784` | ≈ 3025 |
 | Mistral Small 3.1 | 1540x1540 | `≈ (ResolutionX * ResolutionY) / 784` | ≈ 3025 |
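
All of these rows share the same approximation; as a worked example, the formula can be evaluated like this (the downscaling step for images larger than the maximum resolution is an assumption about how oversized inputs are handled):

```py
def approx_image_tokens(width: int, height: int, max_side: int = 1540) -> int:
    """Approximate image token usage as (ResolutionX * ResolutionY) / 784,
    after scaling the image down so neither side exceeds max_side
    (assumed behaviour for oversized inputs)."""
    scale = min(1.0, max_side / max(width, height))
    return round((int(width * scale) * int(height * scale)) / 784)

print(approx_image_tokens(1540, 1540))  # 3025
print(approx_image_tokens(1024, 768))   # 1003
```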
@@ -14601,7 +14524,7 @@ We offer two types of rate limits:
 
 Key points to note:
 
-- Rate limits are set at the workspace level.
+- Rate limits are set at the organization level.
 - Limits are defined by usage tier, where each tier is associated with a different set of rate limits.
 - In case you need to raise your usage limits, please feel free to contact us by utilizing the support button, providing details about your specific use case.
 
@@ -16214,17 +16137,16 @@ Mistral provides two types of models: open models and premier models.
 | Model | Weight availability|Available via API| Description | Max Tokens| API Endpoints|Version|
 |--------------------|:--------------------:|:--------------------:|:--------------------:|:--------------------:|:--------------------:|:--------------------:|
 | Mistral Medium 3.1 | | :heavy_check_mark: | Our frontier-class multimodal model released August 2025. Improving tone and performance. Read more about Medium 3 in our [blog post](https://mistral.ai/news/mistral-medium-3/) | 128k | `mistral-medium-2508` | 25.08|
-| Magistral Medium 1.1 | | :heavy_check_mark: | Our frontier-class reasoning model released July 2025. | 40k | `magistral-medium-2507` | 25.07|
+| Magistral Medium 1.2 | | :heavy_check_mark: | Our frontier-class reasoning model update released September 2025 with vision support. | 128k | `magistral-medium-2509` | 25.09|
 | Codestral 2508 | | :heavy_check_mark: | Our cutting-edge language model for coding released end of July 2025, Codestral specializes in low-latency, high-frequency tasks such as fill-in-the-middle (FIM), code correction and test generation. Learn more in our [blog post](https://mistral.ai/news/codestral-25-08/) | 256k | `codestral-2508` | 25.08|
 | Voxtral Mini Transcribe | | :heavy_check_mark: | An efficient audio input model, fine-tuned and optimized for transcription purposes only. | | `voxtral-mini-2507` via `audio/transcriptions` | 25.07|
 | Devstral Medium | | :heavy_check_mark: | An enterprise-grade text model that excels at using tools to explore codebases, editing multiple files and powering software engineering agents. Learn more in our [blog post](https://mistral.ai/news/devstral-2507) | 128k | `devstral-medium-2507` | 25.07|
 | Mistral OCR 2505 | | :heavy_check_mark: | Our OCR service powering our Document AI stack that enables our users to extract interleaved text and images | | `mistral-ocr-2505` | 25.05|
-| Magistral Medium 1 | | :heavy_check_mark: | Our first frontier-class reasoning model released June 2025. Learn more in our [blog post](https://mistral.ai/news/magistral/) | 40k | `magistral-medium-2506` | 25.06|
 | Ministral 3B | | :heavy_check_mark: | World’s best edge model. Learn more in our [blog post](https://mistral.ai/news/ministraux/) | 128k | `ministral-3b-2410` | 24.10|
 | Ministral 8B | :heavy_check_mark: <br/> [Mistral Research License](https://mistral.ai/licenses/MRL-0.1.md)| :heavy_check_mark: |Powerful edge model with extremely high performance/price ratio. Learn more in our [blog post](https://mistral.ai/news/ministraux/) | 128k | `ministral-8b-2410` | 24.10|
 | Mistral Medium 3 | | :heavy_check_mark: | Our frontier-class multimodal model released May 2025. Learn more in our [blog post](https://mistral.ai/news/mistral-medium-3/) | 128k | `mistral-medium-2505` | 25.05|
-| Codestral 2501 | | :heavy_check_mark: | Our cutting-edge language model for coding with the second version released January 2025, Codestral specializes in low-latency, high-frequency tasks such as fill-in-the-middle (FIM), code correction and test generation. Learn more in our [blog post](https://mistral.ai/news/codestral-2501/) | 256k | `codestral-2501` | 25.01|
 | Mistral Large 2.1 |:heavy_check_mark: <br/> [Mistral Research License](https://mistral.ai/licenses/MRL-0.1.md)| :heavy_check_mark: | Our top-tier large model for high-complexity tasks with the latest version released November 2024. Learn more in our [blog post](https://mistral.ai/news/pixtral-large/) | 128k | `mistral-large-2411` | 24.11|
+| Codestral 2501 | | :heavy_check_mark: | Our cutting-edge language model for coding released in January 2025, Codestral specializes in low-latency, high-frequency tasks such as fill-in-the-middle (FIM), code correction and test generation. Learn more in our [blog post](https://mistral.ai/news/codestral-2501) | 256k | `codestral-2501` | 25.01|
 | Pixtral Large |:heavy_check_mark: <br/> [Mistral Research License](https://mistral.ai/licenses/MRL-0.1.md)| :heavy_check_mark: | Our first frontier-class multimodal model released November 2024. Learn more in our [blog post](https://mistral.ai/news/pixtral-large/) | 128k | `pixtral-large-2411` | 24.11|
 | Mistral Small 2| :heavy_check_mark: <br/> [Mistral Research License](https://mistral.ai/licenses/MRL-0.1.md) | :heavy_check_mark: | Our updated small version, released September 2024. Learn more in our [blog post](https://mistral.ai/news/september-24-release) | 32k | `mistral-small-2407` | 24.07|
 | Mistral Embed | | :heavy_check_mark: | Our state-of-the-art semantic model for extracting representations of text extracts | 8k | `mistral-embed` | 23.12|
@@ -16235,15 +16157,13 @@ Mistral provides two types of models: open models and premier models.
 
 | Model | Weight availability|Available via API| Description | Max Tokens| API Endpoints|Version|
 |--------------------|:--------------------:|:--------------------:|:--------------------:|:--------------------:|:--------------------:|:--------------------:|
-| Magistral Small 1.1 | :heavy_check_mark: <br/> Apache2 | :heavy_check_mark: | Our small reasoning model released July 2025. | 40k | `magistral-small-2507` | 25.07|
+| Magistral Small 1.2 | :heavy_check_mark: <br/> Apache2 | :heavy_check_mark: | Our small reasoning model released September 2025 with vision support. | 128k | `magistral-small-2509` | 25.09|
 | Voxtral Small | :heavy_check_mark: <br/> Apache2 | :heavy_check_mark: | Our first model with audio input capabilities for instruct use cases. | 32k | `voxtral-small-2507` | 25.07|
 | Voxtral Mini | :heavy_check_mark: <br/> Apache2 | :heavy_check_mark: | A mini version of our first audio input model. | 32k | `voxtral-mini-2507` | 25.07|
 | Mistral Small 3.2 | :heavy_check_mark: <br/> Apache2 | :heavy_check_mark: | An update to our previous small model, released June 2025. | 128k | `mistral-small-2506` | 25.06|
-| Magistral Small 1 | :heavy_check_mark: <br/> Apache2 | :heavy_check_mark: | Our first small reasoning model released June 2025. Learn more in our [blog post](https://mistral.ai/news/magistral/) | 40k | `magistral-small-2506` | 25.06|
 | Devstral Small 1.1 | :heavy_check_mark: <br/> Apache2 | :heavy_check_mark: | An update to our open source model that excels at using tools to explore codebases, editing multiple files and powering software engineering agents. Learn more in our [blog post](https://mistral.ai/news/devstral-2507) | 128k | `devstral-small-2507` | 25.07|
 | Mistral Small 3.1 | :heavy_check_mark: <br/> Apache2 | :heavy_check_mark: | A new leader in the small models category with image understanding capabilities, released March 2025. Learn more in our [blog post](https://mistral.ai/news/mistral-small-3-1/) | 128k | `mistral-small-2503` | 25.03|
 | Mistral Small 3| :heavy_check_mark: <br/> Apache2 | :heavy_check_mark: | A new leader in the small models category, released January 2025. Learn more in our [blog post](https://mistral.ai/news/mistral-small-3) | 32k | `mistral-small-2501` | 25.01|
-| Devstral Small 1| :heavy_check_mark: <br/> Apache2 | :heavy_check_mark: | A 24B open source text model that excels at using tools to explore codebases, editing multiple files and powering software engineering agents. Learn more in our [blog post](https://mistral.ai/news/devstral/) | 128k | `devstral-small-2505` | 25.05|
 | Pixtral 12B | :heavy_check_mark: <br/> Apache2 | :heavy_check_mark: | A 12B model with image understanding capabilities in addition to text. Learn more in our [blog post](https://mistral.ai/news/pixtral-12b/)| 128k | `pixtral-12b-2409` | 24.09|
 | Mistral Nemo 12B | :heavy_check_mark: <br/> Apache2 | :heavy_check_mark: | Our best multilingual open source model released July 2024. Learn more in our [blog post](https://mistral.ai/news/mistral-nemo/) | 128k | `open-mistral-nemo`| 24.07|
 
@@ -16255,8 +16175,8 @@ it is recommended to use the dated versions of the Mistral AI API.
 Additionally, be prepared for the deprecation of certain endpoints in the coming months.
 
 Here are the details of the available versions:
-- `magistral-medium-latest`: currently points to `magistral-medium-2507`.
-- `magistral-small-latest`: currently points to `magistral-small-2507`.
+- `magistral-medium-latest`: currently points to `magistral-medium-2509`.
+- `magistral-small-latest`: currently points to `magistral-small-2509`.
 - `mistral-medium-latest`: currently points to `mistral-medium-2508`.
 - `mistral-large-latest`: currently points to `mistral-medium-2508`, previously `mistral-large-2411`.
 - `pixtral-large-latest`: currently points to `pixtral-large-2411`.
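
Because these aliases move between dated versions over time, it can help to check what a given API key currently sees via the models endpoint; a minimal sketch with the `mistralai` Python client, assuming the usual list shape with a `data` field:

```py
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# List the model ids (dated versions and aliases) available to this key.
models = client.models.list()
for model in models.data:
    print(model.id)
```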
@@ -16300,6 +16220,11 @@ To prepare for model retirements and version upgrades, we recommend that custome
 | Mistral Saba 2502 | | `mistral-saba-2502` | 25.02| 2025/06/10|2025/09/30| `mistral-small-latest`|
 | Mathstral 7B | :heavy_check_mark: <br/> Apache2 | | v0.1| || `magistral-small-latest`|
 | Codestral Mamba | :heavy_check_mark: <br/> Apache2 |`open-codestral-mamba` | v0.1|2525/06/06 |2525/06/06| `codestral-latest`|
+| Devstral Small 1.0 | :heavy_check_mark: <br/> Apache2 | `devstral-small-2505` | 25.05|2025/10/31|2025/11/30| `devstral-small-latest`|
+| Magistral Small 1.0 | :heavy_check_mark: <br/> Apache2 | `magistral-small-2506` | 25.06|2025/10/31|2025/11/30| `magistral-small-latest`|
+| Magistral Medium 1.0 | | `magistral-medium-2506` | 25.06|2025/10/31|2025/11/30| `magistral-medium-latest`|
+| Magistral Small 1.1 | :heavy_check_mark: <br/> Apache2 | `magistral-small-2507` | 25.07|2025/10/31|2025/11/30| `magistral-small-latest`|
+| Magistral Medium 1.1 | | `magistral-medium-2507` | 25.07|2025/10/31|2025/11/30| `magistral-medium-latest`|
 
 
 [Model weights]
