Releases: simonw/llm

0.25a0

11 Apr 00:28
Pre-release
  • llm models --options now shows keys and environment variables for models that use API keys. Thanks, Steve Morin. #903
  • Added py.typed marker file so LLM can now be used as a dependency in projects that use mypy without a warning. #887
  • $ characters can now be used in templates by escaping them as $$ - see the example below. Thanks, @guspix. #904
  • LLM now uses pyproject.toml instead of setup.py. #908
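
For example, the $$ escape keeps a literal dollar sign in a saved template. A minimal sketch (the template name is made up):

llm 'Estimate a price for $input, formatted like $$25.00' --save price-estimate
llm -t price-estimate 'a second-hand bicycle'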

0.24.2

09 Apr 03:01
  • Fixed a bug on Windows with the new llm -t path/to/file.yaml feature. #901

0.24.1

08 Apr 20:41
  • Templates can now be specified as a path to a file on disk, using llm -t path/to/file.yaml - see the example below. This makes them consistent with how -f fragments are loaded. #897
  • llm logs backup /tmp/backup.db command for backing up your logs.db database. #879
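
For example (the file path and template contents here are hypothetical):

cat > /tmp/summarize.yaml << 'EOF'
prompt: 'Summarize this: $input'
EOF
cat notes.txt | llm -t /tmp/summarize.yaml
llm logs backup /tmp/backup.db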

0.24

07 Apr 15:40

Support for fragments to help assemble prompts for long context models. Improved templates that can now include attachments and fragments. New plugin hooks for providing custom loaders for both templates and fragments. See Long context support in LLM 0.24 using fragments and template plugins for more on this release.

The new llm-docs plugin demonstrates these new features. Install it like this:

llm install llm-docs

Now you can ask questions of the LLM documentation like this:

llm -f docs: 'How do I save a new template?'

The docs: prefix is registered by the plugin. The plugin fetches the LLM documentation for your installed version (from the docs-for-llms repository) and uses that as a prompt fragment to help answer your question.

Two more new plugins are llm-templates-github and llm-templates-fabric.

llm-templates-github lets you share and use templates on GitHub. You can run my Pelican riding a bicycle benchmark against a model like this:

llm install llm-templates-github
llm -t gh:simonw/pelican-svg -m o3-mini

This executes the pelican-svg.yaml template stored in my simonw/llm-templates repository, using a new repository naming convention.

To share your own templates, create a repository on GitHub under your user account called llm-templates and start saving .yaml files to it.

llm-templates-fabric provides a similar mechanism for loading templates from Daniel Miessler's fabric collection:

llm install llm-templates-fabric
curl https://simonwillison.net/2025/Apr/6/only-miffy/ | \
  llm -t f:extract_main_idea

Major new features:

  • New fragments feature. Fragments can be used to assemble long prompts from multiple existing pieces - URLs, file paths or previously used fragments. These are stored de-duplicated in the database, avoiding wasted space when the same long context piece is used more than once. Example usage: llm -f https://llm.datasette.io/robots.txt 'explain this file'. #617
  • The llm logs command now accepts -f fragment references too, and will show just the logged prompts that used those fragments - see the combined example after this list.
  • register_template_loaders() plugin hook allowing plugins to register new prefix:value custom template loaders. #809
  • register_fragment_loaders() plugin hook allowing plugins to register new prefix:value custom fragment loaders. #886
  • llm fragments family of commands for browsing fragments that have been previously logged to the database.
  • The new llm-openai plugin provides support for o1-pro (which is not supported by the OpenAI mechanism used by LLM core). Future OpenAI features will migrate to this plugin instead of LLM core itself.
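
A sketch of how these pieces fit together (the file names are illustrative, and llm fragments list is one of the browsing subcommands mentioned above):

llm -f https://llm.datasette.io/robots.txt -f README.md 'compare these two files'
llm fragments list
llm logs -f https://llm.datasette.io/robots.txt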

Improvements to templates:

  • llm -t $URL option can now take a URL to a YAML template. #856
  • Templates can now store default model options. #845
  • Executing a template that does not use the $input variable no longer blocks LLM waiting for input, so prompt templates can now be used to try different models using llm -t pelican-svg -m model_id. #835
  • llm templates command no longer crashes if one of the listed template files contains invalid YAML. #880
  • Attachments can now be stored in templates - see the sketch after this list. #826
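
Putting a few of these together, a template with default model options and a stored attachment might look like this - a sketch, with a placeholder attachment URL:

model: gpt-4o-mini
options:
  temperature: 0.5
prompt: 'Describe this image in one sentence'
attachments:
  - https://example.com/photo.jpg

Save that as describe.yaml and run it with llm -t ./describe.yaml.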

Other changes:

  • New llm models options family of commands for setting default options for particular models - demonstrated after this list. #829
  • llm logs list, llm schemas list and llm schemas show all now take a -d/--database option with an optional path to a SQLite database. They used to take -p/--path but that was inconsistent with other commands. -p/--path still works but is excluded from --help and will be removed in a future LLM release. #857
  • llm logs -e/--expand option for expanding fragments. #881
  • llm prompt -d path-to-sqlite.db option can now be used to write logs to a custom SQLite database. #858
  • llm similar -p/--plain option providing more human-readable output than the default JSON. #853
  • llm logs -s/--short now truncates to include the end of the prompt too. Thanks, Sukhbinder Singh. #759
  • Set the LLM_RAISE_ERRORS=1 environment variable to raise errors during prompts rather than suppressing them, which means you can run python -i -m llm 'prompt' and then drop into a debugger on errors with import pdb; pdb.pm(). #817
  • Improved --help output for llm embed-multi. #824
  • llm models -m X option which can be passed multiple times with model IDs to see the details of just those models. #825
  • OpenAI models now accept PDF attachments. #834
  • llm prompt -q gpt -q 4o option - pass -q searchterm one or more times to execute a prompt against the first model that matches all of those strings - useful if you can't remember the full model ID. #841
  • OpenAI compatible models configured using extra-openai-models.yaml now support supports_schema: true, vision: true and audio: true options. Thanks @adaitche and @giuli007. #819, #843
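
For example, setting a default option for one model and then running a prompt using -q model search terms (the option value is arbitrary):

llm models options set gpt-4o temperature 0.5
llm models options show gpt-4o
llm -q gpt -q 4o 'say hi in French'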

0.24a1

07 Apr 00:43
Pre-release
  • New Fragments feature. #617
  • register_fragment_loaders() plugin hook. #809

0.24a0

01 Mar 06:48
Pre-release
  • Alpha release with experimental register_template_loaders() plugin hook. #809

0.23

28 Feb 16:57

Support for schemas, for getting supported models to output JSON that matches a specified JSON schema. See also Structured data extraction from unstructured content using LLM schemas for background on this feature. #776

  • New llm prompt --schema '{JSON schema goes here}' option for specifying a schema that should be used for the output from the model. The schemas documentation has more details and a tutorial.
  • Schemas can also be defined using a concise schema specification, for example llm prompt --schema 'name, bio, age int' - demonstrated after this list. #790
  • Schemas can also be specified by passing a filename and through several other methods. #780
  • New llm schemas family of commands: llm schemas list, llm schemas show, and llm schemas dsl for debugging the new concise schema language. #781
  • Schemas can now be saved to templates using llm --schema X --save template-name or through modifying the template YAML. #778
  • The llm logs command now has new options for extracting data collected using schemas: --data, --data-key, --data-array, --data-ids. #782
  • New llm logs --id-gt X and --id-gte X options. #801
  • New llm models --schemas option for listing models that support schemas. #797
  • model.prompt(..., schema={...}) parameter for specifying a schema from Python. This accepts either a dictionary JSON schema definition or a Pydantic BaseModel subclass, see schemas in the Python API docs.
  • The default OpenAI plugin now enables schemas across all supported models. Run llm models --schemas for a list of these.
  • The llm-anthropic and llm-gemini plugins have been upgraded to add schema support for those models. Here's documentation on how to add schema support to a model plugin.
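
A quick tour of the concise schema specification, reusing the spec from above and then pulling the logged data back out:

llm --schema 'name, bio, age int' 'invent a cool data scientist'
llm schemas dsl 'name, bio, age int'
llm logs --data -c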

Other smaller changes:

  • GPT-4.5 preview is now a supported model: llm -m gpt-4.5 'a joke about a pelican and a wolf' #795
  • The prompt string is now optional when calling model.prompt() from the Python API, so model.prompt(attachments=llm.Attachment(url=url)) now works. #784
  • extra-openai-models.yaml now supports a reasoning: true option - see the sketch after this list. Thanks, Kasper Primdal Lauritzen. #766
  • LLM now depends on Pydantic v2 or higher. Pydantic v1 is no longer supported. #520
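
The reasoning: true option slots into an existing extra-openai-models.yaml entry, along these lines (the model IDs and URL are placeholders):

- model_id: my-reasoning-model
  model_name: some-reasoning-model
  api_base: https://proxy.example.com/v1
  reasoning: true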

0.23a0

27 Feb 01:09
Pre-release

Alpha release adding support for schemas, for getting supported models to output JSON that matches a specified JSON schema. #776

  • llm prompt --schema '{JSON schema goes here}' option for specifying a schema that should be used for the output from the model, see schemas in the CLI docs.
  • model.prompt(..., schema={...}) parameter for specifying a schema from Python. This accepts either a dictionary JSON schema definition or a Pydantic BaseModel subclass, see schemas in the Python API docs.
  • The default OpenAI plugin now supports schemas across all models.
  • Documentation on how to add schema support to a model plugin.
  • LLM now depends on Pydantic v2 or higher. Pydantic v1 is no longer supported. #520

0.22

17 Feb 04:37

See also LLM 0.22, the annotated release notes.

  • Plugins that provide models that use API keys can now subclass the new llm.KeyModel and llm.AsyncKeyModel classes. This results in the API key being passed as a new key parameter to their .execute() methods, and means that Python users can pass a key using model.prompt(..., key=) - see Passing an API key. Plugin developers should consult the new documentation on writing Models that accept API keys. #744
  • New OpenAI model: chatgpt-4o-latest. This model ID accesses the current model being used to power ChatGPT, which can change without warning. #752
  • New llm logs -s/--short flag, which returns a greatly shortened version of the matching log entries in YAML format with a truncated prompt and without including the response. #737
  • Both llm models and llm embed-models now take multiple -q search fragments. You can now search for all models matching "gemini" and "exp" using llm models -q gemini -q exp. #748
  • New llm embed-multi --prepend X option for prepending a string to each value before it is embedded - useful for models such as nomic-embed-text-v2-moe that require passages to start with a string like "search_document: " - see the example after this list. #745
  • The response.json() and response.usage() methods are now documented.
  • Fixed a bug where conversations that were loaded from the database could not be continued using asyncio prompts. #742
  • New plugin for macOS users: llm-mlx, which provides extremely high performance access to a wide range of local models using Apple's MLX framework.
  • The llm-claude-3 plugin has been renamed to llm-anthropic.
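
For example, the new --prepend option in action (the collection and file names are made up, and the exact embedding model ID depends on the plugin that provides it):

llm embed-multi documents docs.csv -m nomic-embed-text-v2-moe --prepend 'search_document: '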

0.21

31 Jan 20:36
  • New model: o3-mini. #728
  • The o3-mini and o1 models now support a reasoning_effort option which can be set to low, medium or high.
  • llm prompt and llm logs now have a --xl/--extract-last option for extracting the last fenced code block in the response - a complement to the existing -x/--extract option. See the example below. #717
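
For example, combining both new options from this release (a sketch):

llm -m o3-mini -o reasoning_effort high --xl \
  'Explain how to count lines in a file, then show the final shell command'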