- The output format for `llm logs` has changed. Previously it was JSON - it's now a much more readable Markdown format suitable for pasting into other documents. #160
  - The new `llm logs --json` option can be used to get the old JSON format.
  - Pass `llm logs --conversation ID` or `--cid ID` to see the full logs for a specific conversation.
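  A minimal illustration of the three invocations, where `<ID>` stands in for a conversation ID from your logs:

  ```bash
  llm logs             # recent entries, rendered as Markdown
  llm logs --json      # the previous JSON format
  llm logs --cid <ID>  # everything logged for one conversation
  ```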
- You can now combine piped input and a prompt in a single command: `cat script.py | llm 'explain this code'`. This works even for models that do not support system prompts. #153
- Additional OpenAI-compatible models can now be configured with custom HTTP headers. This enables platforms such as openrouter.ai to be used with LLM, which can provide Claude access even without an Anthropic API key.
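  As a sketch of what such an entry might look like - assuming the `extra-openai-models.yaml` file used for other OpenAI-compatible model configuration, with an illustrative OpenRouter model name and headers; check the OpenAI-compatible models documentation for the exact schema:

  ```yaml
  - model_id: claude           # name to use with llm -m
    model_name: anthropic/claude-2
    api_base: "https://openrouter.ai/api/v1"
    api_key_name: openrouter
    headers:                   # custom HTTP headers sent with each request
      HTTP-Referer: "https://llm.datasette.io/"
      X-Title: LLM
  ```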
- Keys set in `keys.json` are now used in preference to environment variables - see the `llm keys` example below. #158
- The documentation now includes a plugin directory listing all available plugins for LLM. #173
- New related tools section in the documentation describing `ttok`, `strip-tags` and `symbex`. #111
- The `llm models`, `llm aliases` and `llm templates` commands now default to running the same command as `llm models list`, `llm aliases list` and `llm templates list` respectively. #167
- New `llm keys` (aka `llm keys list`) command for listing the names of all configured keys. #174
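  A short sketch of these commands together, including the new `keys.json` precedence - the key name `openai` and the environment variable here are the standard OpenAI ones:

  ```bash
  llm models           # now equivalent to: llm models list
  llm aliases          # now equivalent to: llm aliases list
  llm templates        # now equivalent to: llm templates list
  llm keys             # new: lists the names of all configured keys
  llm keys set openai  # stores a key in keys.json, which is now used
                       # in preference to OPENAI_API_KEY
  ```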
- Two new Python API functions, `llm.set_alias(alias, model_id)` and `llm.remove_alias(alias)`, can be used to configure aliases from within Python code, as shown in the example at the end of these notes. #154
- LLM is now compatible with both Pydantic 1 and Pydantic 2. This means you can install `llm` as a Python dependency in a project that depends on Pydantic 1 without running into dependency conflicts. Thanks, Chris Mungall. #147
- `llm.get_model(model_id)` is now documented as raising `llm.UnknownModelError` if the requested model does not exist. #155
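  A combined sketch of the new Python API pieces - the function names and the exception come from the notes above, while the alias and model ID `gpt-3.5-turbo` are just illustrative choices:

  ```python
  import llm

  # Register an alias from Python instead of the CLI
  llm.set_alias("turbo", "gpt-3.5-turbo")

  try:
      model = llm.get_model("turbo")
  except llm.UnknownModelError:
      # Documented to be raised when the model (or alias) does not exist
      model = None

  # Remove the alias again
  llm.remove_alias("turbo")
  ```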