- New model, `o1`. This model does not yet support streaming. #676
- `o1-preview` and `o1-mini` models now support streaming.
- New models, `gpt-4o-audio-preview` and `gpt-4o-mini-audio-preview`. #677
- New `llm prompt -x/--extract` option, which returns just the content of the first fenced code block in the response. Try `llm prompt -x 'Python function to reverse a string'`. #681 (see the sketch after this list)
  - Creating a template using `llm ... --save x` now supports the `-x/--extract` option, which is saved to the template. YAML templates can set this option using `extract: true`.
  - New `llm logs -x/--extract` option extracts the first fenced code block from matching logged responses.
- New `llm models -q 'search'` option returning models that case-insensitively match the search query. #700 (example below)
- Installation documentation now also includes `uv`. Thanks, Ariel Marcus. #690 and #702
- The `llm models` command now shows the current default model at the bottom of the listing. Thanks, Amjith Ramanujam. #688
- Plugin directory now includes `llm-venice`, `llm-bedrock`, `llm-deepseek` and `llm-cmd-comp`.
- Fixed a bug where some dependency version combinations could cause a `Client.__init__() got an unexpected keyword argument 'proxies'` error. #709
- OpenAI embedding models are now available using their full names of `text-embedding-ada-002`, `text-embedding-3-small` and `text-embedding-3-large`. The previous names are still supported as aliases. Thanks, web-sst. #654
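
A rough sketch of how the new extract options fit together, combining only the options named above; the prompt text and the template name `reverse-py` are illustrative, not part of the release:

```sh
# Return only the first fenced code block from the response
llm prompt -x 'Python function to reverse a string'

# Save a template with -x/--extract baked in
# (the template name "reverse-py" is just an example;
#  YAML templates can set the same behaviour with extract: true)
llm 'Python function to reverse a string' --save reverse-py -x

# Later, pull the first fenced code block out of matching logged responses
llm logs -x
```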
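And a quick sketch of the model-listing additions; the query string `audio` is only an example:

```sh
# Case-insensitive search across available models
llm models -q 'audio'

# The full listing now ends with the current default model
llm models
```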