- Support for OpenAI's new GPT-4o model: `llm -m gpt-4o 'say hi in Spanish'` #490
- The `gpt-4-turbo` alias is now a model ID, which indicates the latest version of OpenAI's GPT-4 Turbo text and image model. Your existing `logs.db` database may contain records under the previous model ID of `gpt-4-turbo-preview`. #493
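  For example, this should run the new model ID directly and (assuming the `-m/--model` filter on `llm logs`) list any older responses still recorded under the previous ID:
  ```bash
  # The gpt-4-turbo ID now points at the latest GPT-4 Turbo text and image model
  llm -m gpt-4-turbo 'say hi in Spanish'
  # Earlier records may still be stored under the old model ID
  llm logs -m gpt-4-turbo-preview
  ```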
- New `llm logs -r/--response` option for outputting just the last captured response, without wrapping it in Markdown and accompanying it with the prompt. #431
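  A quick sketch of the new option (the prompt here is just an example):
  ```bash
  llm 'Write a haiku about otters'
  # Print only the text of the most recent response, without Markdown wrapping or the prompt
  llm logs -r
  ```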
- Nine new {ref}`plugins <plugin-directory>` since version 0.13:
  - llm-claude-3 supporting Anthropic's Claude 3 family of models.
  - llm-command-r supporting Cohere's Command R and Command R Plus API models.
  - llm-reka supports the Reka family of models via their API.
  - llm-perplexity by Alexandru Geana supporting the Perplexity Labs API models, including `llama-3-sonar-large-32k-online` which can search for things online and `llama-3-70b-instruct`.
  - llm-groq by Moritz Angermann providing access to fast models hosted by Groq.
  - llm-fireworks supporting models hosted by Fireworks AI.
  - llm-together adds support for Together AI's extensive family of hosted openly licensed models.
  - llm-embed-onnx provides seven embedding models that can be executed using the ONNX model framework.
  - llm-cmd accepts a prompt for a shell command, runs that prompt and populates the result in your shell so you can review it, edit it and then hit `<enter>` to execute or `ctrl+c` to cancel, see this post for details.
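    A minimal sketch of trying it, assuming the plugin registers an `llm cmd` subcommand as described in that post (the prompt text is illustrative):
    ```bash
    # Plugins install into the same environment as LLM itself
    llm install llm-cmd
    # Generates a shell command and pre-fills it in your shell for review before running
    llm cmd undo the last git commit
    ```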