This release includes some backwards-incompatible changes:
- The `-4` option for GPT-4 is now `-m 4`.
- The `--code` option has been removed.
- The `-s` option has been removed as streaming is now the default. Use `--no-stream` to opt out of streaming; see the examples below.
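To put those changes side by side, here is a sketch of old and new invocations (the prompts are illustrative):

```bash
llm -4 "Tell me a joke"          # before: -4 selected GPT-4
llm -m 4 "Tell me a joke"        # after: use -m 4 instead
llm -s "Tell me a joke"          # before: -s opted in to streaming
llm "Tell me a joke"             # after: streaming is now the default
llm --no-stream "Tell me a joke" # after: opt out of streaming
```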
Prompt templates
Prompt templates are a new feature that lets you save prompts as templates and re-use them with different variables.
Templates can be created using the `llm templates edit` command:

```bash
llm templates edit summarize
```

Templates are YAML - the following template defines summarization using a system prompt:

```yaml
system: Summarize this text
```

The template can then be executed like this:
```bash
cat myfile.txt | llm -t summarize
```

Templates can include both system prompts and regular prompts, and can indicate the model they should use. They can reference variables such as `$input` for content piped to the tool, or other variables that are passed using the new `-p/--param` option.
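As an illustration, here is a sketch of a template combining a system prompt, a regular prompt that references `$input`, and a model. Only the `system:` key appears verbatim in these notes - the `prompt:` and `model:` key names are assumptions based on the description above, so check the templates documentation for the exact schema:

```bash
# Open a new template called "recipe" in your editor:
llm templates edit recipe

# ...then save YAML along these lines (key names other than "system"
# are assumptions based on the description above):
#   system: You are an imaginative chef
#   prompt: 'Suggest a recipe using: $input'
#   model: gpt-4

# $input is filled with whatever is piped to the tool:
cat ingredients.txt | llm -t recipe
```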
This next example adds a `voice` parameter:

```yaml
system: Summarize this text in the voice of $voice
```

Then to run it (via strip-tags to remove HTML tags from the input):

```bash
curl -s 'https://til.simonwillison.net/macos/imovie-slides-and-audio' | \
  strip-tags -m | llm -t summarize -p voice GlaDOS
```

Example output:
> My previous test subject seemed to have learned something new about iMovie. They exported keynote slides as individual images [...] Quite impressive for a human.
The Prompt templates documentation provides more detailed examples.
Continue previous chat
You can now use `llm` to continue a previous conversation with the OpenAI chat models (`gpt-3.5-turbo` and `gpt-4`). This will include your previous prompts and responses in the prompt sent to the API, allowing the model to continue within the same context.
Use the new `-c/--continue` option to continue from the previous message thread:

```bash
llm "Pretend to be a witty gerbil, say hi briefly"
```

> Greetings, dear human! I am a clever gerbil, ready to entertain you with my quick wit and endless energy.

```bash
llm "What do you think of snacks?" -c
```

> Oh, how I adore snacks, dear human! Crunchy carrot sticks, sweet apple slices, and chewy yogurt drops are some of my favorite treats. I could nibble on them all day long!
The `-c` option will continue from the most recent logged message.

To continue a different chat, pass an integer ID to the `--chat` option. This should be the ID of a previously logged message. You can find these IDs using the `llm logs` command.
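For example (the ID shown here is hypothetical - substitute one from your own logs):

```bash
# List recent logged messages and note the ID of the one to continue from
llm logs

# Continue the chat whose logged message ID is 6 (hypothetical ID)
llm --chat 6 "What do gerbils dream about?"
```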
Thanks Amjith Ramanujam for contributing to this feature. #6
New mechanism for storing API keys
API keys for language models such as those by OpenAI can now be saved using the new `llm keys` family of commands.
To set the default key to be used for the OpenAI APIs, run this:
```bash
llm keys set openai
```

Then paste in your API key.
Keys can also be passed using the new `--key` command line option - this can be a full key or the alias of a key that has been previously stored.
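Both forms in action (the key value shown is a placeholder, not a real key):

```bash
# Pass a full key directly
llm "Ten fun names for a pet pelican" --key sk-xxxxxxxxxxxx

# Or pass the alias of a key previously stored with "llm keys set"
llm "Ten fun names for a pet pelican" --key openai
```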
See link-to-docs for more. #13
New location for the logs.db database
The `logs.db` database that stores a history of executed prompts no longer lives at `~/.llm/log.db` - it can now be found in a location that better fits the host operating system, which can be seen using:

```bash
llm logs path
```

On macOS this is `~/Library/Application Support/io.datasette.llm/logs.db`.
To open that database using Datasette, run this:
datasette "$(llm logs path)"You can upgrade your existing installation by copying your database to the new location like this:
cp ~/.llm/log.db "$(llm logs path)"
rm -rf ~/.llm # To tidy up the now obsolete directoryThe database schema has changed, and will be updated automatically the first time you run the command.
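A slightly more defensive variant of that migration - this sketch assumes the destination directory may not exist yet, and only deletes the old directory if the copy succeeded:

```bash
# Make sure the destination directory exists before copying
mkdir -p "$(dirname "$(llm logs path)")"

# Copy the old database; remove the old directory only if the copy worked
cp ~/.llm/log.db "$(llm logs path)" && rm -rf ~/.llm
```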
That schema is included in the documentation. #35
Other changes
- New `llm logs --truncate` option (shortcut `-t`) which truncates the displayed prompts to make the log output easier to read. #16
- Documentation now spans multiple pages and lives at https://llm.datasette.io/ #21
- The default `llm chatgpt` command has been renamed to `llm prompt` (illustrated below). #17
- Removed the `--code` option in favour of the new prompt templates mechanism. #24
- Responses are now streamed by default, if the model supports streaming. The `-s/--stream` option has been removed. A new `--no-stream` option can be used to opt out of streaming. #25
- The `-4/--gpt4` option has been removed in favour of `-m 4` or `-m gpt4`, using a new mechanism that allows models to have additional short names.
- The new `gpt-3.5-turbo-16k` model with a 16,000 token context length can now also be accessed using `-m chatgpt-16k` or `-m 3.5-16k`. Thanks, Benjamin Kirkbride. #37
- Improved display of error messages from OpenAI. #15
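The renamed and new options from this list in action (the prompts are illustrative):

```bash
llm prompt "Say hello"            # formerly: llm chatgpt "Say hello"
llm "Explain quicksort" -m 4      # formerly: llm -4 "Explain quicksort"
llm "Summarize this" -m 3.5-16k   # short name for gpt-3.5-turbo-16k
llm logs -t                       # truncate prompts in the log output
```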