Highlights
This is an incremental release with support for Docker Model Runner and improvements to the AI functionality introduced in v0.10.0.
Docker Model Runner
Colima now supports Docker Model Runner as an AI model runner backend.
Docker Model Runner is now the default due to its simpler requirements; Ramalama remains available.
```sh
# run a model (uses docker runner by default)
colima model run gemma3

# serve a model, chat interface available at localhost:8080
colima model serve gemma3

# explicitly specify the runner
colima model run gemma3 --runner docker
colima model run gemma3 --runner ramalama

# set the runner at start time
colima start --model-runner ramalama
```

The runner can be configured via:

- the `--runner` flag on `colima model` commands
- the `--model-runner` flag at `colima start`
- `modelRunner` in the configuration file (see the sketch below)
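The configuration-file option is not shown in the commands above; the sketch below illustrates it, assuming the profile config that `colima start --edit` opens (typically `~/.colima/default/colima.yaml` for the default profile).

```sh
# edit the profile configuration before starting the VM
colima start --edit

# in the opened YAML, set the runner key, e.g.:
#   modelRunner: ramalama

# once a model is served, the chat interface should respond on
# localhost:8080 (an alternate port is chosen if 8080 is taken)
curl -I http://localhost:8080
```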
Other Updates
- The `DOCKER_CONFIG` environment variable is now respected (see the sketch below).
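A minimal sketch of the new behaviour, assuming an alternate config directory at `~/.docker-work` (an arbitrary path used here for illustration):

```sh
# point Docker (and now Colima) at an alternate config directory
export DOCKER_CONFIG="$HOME/.docker-work"

# Colima reads and writes its Docker context and credentials here
# instead of the default ~/.docker
colima start
```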
Commits
- chore: propose fix a typo by @jeis4wpi in #1506
- docs: add install instructions for krunkit by @alanpmullane in #1511
- docker: respect DOCKER_CONFIG by @utkarshgupta137 in #1512
- ai: add docker model runner. by @abiosoft in #1513
- ai: choose alternate available ports for serving API/webui by @abiosoft in #1515
- ai: refactor model runners by @abiosoft in #1516
- core: update disk images by @abiosoft in #1517
New Contributors
- @jeis4wpi made their first contribution in #1506
- @alanpmullane made their first contribution in #1511
- @utkarshgupta137 made their first contribution in #1512
Full Changelog: v0.10.0...v0.10.1