Llama 3.2 Vision
Ollama 0.4 adds support for Llama 3.2 Vision. After upgrading or downloading Ollama, run:
ollama run llama3.2-vision
For the larger, 90B version of the model, run:
ollama run llama3.2-vision:90b
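As a minimal sketch of sending an image to the model, an image file path can be included directly in the prompt; the path and question below are placeholders, and this assumes the CLI's existing behavior of picking up image paths in prompts for multimodal models:
ollama run llama3.2-vision "Describe this image: ./photo.jpg"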
What's changed
- Support for the Llama 3.2 Vision (i.e. Mllama) architecture
- Follow-on requests to vision models are now much faster
- Fixed issues where stop sequences would not be detected correctly
- Ollama can now import models from Safetensors without a Modelfile when running ollama create my-model (see the sketch after this list)
- Fixed issue where redirecting output to a file on Windows would cause invalid characters to be written
- Fixed issue where invalid model data would cause Ollama to error
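A minimal sketch of the Safetensors import mentioned above, assuming a local directory holding the model's safetensors weights, config, and tokenizer files, and assuming ollama create is run from inside that directory; the directory and model names are placeholders:
cd ./my-finetuned-model   # hypothetical directory with *.safetensors, config.json, tokenizer files
ollama create my-model    # imports the weights directly, no Modelfile required
ollama run my-model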
Full Changelog: v0.3.14...v0.4.0