Use the model you like! 🥳🎉
Starting with the 1.2-pre version, tlm deprecates the use of Modelfiles and can work with any base model without creating its own. This was the most requested change from earlier discussions. Initially, I wanted to abstract the underlying model away from users so they could just focus on getting good results. But with the boom of new open-source models, I've decided not to have an opinion on which model to use. Users can choose and decide which one is best for them!
Changelog
- Removed the Modelfile approach. `tlm` now uses base models directly, without requiring custom model creation. `tlm config` lists all available Ollama models and lets you select a default model to work with.
- The default model is now `qwen2.5-coder:3b`, which is accurate and blazing fast at the same time.
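A minimal sketch of the new workflow, assuming Ollama is installed and running locally: pull the base model you want, then pick it as the default via `tlm config` (which is mentioned above; no Modelfile step is needed anymore).

```shell
# Make the default base model available locally (any Ollama model works)
ollama pull qwen2.5-coder:3b

# Interactively select a default model from the installed Ollama models
tlm config
```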
Full Changelog: 1.1...1.2-pre