OpenAI o1-mini and o3-mini are now built-in models! 🔥 You can add other o-series models with the "OpenAI" provider as well (please confirm your tier with OpenAI and check that you have access to their o-series API).
We also have a much better model table in the settings, where you can give each model your own display name, mark its capabilities (vision, reasoning, websearch), and drag-and-drop to reorder models as you like! Thanks to @Emt-lin for the implementation!
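For illustration only, here is a rough sketch of what such a custom model entry could look like. The type and field names below (`CustomModelEntry`, `displayName`, `capabilities`) are hypothetical and not necessarily the plugin's actual schema:

```ts
// Hypothetical shape of a custom model entry; the plugin's real settings schema may differ.
interface CustomModelEntry {
  name: string;         // provider-side model ID, e.g. "o3-mini"
  provider: string;     // e.g. "openai" or "mistral"
  displayName?: string; // user-facing label shown in the model picker
  capabilities?: Array<"vision" | "reasoning" | "websearch">;
}

// Example: an o-series model added through the OpenAI provider.
const o3Mini: CustomModelEntry = {
  name: "o3-mini",
  provider: "openai",
  displayName: "OpenAI o3-mini",
  capabilities: ["reasoning"],
};
```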
⚠️ Announcement for Believers
If you used copilot-plus-large to index your vault, you must do a force re-index to keep it working. We found the previous provider unstable, so we switched to another one. As the product matures, there won't be such changes anymore. Sorry for the disruption 🙏
Improvements
- #1225 Support custom model displayNames and reorderable Model list. @Emt-lin
- #1232 Adding support for Mistral as an LLM provider @o-mikhailovskii
- #1240 Add configurable batch size, update embedding requests per min @logancyang
- #1239 Add ModelCapability enum and capability detection @logancyang
- #1223 feat: update Gemini model names to v2.0 @anpigon
- #1238 Add openai o-series support @logancyang
- #1220 refactor: Improve source links formatting and rendering. @iinkov
- #1207 refactor: optimize the switching experience of the model. @Emt-lin
- #1242 Reduce binary size @zeroliu
Bug Fixes
- #1243 Fix API key not switching in custom model form @Emt-lin
- #1245 Remove custom base URL fallback in YouTube transcript retrieval @logancyang
- #1237 Update copilot-plus-large @logancyang
- #1227 Fix max tokens passing @logancyang
- #1226 fix: Handle undefined activeEmbeddingModels in settings sanitization @logancyang