Updates:
- You can now run Transformer Lab as a web app! (details to follow)
- Upgraded MLX to support several new models including Gemma3.
- Embeddings: Set separate embedding model per experiment and use that model on Embeddings tab
- Plugins: Display supported hardware architecture, new plugin for generating quality RAG QnA pairs
- Basic anonymized analytics that you can opt out of (details to follow)
- Several bug fixes, including issues with loading and deleting adapters and with documents within folders
New Model Support:
- We now support the new LGAI Exaone Deep models (2.4B, 7.8B, and 32B)
- We also support Mistral Small 3.1 24B Instruct 2503
- This support also enables training Gemma 3 on MLX
More details:
- Add 0 values to radar chart in place of undefined to stop breaking by @deep1401 in #307
- add a tint color to the background of plugins by @aliasaria in #310
- show supported architectures in plugins list by @aliasaria in #311
- add workflows (at least very basic support) by @sanjaycal in #312
- Fix clashing with plugin config fields which have the word dataset in them by @deep1401 in #316
- detailed metric comparisons can be done up and down by @aliasaria in #318
- tasks by @sanjaycal in #319
- Remove duplicate timestamp from below the eval name by @deep1401 in #320
- Add/embedding model info by @abhimazu in #321
- Add start:cloud target to serve TransformerLab on web by @dadmobile in #322
Full Changelog: v0.11.0...v0.11.1