Headline
- Enhanced support for FastFlowLM, including port auto-selection and gpt-oss-20b support
- Debug logs are now easily accessible from within the web UI
- llama-bench is now available as a llama.cpp benchmarking option in the dev CLI
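The port auto-selection above (see "Enable dynamic port finding in FLM server") typically works by asking the operating system for a free ephemeral port. A minimal sketch of that general technique in Python; this illustrates the standard bind-to-port-0 pattern, not the actual FLM server code:

```python
import socket

def find_free_port(host: str = "127.0.0.1") -> int:
    """Return a TCP port that was free at the time of the call.

    Binding to port 0 asks the OS to pick an unused ephemeral port;
    we read the chosen port back with getsockname() before closing.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind((host, 0))
        return s.getsockname()[1]
```

A server can then launch on `find_free_port()` instead of failing when a hard-coded default port is already in use. Note the small race window: another process could grab the port between this call and the server's own bind, so production code usually retries on bind failure.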
What's Changed
- Add additional acknowledgement for FLM by @ZaneNi in #435
- Enable dynamic port finding in FLM server by @ZaneNi in #436
- Fix gpt-oss-20b size field in server_models.json by @jeremyfowers in #439
- Add gpt-oss-20b to FLM by @Tetramatrix in #445
- Fix windows conda CI workflows by @jeremyfowers in #443
- Fix typo in example script in concepts.md by @jasonhernandez in #446
- Integrating llama-bench.exe into lemonade CLI by @amd-pworfolk in #384
- Log Reference by @siavashhub in #447
- Rev version and add hidden playable1 model by @jeremyfowers in #450
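The llama-bench integration (#384) wraps the benchmarking tool that ships with llama.cpp. A hedged sketch of how such a wrapper might assemble the command line, using llama-bench's standard `-m` (model), `-p` (prompt tokens), and `-n` (generated tokens) flags; the function name and defaults here are illustrative, not the lemonade CLI's actual implementation:

```python
import shutil

def build_llama_bench_cmd(model_path: str,
                          prompt_tokens: int = 512,
                          gen_tokens: int = 128) -> list:
    """Build an argv list for llama-bench (from llama.cpp).

    -m selects the GGUF model file, -p the prompt length to benchmark,
    -n the number of tokens to generate per run.
    """
    # Prefer the binary on PATH; fall back to the bare name so the
    # caller sees a clear error if llama-bench is not installed.
    exe = shutil.which("llama-bench") or "llama-bench"
    return [exe,
            "-m", model_path,
            "-p", str(prompt_tokens),
            "-n", str(gen_tokens)]
```

The returned list can be handed to `subprocess.run` to execute the benchmark and capture its throughput report.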
New Contributors
- @ZaneNi made their first contribution in #435
- @Tetramatrix made their first contribution in #445
- @jasonhernandez made their first contribution in #446
Full Changelog: v8.1.11...v8.1.12