Bug Fixes
- Memory recall loop (#583): `build_memory_section()` no longer tells the model to call `memory_recall` when memories are already injected into the prompt. Models now use provided memories directly.
- Raw errors in channels (#584): Channel bridge sanitizes LLM error messages before sending to users. Rate limits, auth errors, and JSON dumps are replaced with clean, user-friendly messages.
- HAND.toml format (#588): Parser now accepts both the flat root-level format and the documented `[hand]` table format.
- Token quota exceeded (#591): Pre-emptive quota-aware compaction triggers before LLM calls when the session token count approaches the remaining hourly quota headroom.
- log_level config (#594): `log_level` in config.toml now takes effect. Priority: `RUST_LOG` env var > config.toml `log_level` > default `"info"`.
- Max iterations error (#599): Error message now includes guidance on configuring `[autonomous] max_iterations` in agent.toml.
- Config backup (#578): config.toml is backed up to config.toml.bak before any auto-rewrite (provider key save, config set/unset).
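For the HAND.toml fix (#588), the two accepted layouts might look like this (the key name is illustrative; only the flat-vs-`[hand]`-table distinction comes from the changelog):

```toml
# Flat root-level format (previously rejected, now accepted):
name = "my-hand"

# Equivalent documented [hand] table format:
[hand]
name = "my-hand"
```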
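The error-sanitization fix (#584) boils down to mapping raw provider errors onto short user-facing messages. A minimal sketch of that idea, with an illustrative function name and message strings (not the actual channel-bridge code):

```rust
// Hypothetical sketch: classify a raw LLM error string and return a clean,
// user-friendly message instead of forwarding the raw error verbatim.
fn sanitize_llm_error(raw: &str) -> String {
    let lower = raw.to_lowercase();
    if lower.contains("rate limit") || lower.contains("429") {
        // Rate-limit responses become a simple "try again" message.
        "The model is busy right now; please try again shortly.".to_string()
    } else if lower.contains("unauthorized") || lower.contains("invalid api key") {
        // Auth failures point at credentials without leaking details.
        "There is a problem with the provider credentials.".to_string()
    } else {
        // Anything else (including raw JSON dumps) gets a generic message.
        "The model request failed; please try again.".to_string()
    }
}
```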
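The quota-aware compaction check (#591) can be sketched as a simple threshold test before each LLM call. The function name and the 80% threshold are assumptions for illustration, not the shipped values:

```rust
// Hypothetical sketch: decide whether to compact the session before an LLM
// call, based on how close the session token count is to the remaining
// hourly quota headroom.
fn should_compact(session_tokens: u64, remaining_quota: u64) -> bool {
    // Compact once the session would consume more than 80% of the tokens
    // left in the current hourly window (threshold is an assumption).
    session_tokens * 5 >= remaining_quota * 4
}
```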
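The log-level precedence from #594 (`RUST_LOG` env var > config.toml `log_level` > default `"info"`) is just an ordered fallback chain. A minimal sketch, with a hypothetical helper name:

```rust
// Hypothetical sketch of the precedence chain: the RUST_LOG environment
// variable wins, then the config.toml log_level, then the "info" default.
fn resolve_log_level(rust_log_env: Option<&str>, config_log_level: Option<&str>) -> String {
    rust_log_env
        .or(config_log_level)
        .unwrap_or("info")
        .to_string()
}
```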
Enhancements
- Default model in Web UI (#593): Spawn wizard fetches `default_provider`/`default_model` from `/api/status` instead of hardcoding groq/llama-3.3-70b-versatile.