RightNow-AI/openfang v0.4.1


Bug Fixes

  • Memory recall loop (#583): build_memory_section() no longer tells the model to call memory_recall when memories are already injected into the prompt. Models now use provided memories directly.
  • Raw errors in channels (#584): The channel bridge now sanitizes LLM error messages before they reach users. Rate-limit errors, auth errors, and raw JSON dumps are replaced with clean, user-friendly messages.
  • HAND.toml format (#588): Parser now accepts both flat root-level format and the documented [hand] table format.
  • Token quota exceeded (#591): Quota-aware compaction now runs pre-emptively, before an LLM call, when the session token count approaches the remaining hourly quota headroom.
  • log_level config (#594): log_level in config.toml now takes effect. Priority: RUST_LOG env var > config.toml log_level > default "info".
  • Max iterations error (#599): Error message now includes guidance on configuring [autonomous] max_iterations in agent.toml.
  • Config backup (#578): config.toml is backed up to config.toml.bak before any auto-rewrite (provider key save, config set/unset).
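To illustrate the HAND.toml fix (#588): the parser now accepts the same keys either at the root level or under the documented [hand] table. The field names below (name, description) are illustrative placeholders, not necessarily openfang's actual schema; the two forms are shown together only for comparison, and either one alone is a valid file.

```toml
# Flat root-level format (field names are hypothetical examples)
name = "my-hand"
description = "Example hand"

# Equivalent documented [hand] table format
[hand]
name = "my-hand"
description = "Example hand"
```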
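The improved max-iterations error (#599) points at a setting in agent.toml. A minimal example of that setting, with an illustrative value rather than a recommended default:

```toml
# agent.toml -- raise the autonomous-loop iteration cap (value is an example)
[autonomous]
max_iterations = 50
```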
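The quota-aware compaction trigger (#591) boils down to a threshold check before each LLM call. A minimal sketch of that check, assuming a headroom factor and function names that are illustrative rather than openfang's actual internals:

```rust
// Hedged sketch of the #591 trigger condition. `should_compact`, its
// parameters, and the 0.8 headroom factor are illustrative assumptions,
// not openfang's real API.

/// Returns true when the session's token count is close enough to the
/// remaining hourly quota that history should be compacted before the
/// next LLM call.
fn should_compact(session_tokens: u64, quota_remaining: u64, headroom: f64) -> bool {
    (session_tokens as f64) >= (quota_remaining as f64) * headroom
}

fn main() {
    // 9k session tokens against 10k remaining quota, 80% headroom: compact first.
    assert!(should_compact(9_000, 10_000, 0.8));
    // Plenty of quota left: no compaction needed.
    assert!(!should_compact(1_000, 10_000, 0.8));
    println!("ok");
}
```

Running the check before the call, rather than reacting to a quota error afterwards, is what makes the compaction "pre-emptive".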
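The log_level precedence from #594 (RUST_LOG env var > config.toml log_level > default "info") can be sketched as a small selection function. The function name and signature here are illustrative, not openfang's actual code:

```rust
// Sketch of the resolution order fixed in #594:
// RUST_LOG env var > config.toml log_level > default "info".
// Name and signature are assumptions for illustration only.
fn effective_log_level(rust_log_env: Option<&str>, config_log_level: Option<&str>) -> String {
    rust_log_env
        .or(config_log_level)
        .unwrap_or("info")
        .to_string()
}

fn main() {
    // The env var wins over the config file.
    assert_eq!(effective_log_level(Some("debug"), Some("warn")), "debug");
    // No env var: the config.toml value applies.
    assert_eq!(effective_log_level(None, Some("warn")), "warn");
    // Neither set: fall back to the default.
    assert_eq!(effective_log_level(None, None), "info");
    println!("ok");
}
```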

Enhancements

  • Default model in Web UI (#593): Spawn wizard fetches default_provider/default_model from /api/status instead of hardcoding groq/llama-3.3-70b-versatile.
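Assuming /api/status exposes the two fields named above (the exact response shape is an assumption, not confirmed by these notes), the spawn wizard's lookup would read something like:

```json
{
  "default_provider": "groq",
  "default_model": "llama-3.3-70b-versatile"
}
```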
