## New Features

### Multi-MCP Server Support
- Configure additional MCP servers via `mcp_servers.json` (example below)
- n8n remains in `.env` as the foundational server
- Tools from non-n8n servers are prefixed with `server_name__` to avoid name collisions
- Supports SSE and streamable_http transports
- Tested with the Home Assistant MCP and the Memento memory server
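A minimal sketch of what `mcp_servers.json` could look like; the `transport` and `url` keys, the endpoints, and the server names here are illustrative assumptions rather than the project's exact schema:

```json
{
  "home_assistant": {
    "transport": "sse",
    "url": "http://homeassistant.local:8123/mcp_server/sse"
  },
  "memento": {
    "transport": "streamable_http",
    "url": "http://localhost:8080/mcp"
  }
}
```

With a layout like this, a hypothetical `get_state` tool on the Home Assistant server would be exposed to the agent as `home_assistant__get_state`.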
### Runtime Settings UI
- Configure agent settings from the web frontend
- Change TTS voice, model, temperature without rebuilding
- Settings persist in `settings.json` (sketch below)
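As an illustration of what gets persisted, `settings.json` might hold entries like these; the key names and values are assumptions, not the file's actual contents:

```json
{
  "tts_voice": "default",
  "model": "llama3.1",
  "temperature": 0.7
}
```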
### Context Management
- Tool data cache preserves structured responses for follow-up queries
- Sliding window keeps the conversation manageable (`OLLAMA_MAX_TURNS`); see the sketch after this list
- Prevents context overflow; the system prompt is never truncated
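A rough Python sketch of the sliding-window idea under these constraints (keep the system prompt, drop the oldest turns); the function and variable names are illustrative, not the project's actual code:

```python
import os

MAX_TURNS = int(os.getenv("OLLAMA_MAX_TURNS", "20"))

def trim_history(messages: list[dict]) -> list[dict]:
    """Keep the system prompt intact and only the most recent turns.

    A turn is counted as one user message plus the assistant reply that
    follows it, so the last MAX_TURNS * 2 non-system messages are kept.
    """
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-MAX_TURNS * 2:]
```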
### Frontend Improvements
- Tool-use indicator shows whether a response used a tool (green wrench)
- Reload tools button in control bar
- Click the tool indicator to see the parameters that were used
## Bug Fixes
- Fixed TTS reading scores like "30-23" with a "minus"; now read as "30 to 23" (see the sketch after this list)
- Fixed `OLLAMA_NUM_CTX` default to 8192
- Fixed Windows line ending issues (`.gitattributes`)
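The score fix amounts to normalizing digit-hyphen-digit patterns before text reaches the TTS engine. A hedged sketch of that kind of preprocessing, not the project's exact rule:

```python
import re

def normalize_scores(text: str) -> str:
    """Rewrite number-hyphen-number pairs (e.g. sports scores) as "X to Y"
    so the TTS engine does not read the hyphen as "minus".
    """
    return re.sub(r"\b(\d+)-(\d+)\b", r"\1 to \2", text)

assert normalize_scores("The final was 30-23.") == "The final was 30 to 23."
```

A real implementation likely needs more context, since the same pattern also matches things like date ranges.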
## Configuration

New `.env` options:
- `OLLAMA_MAX_TURNS` - max conversation turns kept in the sliding window (default: 20)
- `TOOL_CACHE_SIZE` - number of tool responses to cache (default: 3)
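For reference, these would be set in `.env` like so (values shown are the defaults):

```bash
# Max conversation turns kept in the sliding window
OLLAMA_MAX_TURNS=20
# Number of structured tool responses to cache
TOOL_CACHE_SIZE=3
```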
New config file:
- `mcp_servers.json` - additional MCP servers (optional); see the example under Multi-MCP Server Support above