## Summary
TradingAgents v0.2.4 ships structured-output decision agents, opt-in checkpoint resume, a persistent decision log with outcome-grounded reflections, four new LLM providers, and a Docker image.
## Structured-Output Decision Agents
- Research Manager, Trader, and Portfolio Manager call `llm.with_structured_output(Schema)` on their primary invocation and return typed Pydantic instances.
- Each provider's native structured-output mode is selected automatically: `json_schema` for OpenAI / xAI, `response_schema` for Gemini, tool use for Anthropic, and function calling for OpenAI-compatible providers.
- Render helpers preserve the existing markdown shape, so the memory log, CLI display, and saved reports keep working unchanged.
- A five-tier rating scale (Buy / Overweight / Hold / Underweight / Sell) is used consistently across the Research Manager, Portfolio Manager, signal processor, and memory log.
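To make the flow concrete, here is a minimal sketch of a typed decision plus a render helper that reproduces the legacy markdown shape. All names (`PortfolioDecision`, `render_decision`, the field set) are illustrative assumptions, not the project's actual schema, which is defined in the TradingAgents codebase and passed to `llm.with_structured_output(...)`.

```python
from dataclasses import dataclass

# The five-tier rating scale used consistently across agents.
RATINGS = ("Buy", "Overweight", "Hold", "Underweight", "Sell")

@dataclass
class PortfolioDecision:
    """Hypothetical typed decision (a stand-in for the real Pydantic schema)."""
    rating: str     # one of the five tiers above
    rationale: str  # free-text justification from the agent

    def __post_init__(self):
        if self.rating not in RATINGS:
            raise ValueError(f"unknown rating: {self.rating!r}")

def render_decision(d: PortfolioDecision) -> str:
    """Render the typed decision back into the pre-existing markdown shape,
    so downstream consumers (memory log, CLI display, saved reports) see
    the same text they did before structured output was introduced."""
    return f"## Portfolio Manager Decision\n\n**Rating:** {d.rating}\n\n{d.rationale}\n"

print(render_decision(PortfolioDecision("Hold", "Mixed signals; await earnings.")))
```

The key design point is that typing happens at the LLM boundary, while everything downstream still consumes rendered markdown.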
## Persistence & Recovery
- LangGraph checkpoint resume via `--checkpoint`. State is saved after each node, so crashed or interrupted runs resume from the last successful step. Per-ticker SQLite databases live under `~/.tradingagents/cache/checkpoints/`.
- A persistent decision log replaces the per-agent BM25 memory. Decisions are stored automatically at the end of every analysis; the next run on the same ticker resolves prior pending entries with the realised return, alpha vs. SPY, and a one-paragraph reflection.
- An optional `memory_log_max_entries` config setting caps resolved entries; pending entries are never pruned.
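The pending-then-resolve lifecycle can be sketched with stdlib `sqlite3`. The table layout, column names, and the alpha arithmetic below are assumptions for illustration only; the project's actual log schema may differ.

```python
import sqlite3

# Illustrative schema only -- not the project's actual decision-log layout.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE decision_log (
    ticker TEXT, date TEXT, rating TEXT,
    realised_return REAL, alpha_vs_spy REAL, reflection TEXT,
    status TEXT DEFAULT 'pending')""")

def record_decision(ticker: str, date: str, rating: str) -> None:
    """Store a decision automatically at the end of an analysis run."""
    conn.execute("INSERT INTO decision_log (ticker, date, rating) VALUES (?, ?, ?)",
                 (ticker, date, rating))

def resolve_pending(ticker: str, asset_return: float, spy_return: float,
                    reflection: str) -> None:
    """On the next run for the same ticker, ground every pending entry with
    the realised return, alpha vs. SPY, and a short reflection."""
    conn.execute(
        """UPDATE decision_log
           SET realised_return = ?, alpha_vs_spy = ?, reflection = ?, status = 'resolved'
           WHERE ticker = ? AND status = 'pending'""",
        (asset_return, asset_return - spy_return, reflection, ticker))

record_decision("NVDA", "2025-01-02", "Overweight")
resolve_pending("NVDA", 0.042, 0.013, "Rating held up; momentum thesis worked.")
print(conn.execute("SELECT alpha_vs_spy, status FROM decision_log").fetchone())
```

Because entries start as `pending` and are only ever updated to `resolved`, reflections are always grounded in realised outcomes rather than generated speculatively.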
## Provider Coverage
- DeepSeek, Qwen (Alibaba DashScope), GLM (Zhipu), and Azure OpenAI providers added.
- Dynamic OpenRouter model selection.
- The default `backend_url` is now `None`, so each provider client falls back to its native endpoint instead of leaking the OpenAI URL into Gemini and other clients.
## Cross-Platform & Deployment
- Docker support with multi-stage build for cross-platform deployment.
- Cache and log directories moved to `~/.tradingagents/` to resolve Docker permission issues.
- All file I/O passes an explicit `encoding="utf-8"`, so Windows users no longer hit `UnicodeEncodeError`.
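A minimal illustration of the fix (using a temp directory instead of `~/.tradingagents/`): on Windows, the default file encoding is a legacy code page such as cp1252, which raises `UnicodeEncodeError` on characters like `→`; passing `encoding="utf-8"` explicitly makes the write portable.

```python
import tempfile
from pathlib import Path

# A report containing non-ASCII characters that break cp1252 encoding.
report = "NVDA → Overweight (résumé of today's analysis)"

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "report.md"
    # Explicit encoding: identical behaviour on Linux, macOS, and Windows.
    path.write_text(report, encoding="utf-8")
    assert path.read_text(encoding="utf-8") == report
```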
## Stability
- Empty memory no longer triggers fabricated past-lessons; the redesigned memory layer makes this structurally impossible.
- `SignalProcessor` reads the rating from the rendered Portfolio Manager markdown via a deterministic heuristic, eliminating an extra LLM call per analysis.
- OpenAI structured-output calls default to `method="function_calling"` to avoid noisy Pydantic serialization warnings from langchain-openai's Responses-API parse path.
- Tool-call logging processes every chunk message; memory score normalization handles empty arrays.
- Pytest fixtures (lazy LLM client imports plus placeholder API keys) so the test suite runs cleanly without credentials.
## Acknowledgments
We thank the community contributors who shaped this release. The full list is in the CHANGELOG.