nanobot v0.1.5.post2 is here – 67 PRs merged, 12 new contributors. The agent's world got bigger and steadier.

If v0.1.5.post1 was about the agent learning to manage itself, v0.1.5.post2 is about reach and polish. Windows and Python 3.14 joined the supported matrix. The `read_file` tool learned to understand DOCX, XLSX, and PPTX. Microsoft Teams arrived as a channel. The OpenAI-compatible API started streaming via SSE. And beneath all of that, ~50 smaller fixes across cron, memory, retry, session files, and provider quirks kept turning "it works" into "it's solid." A dedicated browser UI also started taking shape in the repo – source preview for now.
## Highlights
- **Windows + Python 3.14 – first-class support** – A full CI matrix covering Windows runners and Python 3.14 landed, along with install markers and runtime fixes for the quirks those platforms expose. If you've been running nanobot on WSL because "Windows didn't quite work," you can stop. (#3194)
- **Office documents, natively readable** – The `read_file` tool now extracts text from DOCX, XLSX, and PPTX – tables and grouped shapes included, workbook handles closed safely. Combined with the earlier PDF support, the agent can open whatever format your colleague just emailed you without bouncing through a converter. (#3336, #3269, #3353)
- **OpenAI-compatible API – SSE streaming** – `/v1/chat/completions` now emits SSE chunks when `stream=true`, wiring up the existing `on_stream`/`on_stream_end` callbacks. Any client built for the OpenAI API – LangChain, LlamaIndex, your own frontend – gets live deltas from nanobot instead of waiting for the whole response. The endpoint also stopped terminating streams with a success marker after a backend failure, so errors surface honestly. (#3222, #3262)
- **Microsoft Teams, MiniMax thinking, LM Studio, MyTool** – Microsoft Teams joined the channel roster. MiniMax got a dedicated Anthropic-style thinking endpoint plus a `reasoning_effort` → `reasoning_split` mapping fix. LM Studio is now supported via nullable API keys for local servers that don't expect one. The new MyTool lets the agent introspect its own runtime configuration – and hides sensitive nested config fields in `check` output so self-inspection doesn't leak secrets. (#3197, #3160, #3363, #3186, #3177, #3261)
- **Reliability – the unglamorous half of a release** – Session files now use atomic writes with corrupt-file repair, so a bad shutdown no longer eats your history. Memory cursor recovery handles non-integer corruption. Auto-compact skips sessions with active tasks and unifies summary injection across consolidation paths. Providers gain a circuit breaker for Responses API fallback and recognize ZhiPu 1302 rate limits. Cron stops leaking intermediate progress, its tool schema works with OpenAI Codex/Responses, and retry heartbeats no longer spill into user channels. Subagent follow-ups persist in session history. Half of these you'll never notice – which is exactly the point. (#3312, #3340, #3081, #3304, #3302, #3356, #3320, #3295, #3229, #3242)
- **Channels – quieter and sharper** – Telegram gained mid-stream splitting for long replies and better markdown rendering for modern LLM output. Discord stopped treating bot-to-bot messages as self-loops and added channel-based allow-lists. Email deduplicates SPF/DKIM-rejected messages to stop log spam and ignores self-sent mailbox messages. WeCom parses mixed inbound messages correctly. Each is individually small; together they mean fewer "why did my bot do that" moments. (#3329, #3355, #3280, #3171, #3325, #3228, #3161)
- **WebUI – early preview, source only** – A dedicated `webui/` has landed in the repo with a WebSocket chat flow, an i18n locale switcher, Apple-inspired typography with CJK support, and live dark-mode code-block theming. The underlying WebSocket channel also learned to multiplex multiple `chat_id`s over a single connection. Heads up: this is source-preview only – the WebUI is intentionally not bundled into the published wheel yet. If you want to try it, clone the repo and run it from `webui/`. A packaged release will follow once the UX settles. (#3310, #3272, #3314, #3306)
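To make the DOCX extraction concrete, here is a minimal, stdlib-only sketch of the idea behind native Office reading: a DOCX file is just a zip archive whose `word/document.xml` holds the paragraphs. The function name and the paragraph-only scope are ours for illustration – nanobot's actual extractor also handles XLSX, PPTX tables, and grouped shapes.

```python
import zipfile
import xml.etree.ElementTree as ET

# WordprocessingML namespace used by every element in word/document.xml.
W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def extract_docx_text(path_or_file):
    """Pull paragraph text out of a DOCX (a zip containing word/document.xml).

    Stdlib-only sketch: each <w:p> paragraph is flattened by joining the
    text of its <w:t> runs. Real extractors also walk tables, headers,
    footnotes, and embedded objects.
    """
    with zipfile.ZipFile(path_or_file) as zf:
        root = ET.fromstring(zf.read("word/document.xml"))
    paragraphs = []
    for p in root.iter(W + "p"):
        runs = [t.text or "" for t in p.iter(W + "t")]
        if runs:
            paragraphs.append("".join(runs))
    return "\n".join(paragraphs)
```

The same zip-plus-XML structure underlies XLSX and PPTX, which is why one `read_file` tool can cover all three formats.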
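If you are wiring your own frontend to the new streaming endpoint, the chunks follow the standard OpenAI SSE shape (`data: {...}` lines ending with `data: [DONE]`). A minimal accumulator, assuming that format – the function name is ours:

```python
import json

def accumulate_sse_deltas(sse_lines):
    """Rebuild the assistant message from OpenAI-style SSE chunk lines.

    Each payload line looks like
        data: {"choices":[{"delta":{"content":"..."}}]}
    and the stream terminates with `data: [DONE]`.
    """
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines and SSE comments
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)
```

In practice you would not need this by hand: any OpenAI SDK pointed at nanobot's base URL with `stream=true` yields the same deltas.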
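The atomic-write pattern behind the session-file fix is worth knowing in its own right: write to a temp file in the same directory, fsync, then `os.replace` over the target, so readers only ever see the old file or the new one. A generic sketch of the technique (not nanobot's actual code – function names and the JSON payload are illustrative):

```python
import json
import os
import tempfile

def atomic_write_json(path, data):
    """Write JSON via temp file + os.replace so a crash mid-write
    never leaves a half-written session file behind."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(data, f)
            f.flush()
            os.fsync(f.fileno())  # force bytes to disk before the rename
        os.replace(tmp, path)  # atomic on both POSIX and Windows
    except BaseException:
        os.unlink(tmp)  # clean up the temp file on any failure
        raise

def load_json_or_repair(path, default):
    """Corrupt-file repair on the read side: fall back to `default`
    when the file is missing or not valid JSON."""
    try:
        with open(path) as f:
            return json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        return default
```

The rename step is the whole trick: `os.replace` swaps the file in one filesystem operation, so "a bad shutdown no longer eats your history."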
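Multiplexing several `chat_id`s over one WebSocket boils down to tagging each frame with its chat and routing on the receiving side. A toy dispatcher showing the idea – the frame shape (`{"chat_id": ..., "text": ...}`) is our assumption, not nanobot's actual wire format:

```python
import json

class ChatMultiplexer:
    """Route frames arriving on a single connection to per-chat handlers."""

    def __init__(self):
        self.handlers = {}

    def register(self, chat_id, handler):
        """Attach a callable that receives the text for one chat_id."""
        self.handlers[chat_id] = handler

    def dispatch(self, raw_frame):
        """Parse one JSON frame and hand it to the matching handler.

        Returns False for chat_ids with no registered handler, so the
        caller can decide whether to log or drop the frame.
        """
        frame = json.loads(raw_frame)
        handler = self.handlers.get(frame["chat_id"])
        if handler is None:
            return False
        handler(frame["text"])
        return True
```

One connection per browser tab, many conversations inside it – that is what keeps the WebUI's chat flow cheap on the server side.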
## Community
v0.1.5.post2 is what a release looks like when the foundation is done and people start finishing things. A Windows CI matrix from @JiajunBernoulli. Native Office document extraction from @aiguozhi123456. Session atomic writes from the same author, catching a real data-loss edge case. A Telegram mid-stream split that makes long replies actually readable. Twelve new contributors, and a lot of returning ones quietly landing the things that were on everyone's wish list. The agent runs on Windows. It reads your documents. It keeps its history even when the power blinks. And there's a UI taking shape in `webui/` for whoever wants to peek. That's a post release doing post-release work – and then some.
## What's Changed
- fix(agent): skip auto-compact for sessions with active agent tasks by @chengyongru in #3085
- fix(provider): recover trailing assistant as user to prevent Zhipu 1214 by @chengyongru in #3086
- fix(log): remove noisy no-op logs from auto-compact by @chengyongru in #3094
- fix(log): only log auto-compact when messages are actually archived by @chengyongru in #3099
- refactor(context): deduplicate system prompt by @chengyongru in #3162
- fix(channel.wecom): inbound mixed msg parse by @dzydzydzy7 in #3161
- fix(providers): guard chat_with_retry against explicit None max_tokens (#3102) by @04cb in #3157
- fix(cron): respect deliver flag when agent produces output by @chengyongru in #3168
- feat(provider): add MiniMax Anthropic endpoint for thinking mode by @Aisht669 in #3160
- fix(discord): remove duplicate channel_id assignment in message handler by @LeoFYH in #3178
- Add support for nullable API keys and LM Studio by @sohamb117 in #3186
- fix(memory): handle missing cursor key in history entries by @JiajunBernoulli in #3195
- feat(msteams): add Microsoft Teams channel by @chengyongru in #3197
- fix(skills): use yaml.safe_load for frontmatter parsing to handle multiline descriptions by @yanghan-cyber in #3141
- fix(status): correct context percentage and sync consolidator by @chengyongru in #3209
- fix(api): prevent upload filename collisions and reject unsupported image URLs by @mohamed-elkholy95 in #3187
- perf(tools): cache ToolRegistry.get_definitions() between mutations by @chengyongru in #3210
- feat(agent): add MyTool for runtime self-inspection by @chengyongru in #3177
- feat(api): add SSE streaming for /v1/chat/completions by @wanghesong2019 in #3222
- feat: add channel-based filtering for Discord by @Lbin91 in #3171
- feat(dream): git-based section age annotations for memory staleness by @chengyongru in #3212
- fix: pass apiBase from config to transcription providers (Groq & OpenAI) by @chengyongru in #3237
- feat(windows): Windows + Python 3.14 support β CI matrix, install markers, runtime fixes by @JiajunBernoulli in #3194
- fix(agent): preserve user message to prevent GLM error 1214 by @chengyongru in #3233
- fix(email): ignore self-sent mailbox messages by @yorkhellen in #3228
- fix(exec): pass allowed_env_keys to exec tool calls in subagents by @mcampo in #3238
- fix(docs): update channel plugin build backend to hatchling by @JiajunBernoulli in #3192
- fix(memory): fall back to raw_archive on LLM error response by @chengyongru in #3248
- fix: guard tool execution against non-compliant API gateway injection #3220 by @subalkum in #3225
- fix: make cron tool schema require message for add action by @sicnuyudidi in #3163
- fix(api): avoid success-style SSE termination after backend failure by @shaun0927 in #3262
- fix(my-tool): hide sensitive nested config fields in check output by @shaun0927 in #3261
- fix(utils): extract PPTX table cells and grouped shape text (#3250) by @04cb in #3269
- fix(loop): persist subagent follow-up events in session history by @xzq-xu in #3242
- fix(agent): stop leaking provider retry heartbeats to user channels by @SamZhu19921116 in #3229
- feat(websocket): multiplex multiple chat_ids over a single connection by @Re-bin in #3272
- refactor(templates): separate identity and SOUL responsibilities by @chengyongru in #3275
- feat(wizard): add Channel Common, API Server menus and constraint validation by @chengyongru in #3273
- feat: add issue templates by @chengyongru in #3287
- fix: prevent GitStore from creating nested repos and overwriting .gitignore by @longle325 in #3289
- fix: harden cron tool contract by @yeyitech in #3125
- fix(config): return provider default api base in config resolution by @morandot in #3112
- fix(cli): respect sys.stdout.isatty() in stream renderer (#3265) by @pixan-ai in #3271
- docs: refactor README into a docs-first landing page by @Re-bin in #3306
- fix: unify summary injection strategy between token consolidation and auto-compact by @JiajunBernoulli in #3304
- fix(providers): add circuit breaker for Responses API fallback by @chengyongru in #3302
- feat(webui): add initial browser UI with websocket chat and i18n by @Re-bin in #3310
- fix(cron): drop top-level oneOf so OpenAI Codex/Responses accept tool schema by @coldxiangyu163 in #3295
- fix(discord): allow bot-to-bot messaging, only drop self-loops (#3217) by @pixan-ai in #3280
- fix(session): prevent data loss with atomic writes and corrupt-file repair by @aiguozhi123456 in #3312
- style(webui): improve typography and code block rendering by @chengyongru in #3314
- fix: suppress intermediate progress output in cron jobs by @chengyongru in #3320
- fix(agent): align subagent result session key with main agent for mid-turn injection by @chengyongru in #3321
- fix(email): deduplicate SPF/DKIM-rejected emails to stop log spam by @chengyongru in #3325
- feat(telegram): mid-stream split in send_delta by @chengyongru in #3329
- fix(utils): strip malformed think tags and harmony channel markers by @hlgone in #3327
- fix(loop): preserve partial context when /stop cancels a task by @hussein1362 in #3299
- fix(anthropic): strip trailing assistant messages to prevent prefill error by @hussein1362 in #3297
- fix(mcp): retry once on transient connection errors by @hussein1362 in #3338
- agent: use ContextVar for tool routing context by @chengyongru in #3345
- fix(memory): harden cursor recovery against non-integer corruption by @MuataSr in #3340
- fix(retry): recognize ZhiPu 1302 rate-limit error for retry by @chengyongru in #3356
- fix(telegram): improve markdown rendering for modern LLM output by @hussein1362 in #3355
- fix(commands): intercept non-priority commands during active turn by @chengyongru in #3359
- fix(utils/document): use try/finally in _extract_xlsx to ensure workbook is always closed by @XJPeng12 in #3353
- feat(read_file): add DOCX, XLSX, PPTX office document support by @aiguozhi123456 in #3336
- Fix/minimax reasoning split by @lahuman in #3363
## New Contributors
- @dzydzydzy7 made their first contribution in #3161
- @Aisht669 made their first contribution in #3160
- @sohamb117 made their first contribution in #3186
- @Lbin91 made their first contribution in #3171
- @mcampo made their first contribution in #3238
- @sicnuyudidi made their first contribution in #3163
- @SamZhu19921116 made their first contribution in #3229
- @longle325 made their first contribution in #3289
- @hlgone made their first contribution in #3327
- @hussein1362 made their first contribution in #3299
- @MuataSr made their first contribution in #3340
- @lahuman made their first contribution in #3363
**Full Changelog**: v0.1.5.post1...v0.1.5.post2