# DeepTutor v1.3.9 Release Notes

Release Date: 2026.05.09
v1.3.9 builds on the v1.3.8 multi-user foundation with broader TutorBot
deployment options, safer provider routing for thinking models, and a smoother
web onboarding path. It adds Zulip and NVIDIA NIM support, improves startup
ergonomics, and folds in fixes for the main issues reported since the last
release.
## Highlights
### TutorBot Channel and Provider Expansion
- Zulip is now a TutorBot channel - bots can listen to private messages and
  stream topics, enforce `allow_from`, choose mention-only or open stream
  replies, and bridge Zulip's event queue into the async TutorBot bus.
- Math and files work better in Zulip - LaTeX is converted to Zulip-friendly
  KaTeX markup, upload/download calls use configurable retry behavior, and
  attachment filenames include upload-path digests to avoid collisions.
- Zulip topics keep conversations separated - stream topics now become part
  of the chat/session key, with a stable `(no topic)` fallback for empty topics.
- TutorBot supports NVIDIA NIM - `nvidia_nim` is available in TutorBot
  provider config and registry detection, including NIM's streaming behavior
  that omits unsupported `stream_options`.
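To make the Zulip gating rules concrete, here is a minimal sketch. The `allow_from` field and the mention-only vs. open stream policies come from the notes above; the `ZulipGate` class and its method names are hypothetical illustrations, not DeepTutor's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class ZulipGate:
    """Hypothetical sketch of TutorBot's Zulip message gating.

    allow_from: Zulip emails permitted to talk to the bot (empty = allow all).
    group_policy: "mention" replies in streams only when the bot is
    @-mentioned; "open" replies to every stream message.
    """
    bot_mention: str                       # e.g. "@**tutorbot**"
    allow_from: set = field(default_factory=set)
    group_policy: str = "mention"          # "mention" or "open"

    def should_reply(self, sender: str, content: str, is_private: bool) -> bool:
        # The sender allow-list applies to both private and stream messages.
        if self.allow_from and sender not in self.allow_from:
            return False
        if is_private:
            return True                    # private messages always reach the bot
        if self.group_policy == "open":
            return True
        # mention-only: reply in streams only when explicitly @-mentioned
        return self.bot_mention in content
```

With this shape, switching a bot from `mention` to `open` changes only the stream path; private-message handling and the allow-list stay identical.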
### Model and Runtime Reliability
- Configured context windows are respected - the safety ceiling is raised to
  1,000,000 tokens while the large-model fallback remains 65,536, so explicit
  128K-style model settings are no longer silently clamped.
- Qwen vision detection is fixed - Qwen VL models are treated as
  vision-capable across DashScope, OpenAI-compatible, and custom bindings.
- Minimal thinking mode is provider-safe - DeepSeek, DashScope, VolcEngine,
  BytePlus, and MiniMax no longer receive a rejected top-level
  `reasoning_effort=minimal`; DeepTutor sends the provider-specific disable
  signal instead.
- DeepSeek v4 costs are tracked - research token accounting includes
  `deepseek-v4-flash` and `deepseek-v4-pro` pricing entries.
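The context-window change above can be sketched as follows. The constants and function name are illustrative, not DeepTutor's actual code; only the 1,000,000-token ceiling and the 65,536-token fallback come from the notes.

```python
from typing import Optional

SAFETY_CEILING = 1_000_000       # raised ceiling for explicitly configured windows
LARGE_MODEL_FALLBACK = 65_536    # used when no window is configured

def resolve_context_window(configured: Optional[int]) -> int:
    """Illustrative sketch: honor explicit settings up to the safety ceiling."""
    if configured is None:
        return LARGE_MODEL_FALLBACK
    # Explicit 128K-style settings are no longer silently clamped to 65,536;
    # only the 1M safety ceiling still applies.
    return min(configured, SAFETY_CEILING)
```

Under this scheme a model configured for 131,072 tokens keeps its full window, while an unconfigured model still falls back to 65,536.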
### Web and CLI Polish
- `deeptutor start` launches the full web stack - the CLI now delegates to
  `scripts/start_web.py` so backend and frontend can be started from one
  command, and launcher failures propagate through the CLI exit code.
- Sidebar onboarding is clearer - primary navigation icons now expose
  scoped, localized tooltips with descriptions and keyboard focus support.
- Multi-line user messages stay readable - chat message rendering preserves
  Shift+Enter line breaks, fixing code blocks and structured prompts that were
  previously collapsed into one line.
- Assigned resources are easier to understand - model-selection summaries
  and read-only knowledge-base actions now present clearer labels for
  non-admin, grant-scoped sessions.
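The `deeptutor start` delegation can be sketched like this. Only the `scripts/start_web.py` path comes from the notes; the helper below is a hypothetical illustration of running the launcher as a subprocess and returning its exit code unchanged.

```python
import subprocess
import sys

def run_launcher(script: str = "scripts/start_web.py") -> int:
    """Hypothetical sketch: delegate to the web launcher and propagate
    its exit code back to the caller."""
    result = subprocess.run([sys.executable, script])
    return result.returncode

def main() -> None:
    # A non-zero launcher exit becomes a non-zero CLI exit, so process
    # supervisors and shell scripts can detect startup failures.
    sys.exit(run_launcher())
```

Because the launcher runs as a child process of the same interpreter, its exit code reaches `sys.exit` untouched, which is the "launcher failures propagate through the CLI exit code" behavior described above.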
### Multi-User and Session Store Parity
- Assigned model options match the selector contract - non-admin LLM choices
  now return profile names, model names, labels, and active/default metadata in
  the same shape expected by the web model selector.
- PocketBase sessions support more chat flows - message metadata can be
  persisted, last-message lookup is available, and message deletion works with
  PocketBase string IDs as well as SQLite integer IDs.
- Regenerate remains storage-neutral - turn retry logic can remove the last
  assistant message without assuming the backing session store uses integer
  primary keys.
## Tests
- Added Zulip channel coverage for config parsing, permission checks, duplicate
  filtering, mentions, stream topic scoping, attachment extraction, retry
  behavior, LaTeX conversion, typing status, sending, uploads, and startup
  failures.
- Added TutorBot NVIDIA NIM provider tests for registry detection, schema
  acceptance, and streaming request compatibility.
- Added LLM regression tests for Qwen vision capability, explicit
  context-window budgets, and minimal-thinking provider kwargs.
- Added CLI coverage so `deeptutor start` propagates the launcher exit code.
- Added research token-pricing coverage for the DeepSeek v4 model entries.
## Upgrade Notes
- Install or refresh the `.[tutorbot]` extra, or `requirements/tutorbot.txt`, to
  include the new `zulip>=0.8.0,<1.0.0` dependency before enabling Zulip bots.
- Configure Zulip bots with `site`, `email`, `apiKey`, `allowFrom`, and
  `groupPolicy`; use `mention` for safer stream deployments and `open` only
  when every stream message should reach the bot.
- If you use `LLM_REASONING_EFFORT=minimal` with DeepSeek, DashScope,
  VolcEngine, BytePlus, or MiniMax, keep the setting as-is; v1.3.9 translates it
  to the correct provider-specific disable payload.
- Large configured context windows may now be honored instead of capped at
  65,536 tokens, so verify provider limits and expected prompt-cost behavior.
- Optional PocketBase deployments should ensure the `messages` collection has a
  `metadata_json` JSON field before relying on regenerate/session metadata
  parity.
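A Zulip bot entry using the fields listed above might look like the following, shown as a Python dict purely for illustration; the exact config file format and nesting are not specified in these notes, and every value here is a placeholder.

```python
# Hypothetical TutorBot Zulip channel config. Field names come from the
# upgrade notes above; values are placeholders, not working credentials.
zulip_bot = {
    "site": "https://chat.example.com",        # Zulip server URL
    "email": "tutorbot-bot@chat.example.com",  # the bot's Zulip email
    "apiKey": "YOUR_ZULIP_API_KEY",            # from the bot's settings page
    "allowFrom": ["alice@example.com"],        # empty list = allow everyone
    "groupPolicy": "mention",                  # "mention" (safer) or "open"
}
```

Per the notes, prefer `"mention"` for stream deployments and reserve `"open"` for streams where every message really should reach the bot.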
## What's Changed
- fix: raise context_window ceiling and add qwen vision support by @wedone in #442
- fix: add deepseek-v4-flash and deepseek-v4-pro to model pricing table by @Starfie1d1272 in #447
- fix(llm): stop sending reasoning_effort=minimal as top-level param to providers that reject it by @Starfie1d1272 in #453
- feat: add deeptutor start command to launch backend and frontend together by @Starfie1d1272 in #445
- fix(web): preserve newlines in user chat messages by @kagura-agent in #449
- feat(tutorbot): add Zulip channel support by @wedone in #452
- feat: tooltips for sidebar by @philliplagoc in #457
- fix: add TutorBot NVIDIA NIM provider support by @Bortlesboat in #455
## New Contributors
- @philliplagoc made their first contribution in #457
- @Bortlesboat made their first contribution in #455
**Full Changelog**: v1.3.8...v1.3.9