🔧 DeepTutor v0.4.1 Release Notes
Release Date: 2026.01.09
A maintenance release focused on LLM Provider system optimization, Question Generation robustness, and Docker deployment fixes.
✨ Highlights
🚀 LLM Provider System Overhaul
Completely redesigned LLM provider management with persistent configuration:
Three deployment modes, selected via the `LLM_MODE` env var (see the sketch below):

| Mode | Description |
|---|---|
| `hybrid` (default) | Use active provider if available, else env config |
| `api` | Cloud API providers only (OpenAI, Anthropic, etc.) |
| `local` | Local/self-hosted only (Ollama, LM Studio, etc.) |
Provider Presets for quick setup:
```python
# API Providers
API_PROVIDER_PRESETS = {
    "openai": {"base_url": "https://api.openai.com/v1", "requires_key": True},
    "anthropic": {"base_url": "https://api.anthropic.com/v1", "requires_key": True},
    "deepseek": {"base_url": "https://api.deepseek.com", "requires_key": True},
    "openrouter": {"base_url": "https://openrouter.ai/api/v1", "requires_key": True},
}

# Local Providers
LOCAL_PROVIDER_PRESETS = {
    "ollama": {"base_url": "http://localhost:11434/v1", "requires_key": False},
    "lm_studio": {"base_url": "http://localhost:1234/v1", "requires_key": False},
    "vllm": {"base_url": "http://localhost:8000/v1", "requires_key": False},
    "llama_cpp": {"base_url": "http://localhost:8080/v1", "requires_key": False},
}
```

New API Endpoints:
- `GET /api/llm-providers/mode/` - Get current LLM mode info
- `GET /api/llm-providers/presets/` - Get provider presets
- `POST /api/llm-providers/test/` - Test provider connection
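A minimal sketch of exercising these endpoints with the `requests` library; the backend address and the test payload fields (`name`, `base_url`) are illustrative assumptions, not a documented schema:

```python
import requests

BASE = "http://localhost:3000"  # assumed DeepTutor backend address

# Get current LLM mode info
mode_info = requests.get(f"{BASE}/api/llm-providers/mode/").json()

# Get the built-in provider presets
presets = requests.get(f"{BASE}/api/llm-providers/presets/").json()

# Test a provider connection, e.g. against a local Ollama instance
result = requests.post(
    f"{BASE}/api/llm-providers/test/",
    json={"name": "ollama", "base_url": "http://localhost:11434/v1"},
)
print(mode_info, presets, result.json())
```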
🛡️ Question Generation Robustness (PR #81)
Enhanced JSON parsing for LLM responses (a sketch of the extraction helper follows this list):
- Added `_extract_json_from_markdown()` to handle responses wrapped in ```` ```json ... ``` ```` fences
- Comprehensive error handling with detailed logging
- Graceful fallbacks when the LLM returns invalid JSON
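A minimal sketch of what the extraction helper can look like, assuming a regex-based approach to strip the markdown fence; the actual implementation merged in PR #81 may differ:

```python
import json
import re

def _extract_json_from_markdown(text: str):
    """Extract a JSON payload from a fenced ```json ... ``` response.

    Falls back to parsing the raw text, and returns None on invalid
    JSON instead of raising (the graceful fallback described above).
    """
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    candidate = match.group(1) if match else text
    try:
        return json.loads(candidate)
    except json.JSONDecodeError:
        return None
```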
🐳 Docker Deployment Fixes
- Fixed frontend startup script for proper `NEXT_PUBLIC_API_BASE` injection
- Improved supervisor configuration for better service management
- Improved environment variable handling
🧹 Codebase Cleanup
Removed the `src/core` module; all functionality has migrated to `src/services`:

| Old Import | New Import |
|---|---|
| `from src.core.core import load_config_with_main` | `from src.services.config import load_config_with_main` |
| `from src.core.llm_factory import llm_complete` | `from src.services.llm import complete` |
| `from src.core.prompt_manager import get_prompt_manager` | `from src.services.prompt import get_prompt_manager` |
| `from src.core.logging import get_logger` | `from src.logging import get_logger` |
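Put together, a module migrating off `src/core` updates its imports like this (names taken directly from the table above; note that `llm_complete` is also renamed to `complete`):

```python
# Before (v0.4.0): imports from the removed src/core module
# from src.core.core import load_config_with_main
# from src.core.llm_factory import llm_complete
# from src.core.prompt_manager import get_prompt_manager
# from src.core.logging import get_logger

# After (v0.4.1): equivalents under src/services (and src/logging)
from src.services.config import load_config_with_main
from src.services.llm import complete  # renamed from llm_complete
from src.services.prompt import get_prompt_manager
from src.logging import get_logger
```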
📦 What's Changed
- Merge pull request #81 from tusharkhatriofficial/fix/question-generation-json-parsing
- fix: Add comprehensive error handling and JSON parsing for question generation
- fix: llm providers, frontend
- fix: docker deployment
Full Changelog: v0.4.0...v0.4.1