### Minor Changes
- #1360 819d5e1 Thanks @Crunchyman-ralph! - Add support for custom OpenAI-compatible providers, allowing you to connect Task Master to any service that implements the OpenAI API specification.

  How to use:

  Configure your custom provider with the `models` command:

  ```bash
  task-master models --set-main <your-model-id> --openai-compatible --baseURL <your-api-endpoint>
  ```

  Example:

  ```bash
  task-master models --set-main llama-3-70b --openai-compatible --baseURL http://localhost:8000/v1

  # Or for an interactive view
  task-master models --setup
  ```

  Set your API key (if required by your provider) in `mcp.json`, your `.env` file, or in your env exports:

  ```bash
  OPENAI_COMPATIBLE_API_KEY="your-key-here"
  ```

  This gives you the flexibility to use virtually any LLM service with Task Master, whether it's self-hosted, a specialized provider, or a custom inference server.
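  For instance, Ollama exposes an OpenAI-compatible endpoint at `http://localhost:11434/v1`, so a local setup can be wired up the same way. A minimal sketch, assuming Ollama is running and `qwen2.5-coder` is a model you have pulled (both the model name and the key value are illustrative):

  ```bash
  # Ollama does not check the API key, but setting one keeps the provider config uniform
  export OPENAI_COMPATIBLE_API_KEY="ollama"

  task-master models --set-main qwen2.5-coder \
    --openai-compatible \
    --baseURL http://localhost:11434/v1
  ```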
- #1360 819d5e1 Thanks @Crunchyman-ralph! - Add native support for Z.ai (GLM models), giving you access to high-performance Chinese models, including glm-4.6 with massive 200K+ token context windows at competitive pricing.

  How to use:

  - Get your Z.ai API key from https://z.ai/manage-apikey/apikey-list
  - Set your API key in `.env`, `mcp.json`, or in your env exports:

    ```bash
    ZAI_API_KEY="your-key-here"
    ```

  - Configure Task Master to use GLM models:

    ```bash
    task-master models --set-main glm-4.6

    # Or for an interactive view
    task-master models --setup
    ```

  Available models:

  - `glm-4.6` - Latest model with 200K+ context, excellent for complex projects
  - `glm-4.5` - Previous generation, still highly capable
  - Additional GLM variants for different use cases: `glm-4.5-air`, `glm-4.5v`

  GLM models offer strong performance on software engineering tasks, with particularly good results on code generation and technical reasoning. The large context window makes them ideal for analyzing entire codebases or working with extensive documentation.
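  Once configured, running `models` with no flags should print the active setup, so you can confirm the switch took effect; a minimal sketch (the key value is a placeholder):

  ```bash
  export ZAI_API_KEY="your-key-here"

  task-master models --set-main glm-4.6
  task-master models   # confirm glm-4.6 is now the main model
  ```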
- #1360 819d5e1 Thanks @Crunchyman-ralph! - Add LM Studio integration, enabling you to run Task Master completely offline with local models at zero API cost.

  How to use:

  - Download and install LM Studio
  - Launch LM Studio and download a model (e.g., Llama 3.2, Mistral, Qwen)
  - Optional: add an API key to `mcp.json` or `.env` (`LMSTUDIO_API_KEY`)
  - Go to the "Local Server" tab and click "Start Server"
  - Configure Task Master:

    ```bash
    task-master models --set-main <model-name> --lmstudio
    ```

    Example:

    ```bash
    task-master models --set-main llama-3.2-3b --lmstudio
    ```
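  Before pointing Task Master at it, you can sanity-check that the local server is actually listening. A small sketch, assuming LM Studio's default port of 1234 (adjust if you changed it in the Local Server tab):

  ```bash
  # Should return a JSON list of the models LM Studio has loaded
  curl http://localhost:1234/v1/models
  ```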
### Patch Changes
- #1362 3e70edf Thanks @Crunchyman-ralph! - Improve the parse-PRD schema for better LLM model compatibility. Fixes #1353
- #1358 0c639bd Thanks @Crunchyman-ralph! - Fix subtask ID display to show full compound notation.

  When displaying a subtask via `tm show 104.1`, the header and properties table showed only the subtask's local ID (e.g., "1") instead of the full compound ID (e.g., "104.1"). The CLI now preserves and displays the original requested task ID throughout the display chain, ensuring subtasks are clearly identified with their parent context. Also improved TypeScript typing by using discriminated unions for Task/Subtask returns from `tasks.get()`, eliminating unsafe type coercions.
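  A quick way to see the fix in action (`tm` being the shorthand used above):

  ```bash
  # The header and properties table now show the compound ID "104.1",
  # not just the subtask's local ID "1"
  tm show 104.1
  ```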
- #1339 3b09b5d Thanks @Crunchyman-ralph! - Fixed the MCP server sometimes crashing when reaching the commit step of autopilot; autopilot now persists state consistently through the whole flow
- #1326 9d5812b Thanks @SharifMrCreed! - Improve Gemini CLI integration.

  When initializing Task Master with the `gemini` profile, you now get properly configured context files tailored specifically for Gemini CLI, including MCP configuration and Gemini-specific features like file references, session management, and headless mode.
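  For example, selecting the profile when setting up a project (the `--rules gemini` flag is an assumption here; the interactive `task-master init` prompt also lets you pick profiles, so check `task-master init --help` for the exact option):

  ```bash
  # Initialize a project with Gemini CLI context files
  # (flag name is an assumption; verify with `task-master init --help`)
  task-master init --rules gemini
  ```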