### Minor Changes
- #240 `ef782ff` Thanks @eyaltoledano! - feat(expand): Enhance `expand` and `expand-all` commands.
  - Integrate `task-complexity-report.json` to automatically determine the number of subtasks and to use tailored prompts for expansion based on prior analysis. You no longer need to copy-paste the recommended prompt: if it exists, it is used for you. You can just run `task-master update --id=[id of task] --research` and that prompt is applied automatically, with no extra prompt needed.
  - Change the default behavior to append new subtasks to existing ones; use the `--force` flag to clear existing subtasks before expanding. Appending is helpful when you want to add more subtasks to a task in batches from a given prompt; use `--force` when you want to start fresh with a task's subtasks.
- #240 `87d97bb` Thanks @eyaltoledano! - Adds support for the OpenRouter AI provider. Users can now configure models available through OpenRouter (an `OPENROUTER_API_KEY` is required) via the `task-master models` command, granting access to a wide range of additional LLMs.
  - IMPORTANT NOTE ABOUT OPENROUTER: Taskmaster relies on the AI SDK, which itself relies on tool use. Free models sometimes do not include tool use: for example, Gemini 2.5 Pro (free) failed via OpenRouter (no tool use) but worked fine on the paid version of the model. Custom model support for OpenRouter is considered experimental and likely will not be further improved for some time.
- #240 `1ab836f` Thanks @eyaltoledano! - Adds model management and a new configuration file, `.taskmasterconfig`, which houses the models used for the main, research, and fallback roles. Adds the `models` command and setter flags, plus a `--setup` flag with an interactive setup (which should be called during `init`). Running `models` without flags shows a table of active and available models, including SWE scores and token costs, which are manually entered in `supported-models.json`, the new place where supported models are defined. `config-manager.js` is the core module responsible for managing the new configuration.
- #240 `c8722b0` Thanks @eyaltoledano! - Adds custom model ID support for the Ollama and OpenRouter providers.
  - Adds `--ollama` and `--openrouter` flags to the `task-master models --set-<role>` command to set models for those providers outside of the supported models list.
  - Updated the `task-master models --setup` interactive mode with options to explicitly enter custom Ollama or OpenRouter model IDs.
  - Implemented live validation against the OpenRouter API (`/api/v1/models`) when setting a custom OpenRouter model ID (via flag or setup).
  - Refined logic to prioritize explicit provider flags/choices over internal model list lookups in case of ID conflicts.
  - Added warnings when setting custom/unvalidated models.
  - We obviously don't recommend going with a custom, unproven model. If you do and find performance is good, please let us know so we can add it to the list of supported models.
- #240 `2517bc1` Thanks @eyaltoledano! - Integrates OpenAI as a new AI provider.
  - Enhances the `models` command/tool to display API key status.
  - Implements a model-specific `maxTokens` override based on `supported-models.json` to protect you if you configure an incorrect max token value.
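For illustration, a per-model `maxTokens` override of this kind can be sketched as follows. This is a minimal, hypothetical sketch: the `effectiveMaxTokens` function, the model IDs, and the registry shape are assumptions for the example, not Taskmaster's actual implementation (which reads limits from `supported-models.json`).

```javascript
// Hypothetical registry of per-model limits (entry shape assumed for this sketch).
const modelLimits = {
  "example-model-small": { maxTokens: 8192 },
  "example-model-large": { maxTokens: 16384 },
};

// Cap a configured maxTokens at the model's known limit, so a typo like an
// extra zero cannot push requests past what the model actually accepts.
function effectiveMaxTokens(modelId, configuredMax) {
  const limit = modelLimits[modelId]?.maxTokens;
  if (limit === undefined) return configuredMax; // unknown model: trust the config
  return Math.min(configuredMax, limit);
}
```

The key design choice is that an unknown model falls back to the configured value rather than failing, so custom models keep working.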
- #240 `9a48278` Thanks @eyaltoledano! - Tweaks Perplexity AI calls for research mode.
  - Maxes out input tokens to get day-fresh information.
  - Forces temperature to 0.1 for highly deterministic output with no variations.
  - Adds a system prompt to further improve the output.
  - Correctly uses the maximum input tokens for Perplexity (8,719; we use 8,700).
  - Specifies a high degree of research across the web.
  - Specifies using information that is as fresh as today; this supports things like capturing brand-new announcements (e.g., new GPT models) and being able to query for them in research. 🔥
### Patch Changes
- #240 `842eaf7` Thanks @eyaltoledano! - Add support for Google Gemini models via the Vercel AI SDK integration.
- #240 `ed79d4f` Thanks @eyaltoledano! - Add xAI provider and Grok models support.
- #378 `ad89253` Thanks @eyaltoledano! - Better support for file paths on Windows, Linux, and WSL.
  - Standardizes handling of different path formats (URI-encoded, Windows, Linux, WSL).
  - Ensures tools receive a clean, absolute path suitable for the server OS.
  - Simplifies tool implementation by centralizing normalization logic.
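The kind of centralized normalization described above can be sketched roughly like this. The function name and the exact rules are illustrative assumptions, not Taskmaster's actual normalizer; it only shows the idea of funneling URI-encoded, Windows, and WSL forms into one clean shape.

```javascript
// Hypothetical sketch: accept a URI-encoded, Windows, or WSL-style path and
// return a clean path with unified separators.
function normalizeProjectPath(raw) {
  let p = decodeURIComponent(raw);                 // undo URI encoding (%20 etc.)
  p = p.replace(/^file:\/\//, "");                 // strip a file:// scheme if present
  const wsl = p.match(/^\/mnt\/([a-z])\/(.*)$/i);  // WSL form: /mnt/c/...
  if (wsl) p = `${wsl[1].toUpperCase()}:/${wsl[2]}`;
  return p.replace(/\\/g, "/");                    // unify path separators
}
```

Centralizing this in one helper means each tool can assume it receives an already-clean path.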
- #285 `2acba94` Thanks @neno-is-ooo! - Add integration for Roo Code.
- #378 `d63964a` Thanks @eyaltoledano! - Improved `update-subtask`.
  - It now has context about the parent task's details.
  - It also has context about the subtask before it and the subtask after it (if they exist).
  - It does not pass all subtasks, to stay token-efficient.
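The context-gathering described above can be sketched as follows. This is a hypothetical illustration (function name and task shape are assumed): pass the parent's details plus only the two neighboring subtasks to the prompt, instead of the full subtask list.

```javascript
// Hypothetical sketch: build a token-efficient context for updating one
// subtask, using only the parent's details and the adjacent subtasks.
function buildUpdateContext(parent, subtaskId) {
  const i = parent.subtasks.findIndex((s) => s.id === subtaskId);
  return {
    parentDetails: parent.details,
    previous: i > 0 ? parent.subtasks[i - 1] : null,                          // subtask before, if any
    next: i >= 0 && i < parent.subtasks.length - 1 ? parent.subtasks[i + 1] : null, // subtask after, if any
  };
}
```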
- #240 `5f504fa` Thanks @eyaltoledano! - Improve and adjust the `init` command for robustness and updated dependencies.
  - Update initialization dependencies: ensure newly initialized projects (`task-master init`) include all required AI SDK dependencies (`@ai-sdk/*`, `ai`, provider wrappers) in their `package.json` for out-of-the-box AI feature compatibility. Remove unnecessary dependencies (e.g., `uuid`) from the init template.
  - Silence `npm install` during `init`: prevent `npm install` output from interfering with non-interactive/MCP initialization by suppressing its stdio in silent mode.
  - Improve conditional model setup: reliably skip interactive `models --setup` during non-interactive `init` runs (e.g., `init -y` or MCP) by checking `isSilentMode()` instead of passing flags.
  - Refactor `init.js`: remove internal `isInteractive` flag logic.
  - Update `initInstructions`: tweak the "Getting Started" text displayed after `init`.
  - Fix MCP server launch: update the `.cursor/mcp.json` template to use `node ./mcp-server/server.js` instead of `npx task-master-mcp`.
  - Update default model: change the default main model in the `.taskmasterconfig` template.
- #240 `96aeeff` Thanks @eyaltoledano! - Fixes an issue where `add-task` did not use the manually defined properties and still needlessly hit the AI endpoint.
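The shape of this fix can be sketched as follows. This is a hypothetical illustration (the function, field names, and the AI callback are assumed, not Taskmaster's actual code): when the caller supplies the task fields manually, build the task directly and skip the AI call entirely.

```javascript
// Hypothetical sketch: only fall back to AI generation when the manual
// fields are missing, so fully-specified tasks never hit the AI endpoint.
function addTask(manual, generateWithAI) {
  if (manual && manual.title && manual.description) {
    return { ...manual, source: "manual" }; // no AI round-trip needed
  }
  return generateWithAI(manual); // incomplete input: let the AI fill it in
}
```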
- #240 `5aea93d` Thanks @eyaltoledano! - Fixes an issue where `remove-subtask` with comma-separated tasks/subtasks only deleted the first ID instead of all of them. Closes #140.
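A minimal sketch of the parsing this fix implies (hypothetical; the helper name is assumed): split the comma-separated argument into individual IDs so every listed subtask gets removed, not just the first.

```javascript
// Hypothetical sketch: turn "5.1,5.2, 5.3" into ["5.1", "5.2", "5.3"],
// tolerating stray whitespace and empty segments.
function parseSubtaskIds(arg) {
  return arg
    .split(",")
    .map((s) => s.trim())
    .filter(Boolean);
}
```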
- #240 `66ac9ab` Thanks @eyaltoledano! - Improves the `next` command to be subtask-aware.
  - The logic for determining the "next task" (the `findNextTask` function, used by `task-master next` and the `next_task` MCP tool) has been significantly improved. Previously, it only considered top-level tasks, making its recommendation less useful when a parent task containing subtasks was already marked 'in-progress'.
  - The updated logic now prioritizes finding the next available subtask within any 'in-progress' parent task, considering subtask dependencies and priority.
  - If no suitable subtask is found within active parent tasks, it falls back to recommending the next eligible top-level task based on the original criteria (status, dependencies, priority).
  - This change makes the `next` command much more relevant and helpful during the implementation phase of complex tasks.
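The ordering described above can be sketched roughly like this. This is a simplified, hypothetical version: the real `findNextTask` also weighs dependencies and priority, which are omitted here to keep the subtask-first fallback visible.

```javascript
// Hypothetical sketch of subtask-aware "next" selection:
// 1) prefer a pending subtask inside any in-progress parent task;
// 2) otherwise fall back to the next pending top-level task.
function findNextTaskSketch(tasks) {
  for (const task of tasks) {
    if (task.status === "in-progress" && Array.isArray(task.subtasks)) {
      const sub = task.subtasks.find((s) => s.status === "pending");
      if (sub) return { parentId: task.id, id: sub.id };
    }
  }
  const top = tasks.find((t) => t.status === "pending");
  return top ? { id: top.id } : null; // null when nothing is workable
}
```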
- #240 `ca7b045` Thanks @eyaltoledano! - Add a `--status` flag to the `show` command to filter displayed subtasks.
- #328 `5a2371b` Thanks @knoxgraeme! - Fix `--task` to `--num-tasks` in the UI and related tests. Closes #324.
- #240 `6cb213e` Thanks @eyaltoledano! - Adds a `models` CLI and MCP command to get the current model configuration and available models, and to set the main/research/fallback models.
  - In the CLI, `task-master models` shows the current model configuration. The `--setup` flag launches an interactive setup that lets you easily select the models you want to use for each of the three roles; press `q` during the interactive setup to cancel it.
  - In the MCP, responses are simplified into a RESTful format (instead of the full CLI output). The agent can use the `models` tool with different arguments, including `listAvailableModels` to get available models. Run without arguments, it returns the current configuration. Arguments are available to set the model for each of the three roles. This lets you manage Taskmaster AI providers and models directly from the CLI, the MCP, or both.
  - Updated the CLI help menu shown when you run `task-master` to include missing commands and `.taskmasterconfig` information.
  - Adds a `--research` flag to `add-task` so you can query Perplexity right from the add-task flow, rather than having to add a task and then update it.