Use any AI provider — 15+ API keys supported, zero-key CLI tools, local models, or let Hamster handle everything. Taskmaster adapts to how you work.
AI Roles
Assign different models to different roles for optimal cost and performance. Main handles complex reasoning, Research queries live sources, and Fallback ensures reliability during rate limits.
Main: Primary model for task generation, updates, and analysis. Best for complex reasoning.
--set-main claude-sonnet-4
Research: Used with the --research flag. Best for real-time web search and fresh information.
--set-research sonar-pro
Fallback: Automatic failover if the main provider fails. Ensures reliability during rate limits.
--set-fallback gpt-4o-mini
Providers
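Taken together, a typical setup assigns all three roles in sequence. This is a sketch using the model IDs from the examples above; substitute whichever models your providers expose:

```shell
# Assign a model to each role (model IDs taken from the examples above)
task-master models --set-main claude-sonnet-4
task-master models --set-research sonar-pro
task-master models --set-fallback gpt-4o-mini

# Running the command with no flags should print the current assignments
task-master models
```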
Store keys in .env or configure via tm models --setup. Use the provider that works best for you — Anthropic, OpenAI, Google, Perplexity, and more.
Claude 4.5 Opus, Sonnet, Haiku
GPT-5, GPT-4.5, o3, o4-mini
Gemini 3 Pro, 2.5 Pro/Flash
Sonar Pro, Sonar Reasoning
Llama 4, Kimi K2, Mixtral
Grok 3, Grok 4
50+ models from all providers
Mistral Large, Codestral
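For the .env route, a minimal file might look like the sketch below. Set only the keys for providers you actually use; the variable names shown are the conventional ones for these providers and may differ in your setup:

```shell
# .env — API keys picked up at startup (set only the ones you need)
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
PERPLEXITY_API_KEY=pplx-...
GOOGLE_API_KEY=...
```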
Zero Config
Already paying for Claude Max, ChatGPT Plus, or Gemini Code Assist? Use those subscriptions directly — no separate API key needed.
Claude Code: Use your existing Claude subscription via OAuth.
task-master models --set-main sonnet --claude-code
Gemini CLI: Leverage Google OAuth with free or paid tiers.
task-master models --set-main gemini-3-pro --gemini-cli
Codex CLI: Access GPT-5 through your ChatGPT subscription.
task-master models --set-main gpt-5-codex --codex-cli
Grok CLI: Use Grok models via X Premium subscription.
task-master models --set-main grok-4-latest --grok-cli
Privacy
Use Ollama or LM Studio to run models entirely on your machine. No API key, no internet required, no data leaves your device. Enterprise teams can use AWS Bedrock, Azure OpenAI, or Google Vertex AI.
qwen3:32b: Excellent for code generation
llama3.3:latest: General purpose, fast
devstral:latest: Optimized for development
codellama:34b: Code-specialized
Key capabilities
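Wiring in a local model is a two-step flow: pull it with Ollama, then point a role at it. A sketch, assuming a provider flag in the same pattern as --claude-code above (check tm models --setup for the exact option in your version):

```shell
# Pull a local model, then assign it to the main role
ollama pull qwen3:32b
task-master models --set-main qwen3:32b --ollama
```

No API key is involved; Taskmaster talks to the local Ollama server instead of a hosted provider.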
Works with
MCP Server — Configure which provider your MCP server uses.
Research — Research role powers Perplexity queries.
Hamster Studio — Or let Hamster handle AI configuration entirely.
Run tm models --setup to get started.