github nyldn/claude-octopus v8.10.0


Fixed

Gemini CLI Headless Mode (P0 - Issue #25):

  • Fixed Gemini CLI launching in interactive REPL mode instead of headless mode
  • Added -p "" flag for reliable headless execution (prompt via stdin)
  • Added -o text flag for clean text output (no ANSI escapes, no interactive UI)
  • Replaced deprecated -y flag with --approval-mode yolo
  • Removed invalid --pipe flag from pipe-mode option (flag doesn't exist in Gemini CLI v0.28+)

Stdin-Based Prompt Delivery (P0 - Issue #25):

  • Switched Gemini prompts from a positional argument to a stdin pipe in run_agent_sync() and spawn_agent()
  • Pattern: printf '%s' "$prompt" | gemini -p "" -o text --approval-mode yolo
  • Eliminates OS argument-length limits (~128KB on Linux, ~256KB on macOS)
  • Prevents prompts enlarged by injected skill/memory context from crashing agent spawns
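The stdin-based pattern can be sketched as a small wrapper. The function name and the override parameter are illustrative, not claude-octopus internals; only the flag set and the `printf | gemini` pipe come from the changelog.

```shell
run_gemini_headless() {
  local prompt="$1"
  local cli="${2:-gemini}"   # override point for dry runs (hypothetical)
  # The prompt travels over stdin, so OS argv-length limits never apply.
  printf '%s' "$prompt" | "$cli" -p "" -o text --approval-mode yolo
}

# Dry run: substitute `echo` for the real CLI to inspect the flags
# without invoking Gemini.
run_gemini_headless "Summarize this repo" echo
```

Because the prompt is never placed on the command line, injected skill/memory context can grow well past 128KB without hitting `E2BIG`.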

Context Budget Enforcement Ordering (P1 - Issue #25):

  • Moved enforce_context_budget() call in spawn_agent() to AFTER skill and memory context injection
  • Previously called before the injections, allowing the final prompt to exceed the configured budget

Changed

Gemini Sandbox Modes (P1):

  • Default changed from prompt-mode to headless (OCTOPUS_GEMINI_SANDBOX)
  • auto-accept kept as backward-compatible alias for headless
  • prompt-mode kept as backward-compatible alias for interactive
  • Removed broken pipe-mode (referenced non-existent --pipe flag)
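The alias handling above might look something like this sketch (the function name is assumed; only the mode names and the new default come from the changelog):

```shell
normalize_gemini_sandbox() {
  case "${1:-headless}" in          # unset OCTOPUS_GEMINI_SANDBOX => headless
    auto-accept|headless)    printf 'headless' ;;
    prompt-mode|interactive) printf 'interactive' ;;
    # Anything else (including the removed pipe-mode) falls back to the
    # new default rather than erroring out.
    *)                       printf 'headless' ;;
  esac
}
```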

Debate Skill (P1):

  • Updated direct Gemini invocation in skill-debate.md to use stdin-based prompt delivery
  • Changed from gemini -y "${QUESTION}" to printf '%s' "${QUESTION}" | gemini -p "" -o text --approval-mode yolo

[8.9.0] - 2026-02-13

Added

Contextual Codex Model Routing (P0):

  • select_codex_model_for_context() function - automatically selects the best Codex model based on workflow phase, task type, and user config
  • get_codex_agent_for_phase() helper - maps phases to appropriate codex-* agent types
  • Per-phase model routing in providers.json phase_routing section
  • 5-tier model precedence: env var > task hints > phase routing > config defaults > hard-coded default
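The 5-tier precedence reduces to "first non-empty source wins". A minimal sketch, with a hypothetical function name and the hard-coded default assumed to be gpt-5.3-codex:

```shell
select_codex_model_sketch() {
  # Tiers, highest priority first: env var, task hint, phase routing,
  # config default. Empty string means "not set at this tier".
  for candidate in "$1" "$2" "$3" "$4"; do
    if [ -n "$candidate" ]; then
      printf '%s' "$candidate"
      return
    fi
  done
  # Tier 5: hard-coded last resort (assumed value).
  printf '%s' "gpt-5.3-codex"
}
```

For example, an explicit env var beats a phase-routing entry, and an empty config falls through to the hard-coded default.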

New Agent Types (P0):

  • codex-spark agent type - routes to GPT-5.3-Codex-Spark (1000+ tok/s, 15x faster, 128K context)
  • codex-reasoning agent type - routes to o3 (deep reasoning, 200K context)
  • codex-large-context agent type - routes to gpt-4.1 (1M context window)
  • All new types integrated into get_agent_command(), get_agent_command_array(), AVAILABLE_AGENTS, is_agent_available_v2(), get_fallback_agent()

New Agent Personas (P1):

  • codebase-analyst - large-context agent using gpt-4.1 for analyzing entire codebases
  • reasoning-analyst - deep reasoning agent using o3 for complex trade-off analysis

Enhanced Model Support (P0):

  • GPT-5.3-Codex-Spark pricing and model entry
  • o3 and o4-mini reasoning model entries
  • gpt-4.1 and gpt-4.1-mini large-context model entries (1M token window)
  • gpt-5.1 and gpt-5-codex legacy model entries

Changed

Updated API Pricing (P0):

  • Corrected gpt-5.3-codex to $1.75/$14.00 per MTok (was $4.00/$16.00)
  • Corrected gpt-5.2-codex to $1.75/$14.00 per MTok (was $2.00/$10.00)
  • Corrected gpt-5.1-codex-mini to $0.30/$1.25 per MTok (was $0.50/$2.00)
  • Added 6 new model price entries (spark, o3, o4-mini, gpt-4.1, gpt-4.1-mini, gpt-5.1)

Agent Config v2.0 (P1):

  • agents/config.yaml upgraded to version 2.0
  • Added phase_model_routing section with per-phase Codex model defaults
  • Added fallback_cli field for graceful agent degradation
  • Code reviewer switched to codex-spark for fast PR feedback (15x faster)
  • Performance engineer switched to codex-spark for rapid analysis
  • Security auditor stays on full gpt-5.3-codex for thorough analysis

Model Config Command v2.0 (P1):

  • model-config.md rewritten for v2.0 with comprehensive model catalog
  • Added phase <phase> <model> subcommand for per-phase routing
  • Added reset phases subcommand
  • Full Spark vs Codex comparison table
  • Pricing table for all 12+ supported models

Config Schema v2.0 (P1):

  • providers.json schema upgraded to v2.0
  • Added phase_routing section (9 phase-to-model mappings)
  • Added spark_model, mini_model, reasoning_model, large_context_model fields to codex provider
  • Backward compatible with v1.0 configs

Fallback Chains (P1):

  • Extended get_fallback_agent() with codex-spark → codex → gemini chain
  • Extended with codex-reasoning → codex → gemini chain
  • Extended with codex-large-context → codex → gemini chain
  • Spark falls back gracefully to standard Codex when unavailable
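The three chains share a shape: every new Codex variant degrades to standard Codex, which in turn degrades to Gemini. A sketch (the real get_fallback_agent() implementation may differ):

```shell
get_fallback_agent_sketch() {
  case "$1" in
    codex-spark|codex-reasoning|codex-large-context) printf 'codex' ;;
    codex)                                           printf 'gemini' ;;
    *)                                               : ;;  # no fallback
  esac
}
```

So if Spark is unavailable, a codex-spark task retries on standard Codex, and only if that is also unavailable does it cross providers to Gemini.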

Cost Awareness (P2):

  • Updated CLAUDE.md cost estimates to reflect Feb 2026 API pricing
  • Cost range updated from ~$0.02-0.10 to ~$0.01-0.15 per query

Tier Model Selection (P1):

  • get_tier_model() extended with codex-spark (always spark), codex-reasoning (o4-mini/o3), codex-large-context (gpt-4.1-mini/gpt-4.1) tiers
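The tier mapping can be read as an agent-type x tier table. In this sketch the tier labels "fast" and "full" are assumptions (the changelog only gives the model pairs), and the function name mirrors but is not the real get_tier_model():

```shell
get_tier_model_sketch() {
  local agent="$1" tier="$2"   # tier: "fast" or "full" (assumed labels)
  case "$agent:$tier" in
    codex-spark:*)            printf 'gpt-5.3-codex-spark' ;;  # always Spark
    codex-reasoning:fast)     printf 'o4-mini' ;;
    codex-reasoning:full)     printf 'o3' ;;
    codex-large-context:fast) printf 'gpt-4.1-mini' ;;
    codex-large-context:full) printf 'gpt-4.1' ;;
  esac
}
```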
