github cloudposse/atmos v1.209.0

fix: prevent shell injection in GitHub Actions workflows @[copilot-swe-agent[bot]](https://github.com/apps/copilot-swe-agent) (#2160)

GitHub Actions expands `${{ ... }}` expressions before the shell runs, making direct interpolation in `run:` blocks a script injection vector. User-controlled values (file paths from `git diff`, PR metadata) embedded inline can break out of shell context.

Changes

  • changelog-check.yml — Move all ${{ ... }} expressions out of run: scripts into env: blocks. Critical case: BLOG_FILES was set by directly interpolating steps.check-blog.outputs.blog_files (user-controlled filenames from git diff); a file named website/blog/"; EVIL;".mdx would execute arbitrary commands.
  • vhs.yaml — Same pattern: steps.tapes.outputs.files was interpolated directly into a shell pipeline (echo -n ${{ steps.tapes.outputs.files }} | jq ...). Moved to env: + quoted "$TAPE_FILES". Also fixed IS_LABELED, PR_NUMBER, GH_REPO, and COMMIT_HASH.
  • website-preview-destroy.yml — Replace deprecated ##[set-output name=...] with >> $GITHUB_OUTPUT.

Before / After

# Before — unsafe: GitHub expands this before the shell sees it
run: |
  BLOG_FILES="${{ steps.check-blog.outputs.blog_files }}"

# After — safe: value arrives as an environment variable, never parsed by shell
env:
  BLOG_FILES: ${{ steps.check-blog.outputs.blog_files }}
run: |
  echo "$BLOG_FILES"


feat: implement dependency order execution for terraform --all flag @osterman (#1516)

what

  • Implement dependency-ordered execution for the `atmos terraform apply --all` command
  • Create a reusable dependency graph package that both the `--all` and `--affected` flags can use
  • Ensure Terraform components are always processed in the correct order based on their dependencies

why

  • The --all flag was processing components without respecting dependency order, which could lead to deployment failures
  • The dependency logic was tightly coupled with the --affected functionality and needed to be generalized
  • Users need a reliable way to deploy all components while respecting inter-component dependencies

Key Features

  • ✅ Dependency order execution for --all flag
  • ✅ Circular dependency detection with clear error messages
  • ✅ Support for cross-stack dependencies
  • ✅ Filtering by stack, components, and YQ queries
  • ✅ Skipping of abstract and disabled components
  • ✅ Dry-run mode support
  • ✅ Reusable graph logic for both --all and --affected

Implementation Details

New Dependency Graph Package (pkg/dependency/)

Created a generalized, reusable dependency graph implementation:

  • graph.go - Core graph structure and operations
  • builder.go - Builder pattern for safe graph construction
  • sort.go - Topological sort using Kahn's algorithm
  • filter.go - Graph filtering operations
  • types.go - Core types and interfaces

Terraform Execution Updates

  • terraform_all.go - New ExecuteTerraformAll function for --all flag
  • terraform_affected_graph.go - Refactored ExecuteTerraformAffected to use the graph
  • terraform_executor.go - Shared execution logic for processing nodes

Testing

  • Comprehensive unit tests for the dependency graph package
  • Integration tests for terraform execution
  • Test fixtures with complex dependency scenarios
  • All existing tests continue to pass

Example

Given this configuration:

components:
  terraform:
    vpc:
      vars:
        cidr: "10.0.0.0/16"
    
    database:
      settings:
        depends_on:
          - component: vpc
    
    application:
      settings:
        depends_on:
          - component: database

Running atmos terraform apply --all will execute in order:

  1. vpc (no dependencies)
  2. database (depends on vpc)
  3. application (depends on database)
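The PR notes that sort.go uses Kahn's algorithm for the topological sort. A minimal, self-contained sketch of that approach (not the actual pkg/dependency API, whose types and signatures may differ) reproducing the ordering above:

```go
package main

import "fmt"

// topoSort returns component names in dependency order using Kahn's
// algorithm, or an error when a circular dependency exists.
// deps maps each component to the components it depends on.
func topoSort(deps map[string][]string) ([]string, error) {
	indegree := map[string]int{}   // number of unsatisfied dependencies per node
	dependents := map[string][]string{} // reverse edges: dep -> nodes that need it
	for node, requires := range deps {
		if _, ok := indegree[node]; !ok {
			indegree[node] = 0
		}
		for _, dep := range requires {
			if _, ok := indegree[dep]; !ok {
				indegree[dep] = 0
			}
			indegree[node]++
			dependents[dep] = append(dependents[dep], node)
		}
	}
	// Start with nodes that have no dependencies.
	queue := []string{}
	for node, d := range indegree {
		if d == 0 {
			queue = append(queue, node)
		}
	}
	order := []string{}
	for len(queue) > 0 {
		n := queue[0]
		queue = queue[1:]
		order = append(order, n)
		for _, m := range dependents[n] {
			indegree[m]--
			if indegree[m] == 0 {
				queue = append(queue, m)
			}
		}
	}
	if len(order) != len(indegree) {
		return nil, fmt.Errorf("circular dependency detected")
	}
	return order, nil
}

func main() {
	order, err := topoSort(map[string][]string{
		"database":    {"vpc"},
		"application": {"database"},
	})
	fmt.Println(order, err) // [vpc database application] <nil>
}
```

Reversing the resulting order gives the destroy sequence mentioned in the CodeRabbit summary.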

Testing

  • Run unit tests: go test ./pkg/dependency/...
  • Run integration tests: go test ./internal/exec -run TestBuildTerraformDependencyGraph
  • Test with fixture: atmos terraform plan --all --dry-run in tests/fixtures/scenarios/terraform-apply-all-dependencies/

references

Summary by CodeRabbit

  • New Features
    • Terraform commands (--all, --plan, --destroy, affected) now run components in dependency (topological) order with support for cross-stack relationships, filtering, and reverse-order destroy.
  • Documentation
    • Added a PRD and blog post describing dependency-ordered execution, examples, and usage guidance.
  • Tests
    • Extensive unit and integration tests and fixtures added to validate graph construction, filtering, ordering, and execution scenarios.


Introducing Atmos AI @aknysh (#2030)

Summary

Atmos integrates AI directly into infrastructure management, providing intelligent assistance for understanding configurations, analyzing stacks, troubleshooting issues, and automating workflows. This PR introduces Atmos AI, MCP Server, and LSP integration with comprehensive documentation, CLI commands, and examples.

AI Assistant (atmos ai)

  • atmos ai chat — Interactive TUI chat with tool calling, sessions, skills, and multi-provider support
  • atmos ai ask — Single-question CLI for quick answers
  • atmos ai exec — Execute Atmos/shell commands via natural language prompts
  • atmos ai sessions — Manage persistent conversation sessions (list, clean, export, import)
  • atmos ai skill — Install/list/uninstall marketplace skills, refer to agentskills.io
  • 7 AI providers: Anthropic, OpenAI, Gemini, Grok, Ollama, AWS Bedrock, Azure OpenAI
  • 21+ agent skills following the SKILL.md/AGENTS.md open standards
  • Token caching for cost optimization across providers
  • Auto-compact sessions — AI-powered conversation summarization

MCP Server (atmos mcp start)

  • Expose Atmos tools to any MCP-compatible client (Claude Desktop, VS Code, Cursor, Windsurf, etc.)
  • 10 client configuration guides
  • stdio and HTTP transport support
  • 8+ available tools with permission controls

LSP Integration (atmos lsp start)

  • LSP Server — IDE integration with autocomplete, hover docs, and diagnostics for Atmos stack files
  • LSP Client — Connects to external language servers (yaml-ls, terraform-ls) for AI-powered file validation
  • Supports 11 editors (VS Code, Neovim, Zed, Helix, Sublime Text, Emacs, IntelliJ, Vim, Kate, Lapce, Nova)
  • stdio, TCP, and WebSocket transport modes

Documentation

  • Atmos AI Landing Page
  • CLI configuration reference pages for AI, providers, sessions, skills, tools, instructions, MCP, and LSP
  • CLI command reference pages for all new commands
  • Conceptual guides: AI overview, MCP server, Claude Code integration, troubleshooting
  • LSP server and client guides with editor-specific setup instructions
  • Blog posts: Introducing Atmos AI, Introducing Atmos LSP, AI Agent Skills
  • AI assistants setup guide (Claude Code, OpenAI Codex, Gemini CLI, Cursor, Windsurf, GitHub Copilot)
  • Updated roadmap with AI/MCP/LSP milestones

Examples

  • examples/ai/ — Complete AI configuration example with multi-provider setup, sessions, tools, and custom skills

See It in Action

$ atmos ai ask "what stacks and components do we have in the infra?"

👽 Thinking...

  I'll list all the available stacks and components in your infrastructure.

  Based on the infrastructure configuration, here are the stacks and components:

  ## Stacks (4 total)

  1. **ue1-network** - Network stack in us-east-1
  2. **ue1-prod** - Production stack in us-east-1
  3. **uw2-network** - Network stack in us-west-2
  4. **uw2-prod** - Production stack in us-west-2

  ## Components by Stack

  ### ue1-network

  • tgw/attachment - Transit Gateway attachment
  • tgw/hub - Transit Gateway hub
  • vpc - Virtual Private Cloud

  ### ue1-prod

  • tgw/attachment - Transit Gateway attachment
  • vpc - Virtual Private Cloud

  ### uw2-network

  • tgw/attachment - Transit Gateway attachment
  • tgw/cross-region-hub-connector - Cross-region Transit Gateway connector
  • vpc - Virtual Private Cloud

  ### uw2-prod

  • tgw/attachment - Transit Gateway attachment
  • vpc - Virtual Private Cloud

  ## Summary

  Your infrastructure is organized across two AWS regions (us-east-1 and us-west-2),
  with dedicated network and production stacks in each region. The network stacks
  contain Transit Gateway hubs for network connectivity, while the production stacks
  use Transit Gateway attachments to connect to the network infrastructure. All stacks
  include VPC components for network isolation.

feat: Add source cache TTL for JIT-vendored components @osterman (#2138)

Summary

Implement a ttl (time-to-live) field on component source configuration to control how long cached JIT-vendored sources are reused before re-pulling from the remote. This solves the problem of stale caches when using floating refs (branches) without requiring manual --force flags.

Key Feature: Declarative cache expiration policy. Set ttl: 0s for active development (always fresh), ttl: 1h for team collaboration (hourly refresh), or omit TTL for infinite cache (backward compatible).

Problem Solved

When JIT-vendored components use floating refs like version: "main", Atmos skips re-pulling because the version string in metadata hasn't changed—it's still "main" even though upstream content has. Developers must manually delete .workdir/ or run source pull --force.

Solution

Add optional ttl field to source configuration. When set, the source provisioner compares the workdir's update timestamp against the TTL. If expired, the source is re-pulled automatically.

# Per-component override (stack manifest)
components:
  terraform:
    my-module:
      source:
        uri: git::https://github.com/org/repo.git
        version: main
        ttl: "0s"  # Always re-pull

# Global default (atmos.yaml)
components:
  terraform:
    source:
      ttl: "1h"  # Re-pull if older than 1 hour

Changes

  • ✅ Add TTL field to VendorComponentSource schema
  • ✅ Add TerraformSourceSettings struct with global TTL default
  • ✅ Parse ttl from source maps in extract.go
  • ✅ Implement TTL expiration check in needsProvisioning()
  • ✅ Support per-component override and global defaults
  • ✅ Handle zero TTL explicitly (always expires)
  • ✅ Comprehensive unit tests for all TTL behaviors

Documentation

  • ✅ Updated terraform, helmfile, and packer source command docs with ttl field
  • ✅ Added "Cache TTL for Floating Refs" section to source-based versioning design pattern
  • ✅ Created PRD explaining problem, solution, and architecture (docs/prd/source-cache-ttl.md)
  • ✅ Created blog post with user-facing guidance (website/blog/2026-03-03-source-cache-ttl.mdx)
  • ✅ Updated roadmap with shipped milestone

Test Plan

  • Unit tests for TTL behavior: zero TTL, relative TTL with recent/old timestamps, no TTL
  • Integration tests for component sourcing with TTL
  • Tests for global TTL default merging
  • Code compiles: go build ./...
  • Tests pass: go test ./pkg/provisioner/source/...
  • Linting passes: make lint
  • Website builds: cd website && npm run build

Related

Fixes #2135

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Added source cache TTL for JIT‑vendored components. Configure per‑component or via a global default; "0s" forces re‑pull, other durations (e.g., "1h", "7d") trigger automatic refresh, unset = indefinite cache. Global default merges with per‑component TTL (per‑component wins).
  • Documentation

    • New docs, examples, a blog post, and CLI doc updates demonstrating TTL usage for Terraform, Helmfile, and Packer and recommended workflows.
  • Tests

    • Added unit tests for TTL parsing, invalid TTL handling, default merging, and TTL-driven provisioning decisions.
feat: Per-target version overrides in vendor manifests @osterman (#2141)

what

Implement vendor targets as both strings and maps with optional version overrides, enabling multiple versions of the same component from a single source entry.

  • New AtmosVendorTarget type with custom YAML unmarshaling that accepts both string and map syntax
  • Per-target version re-resolution: When a target specifies its own version, the source URL template is re-resolved with that version
  • List vendor command support: Updated to show correct paths for per-target version overrides
  • JSON Schema update: Changed targets.items from single string type to oneOf supporting both syntaxes
  • 14 comprehensive tests: 9 unmarshal tests + 5 processTargets unit tests covering all scenarios
  • Blog post and roadmap: Announced feature and updated roadmap milestone

why

Vendor targets syntax was documented but never implemented—a hallucination from commit d549991. Previously, vendoring multiple versions of the same component required duplicating the entire source entry. Per-target overrides allow a single source definition to vendor multiple versions cleanly:

sources:
  - component: vpc
    source: "github.com/cloudposse/terraform-aws-vpc.git///?ref={{.Version}}"
    version: "2.1.0"
    targets:
      - "components/terraform/vpc"
      - path: "components/terraform/vpc/{{.Version}}"
        version: "3.0.0"

This is backward compatible—existing string-based targets continue working identically while new map syntax enables flexible per-target version management.

references

  • Documented syntax from: website/docs/design-patterns/version-management/vendoring-components.mdx lines 260-264
  • Follows Tasks/Task unmarshaling pattern from pkg/schema/task.go
  • All tests passing: schema unmarshal tests, processTargets unit tests, list vendor tests, vendor_utils tests

Summary by CodeRabbit

  • New Features

    • Per-target version overrides for vendor configs: targets can be strings or objects with path+optional version; per-target versions re-resolve source URLs, affect package naming, target path templates ({{.Version}}, {{.Component}}), and source locality classification.
  • Schema

    • Vendor targets schema now accepts either a string path or an object with path and optional version.
  • Documentation

    • Blog post and roadmap entry added with examples.
  • Tests

    • Extensive unit tests covering parsing, templating, and per-target resolution.
docs: Add PRD for browser-based auth in aws/user identity @Benbentwo (#1887)

what

Add Product Requirements Document for enhancing the aws/user identity with browser-based OAuth2 authentication as a fallback credential source.

  • Extends existing aws/user identity (NOT a new provider type)
  • Three-tier credential resolution: YAML → Keychain → Browser webflow
  • Zero-config authentication using AWS console credentials
  • Supports headless/remote mode for servers without browsers
  • Native SDK implementation (no AWS CLI dependency)

why

DEV-3829 tracks the implementation of browser-based authentication. This approach:

  • Provides seamless fallback when no static credentials configured
  • Maintains backward compatibility with existing configurations
  • Eliminates need for long-term IAM access keys
  • Enables zero-config onboarding for new users

references

Summary by CodeRabbit

  • New Features

    • Browser-based OAuth2 (PKCE) fallback for aws/user enabling interactive and headless remote authentication when no static credentials are present.
  • Documentation

    • New product requirements and user-facing docs covering overview, user journeys, credential lifecycle, configuration examples, security, testing, rollout, and success metrics.
  • Roadmap

    • Planned milestone added for browser-based OAuth2 auth (Q1 2026).
docs: Add EKS kubeconfig authentication integration PRD @Benbentwo (#1884)

what
  • Add comprehensive PRD for EKS kubeconfig authentication integration
  • Define integration pattern following ECR PR #1859
  • Document configuration schema, AWS SDK usage, and CLI command design
  • Enhance existing atmos aws eks update-kubeconfig command (not create new atmos auth command)

why

  • Establishes design before implementation to ensure alignment with ECR integration pattern
  • Provides reference for implementation of dependent components (EKS integration, kubeconfig manager)
  • Uses existing atmos aws command namespace to avoid leaking AWS-specific commands into atmos auth
  • Enables parallel development once ECR PR #1859 is merged

references

Summary by CodeRabbit

  • Documentation
    • Added a comprehensive PRD for EKS kubeconfig integration with Atmos authentication, covering architecture and configuration schemas, CLI kubeconfig workflows (integration and explicit cluster modes), multi-cluster support and merge behavior, XDG-compliant kubeconfig storage, AWS integration approach, generated kubeconfig output and environment handling, testing strategy, security considerations, deployment/metrics, and proposed future enhancements.
Fix Claude Code plugin marketplace schema and update docs @aknysh (#2142)

what
  • Fix .claude-plugin/marketplace.json schema that caused /plugin marketplace add cloudposse to fail with Invalid schema: plugins.0.source: Invalid input
  • Change source from "." to "./agent-skills" — the source field must point to the directory containing .claude-plugin/plugin.json
  • Remove non-standard metadata.pluginRoot field — not part of the Claude Code marketplace schema
  • Move version/description to top-level marketplace fields where they belong
  • Remove duplicate plugin-level fields (author, homepage, repository, license, keywords, strict) from the marketplace manifest — these already exist in agent-skills/.claude-plugin/plugin.json
  • Change category from "integration" to "development" — matches the official Anthropic marketplace which uses development, productivity, learning, and security
  • Add uninstall/marketplace remove commands to all documentation (PRD, blog, agent-skills doc, ai-assistants doc)
  • Fix PRD plugin.json example to match the actual agent-skills/.claude-plugin/plugin.json file
  • Fix PRD Team Auto-Discovery section to include extraKnownMarketplaces (was missing, wouldn't work without it)

why

  • The original marketplace.json used "source": "." with a custom metadata.pluginRoot field to resolve the path. This is not part of the Claude Code marketplace schema — Claude Code expects source to directly point to the plugin directory containing .claude-plugin/plugin.json
  • Users running /plugin marketplace add cloudposse/atmos got an opaque validation error with no guidance on what was wrong
  • Documentation was missing uninstall/remove instructions, and the PRD had stale examples that didn't match the actual manifest files
  • Verified fix works locally:
    /plugin marketplace add cloudposse/atmos   → Successfully added marketplace: cloudposse
    /plugin install atmos@cloudposse           → ✓ Installed atmos
    /plugin uninstall atmos@cloudposse         → ✓ Uninstalled atmos
    /plugin marketplace remove cloudposse      → ✔ Removed 1 marketplace
    

references

Summary by CodeRabbit

  • Documentation

    • Added uninstall instructions for the Atmos plugin across multiple documentation pages.
  • Chores

    • Updated plugin configuration and restructured marketplace metadata.
    • Added support for additional marketplace sources to streamline plugin discovery and auto-configuration.
Increase PR size thresholds to accommodate AI-assisted development @osterman (#2136)

what

Updated PR size tier thresholds to better reflect modern AI-assisted development:

  • Medium: 500 → 1,000 lines
  • Large: 1,000 → 5,000 lines
  • Extra Large: new tier (5,001–10,000 lines)
  • Extra Extra Large: new tier (>10,000 lines, triggers warning)

why

Previously, any PR >1,000 lines triggered a Mergify warning. With Claude-assisted development, PRs naturally grow while remaining focused on single, logical changes. The new thresholds reflect this reality: PRs under a few thousand lines are ideal, only those exceeding 10,000 are flagged as too large and should be split into smaller PRs.

changes

  • Added xl_max_size and xxl_label inputs to PR size action
  • Updated workflow to use new threshold values (1,000/5,000/10,000)
  • Migrated Mergify warning from size/xl to size/xxl (>10,000 lines)
  • Documented guidelines in CLAUDE.md for Claude agents

Summary by CodeRabbit

  • New Features

    • Added XXL size label category for pull requests exceeding 10,000 lines of changes.
    • Introduced optional CI/CD failure when XXL-sized PRs are detected.
  • Chores

    • Updated Medium and Large PR size thresholds (Medium: 500→1,000 lines; Large: 1,000→5,000 lines).
    • Adjusted automated review warning message to reflect higher recommended PR size limit.
docs: Add Azure authentication provider documentation @osterman (#2132)

what

Added comprehensive Azure authentication documentation to the authentication configuration reference pages:

  • Added three new provider sections: Azure Device Code, Azure OIDC (Workload Identity Federation), and Azure CLI
  • Each section includes parameter reference, YAML examples, and usage guidance
  • Added GitHub Actions workflow example for Azure OIDC CI/CD authentication
  • Updated multi-provider example to include Azure alongside AWS and GCP
  • Added cross-links to the Azure Authentication tutorial from all auth documentation pages

why

Azure authentication is fully implemented in the codebase with a comprehensive 530-line tutorial, but it was completely missing from the configuration reference pages (providers.mdx, index.mdx, stacks/auth.mdx). Customers navigating CLI Configuration > Auth > Providers would see AWS and GCP but zero Azure entries, making it appear undocumented. This fix surfaces the existing Azure implementation in the reference docs alongside other cloud providers.

references

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Multi-cloud docs and design patterns added (Multi-Cloud Configuration, Application SDLC Environments).
    • Landing hero now highlights AWS, Azure, and GCP with icons and a /multi-cloud link.
    • Resizable docs sidebar with persisted drag-to-resize behavior.
  • Documentation

    • Extensive Azure auth guides (device-code, OIDC/workload identity, CLI) and Azure tutorials added.
    • Expanded GCP auth content; providers/identities reorganized into per-provider tabs and updated cloud-console wording.
  • Style

    • Cloud logos, TOC and sidebar styling improvements.

🚀 Enhancements

Expose ProcessTemplates and ProcessYamlFunctions flags in public API @aknysh (#2161)

what
  • Add optional ProcessTemplates and ProcessYamlFunctions controls to the ProcessComponentInStack and ProcessComponentFromContext public API functions in pkg/describe
  • Uses the functional options pattern: WithProcessTemplates(bool) and WithProcessYamlFunctions(bool)
  • Both functions now accept variadic ...ProcessOption — existing callers compile without changes
  • Both flags default to true when omitted, matching current behavior (templates processed if enabled in atmos.yaml, YAML functions always processed)

why

  • The public API hardcoded ProcessTemplates: true and ProcessYamlFunctions: true in processComponentInStackWithConfig, giving embedded consumers no way to disable processing
  • The Atmos CLI already has --process-templates and --process-functions flags, atmos.yaml has templates.settings.enabled, and stack imports have skip_templates_processing — the public API was the only entry point missing this control
  • terraform-provider-utils embeds Atmos and calls these functions from inside a Terraform provider plugin. When ProcessYamlFunctions is true, !terraform.output tags spawn child terraform init processes that conflict with the parent OpenTofu process's plugin cache, causing ETXTBSY ("text file busy") crashes on Linux
  • The provider only needs component backend config, workspace, and vars — it does not need resolved template or YAML function values
  • With this fix, the provider can pass WithProcessTemplates(false) and WithProcessYamlFunctions(false) to avoid the crash entirely

usage

// Disable both template and YAML function resolution
result, err := describe.ProcessComponentInStack(
    component, stack, configPath, basePath,
    describe.WithProcessTemplates(false),
    describe.WithProcessYamlFunctions(false),
)

// Same for ProcessComponentFromContext
result, err := describe.ProcessComponentFromContext(
    params,
    describe.WithProcessTemplates(false),
    describe.WithProcessYamlFunctions(false),
)

// Existing callers — no changes needed (both default to true)
result, err := describe.ProcessComponentInStack(component, stack, configPath, basePath)
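The option wiring behind this API can be sketched as below. It shows why omitted options default to true and why existing 4-arg calls keep compiling; the unexported names (processOptions, resolveOptions) are illustrative, not necessarily the package's actual internals.

```go
package main

import "fmt"

// processOptions holds the resolved processing flags.
type processOptions struct {
	processTemplates     bool
	processYamlFunctions bool
}

// ProcessOption is a functional option mutating processOptions.
type ProcessOption func(*processOptions)

func WithProcessTemplates(v bool) ProcessOption {
	return func(o *processOptions) { o.processTemplates = v }
}

func WithProcessYamlFunctions(v bool) ProcessOption {
	return func(o *processOptions) { o.processYamlFunctions = v }
}

// resolveOptions applies the variadic options over the defaults.
// Both flags default to true, matching the previous hardcoded behavior,
// so callers passing no options see no change.
func resolveOptions(opts ...ProcessOption) processOptions {
	o := processOptions{processTemplates: true, processYamlFunctions: true}
	for _, opt := range opts {
		opt(&o)
	}
	return o
}

func main() {
	fmt.Println(resolveOptions())                           // {true true}
	fmt.Println(resolveOptions(WithProcessTemplates(false))) // {false true}
}
```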

tests

  • TestProcessComponentInStackTemplatesDisabledOnly: WithProcessTemplates(false) preserves raw Go template strings while YAML functions remain enabled
  • TestProcessComponentInStackTemplatesEnabledOnly: WithProcessTemplates(true) resolves Go templates while YAML functions are disabled
  • TestProcessComponentInStackYamlFunctionsDisabledOnly: WithProcessYamlFunctions(false) preserves raw YAML function tags while templates remain enabled
  • TestProcessComponentInStackYamlFunctionsEnabledOnly: WithProcessYamlFunctions(true) resolves YAML function tags while templates are disabled
  • TestProcessComponentInStackBackwardCompatNoOptions: the old 4-arg call (no options) still works and returns correct vars
  • TestProcessComponentFromContextWithProcessingDisabled: ProcessComponentFromContext respects the WithProcessTemplates(false) functional option

Each flag is tested independently against its own fixture (stack-templates for Go templates, atmos-template-yaml-function for YAML functions), proving the two flags are wired independently.

references

  • Previous related fix (restore public API): docs/fixes/2026-02-15-restore-component-processor-public-api.md
  • Previous related fix (serialize ReadDataSource): cloudposse/terraform-provider-utils#523

Summary by CodeRabbit

  • New Features

    • Added optional public flags (via functional options) to enable or disable template processing and YAML function processing for component description calls, preserving backward compatibility.
  • Documentation

    • Added usage guidance and compatibility notes for the new processing controls.
  • Tests

    • Added tests covering default behavior, enabling/disabling each processing flag, and consistency between different invocation paths.
fix: propagate auth to all YAML functions in multi-component execution @aknysh (#2140)

what
  • Propagate Atmos authentication (SSO credentials) to all YAML functions and Go templates when running multi-component execution (--all, --everything)
  • Fix custom-gcl lint binary to build with Go 1.26 toolchain

Auth propagation fix

  • Create AuthManager in ExecuteTerraformQuery before calling ExecuteDescribeStacks (was passing nil)
  • Propagate both AuthContext and AuthManager on configAndStacksInfo in describe_stacks.go for all 4 component types (terraform, helmfile, packer, ansible)
  • Inject authbridge.Resolver into identity-aware stores for !store/!store.get auth support

Lint toolchain fix

  • Bump .custom-gcl.yml from v2.5.0 to v2.10.1 (Go 1.26 support added in v2.9.0)
  • Add GOTOOLCHAIN override in Makefile so custom-gcl is always built with the project's Go version

why

  • When running atmos terraform plan --all -s <stack>, YAML functions like !terraform.state, !terraform.output, !aws.*, !store, and Go template atmos.Component() failed to use Atmos-managed authentication (e.g., AWS SSO). Single-component execution worked correctly because ExecuteTerraform creates an AuthManager, but the multi-component path (ExecuteTerraformQuery) did not.
  • The custom-gcl binary was built with Go 1.25 after the project switched to Go 1.26 in go.mod, causing the golangci-lint pre-commit hook to fail with "Go language version used to build golangci-lint is lower than targeted"

Affected YAML functions

All of the following are now fixed, listed with the auth mechanism each uses:

  • !terraform.state: stackInfo.AuthContext + AuthManager
  • !terraform.output: stackInfo.AuthContext + AuthManager
  • !aws.account_id: stackInfo.AuthContext.AWS
  • !aws.caller_identity_arn: stackInfo.AuthContext.AWS
  • !aws.caller_identity_user_id: stackInfo.AuthContext.AWS
  • !aws.region: stackInfo.AuthContext.AWS
  • !aws.organization_id: stackInfo.AuthContext.AWS
  • !store / !store.get: authbridge.Resolver
  • atmos.Component(): configAndStacksInfo.AuthContext

references

Summary by CodeRabbit

  • Bug Fixes

    • YAML and template functions (including !terraform.state and !terraform.output) now use authentication correctly during multi-component runs (e.g., --all).
  • Chores

    • Bumped release version to v2.10.1.
    • Removed a custom linter from the build config.
    • Build now invokes an explicit Go toolchain for the linter step.
  • Tests

    • Added extensive unit tests covering auth creation and propagation for YAML/template functions.
fix: Use atmos_component for source provisioner workdir paths @osterman (#2137)

what
  • Fixed source provisioner (JIT vendoring) to use atmos_component (instance name) for workdir paths, matching the fix from PR #2093
  • When metadata.component differs from the component instance name (e.g., instance demo-cluster-codepipeline-iac inheriting from base demo-cluster-codepipeline), both source pull and terraform plan/init now create the same workdir directory
  • Updated determineSourceTargetDirectory() and buildWorkdirPath() to check for atmos_component before falling back to the passed component name
  • Added comprehensive tests covering the atmos_component workdir isolation case

why

This is a follow-up to PR #2093 which fixed workdir collision for parallel component instances. However, that PR only updated the workdir provisioner. When both source and workdir are enabled, the workdir provisioner defers to the source provisioner (AutoProvisionSource), which was still using extractComponentName() that returns the base component name. This caused JIT vendoring to write source code to a different workdir path than terraform plan/init expects, breaking the fix.

references

Closes #2134
Follows PR #2093 (fix for issue #2091)

Summary by CodeRabbit

  • Refactor

    • Consolidated workdir path construction logic into a centralized utility for improved consistency.
    • Workdir paths now properly utilize component instance names when specified.
  • Tests

    • Added comprehensive test coverage for workdir path resolution and instance name handling.
Fix identity prompts to respect --interactive flag @[copilot-swe-agent[bot]](https://github.com/apps/copilot-swe-agent) (#2130)

Identity selection prompts appeared even when `--interactive=false` was specified. The `isInteractive()` function in `pkg/auth/manager.go` checked TTY and CI status but not the flag itself.

Changes

  • pkg/auth/manager.go: Modified isInteractive() to check viper.GetBool("interactive") before TTY/CI checks
  • pkg/auth/interactive_test.go: Added test coverage for flag behavior

Implementation

func isInteractive() bool {
    // Check if interactive mode is enabled via flag or environment.
    if !viper.GetBool("interactive") {
        return false
    }
    
    // Check if stdin is a TTY and not in CI.
    return term.IsTTYSupportForStdin() && !telemetry.IsCI()
}

Now aligns with the existing pattern in pkg/flags/interactive.go.

Behavior

With no default identity configured:

  • atmos list stacks --interactive=false: Error: "no default identity configured"
  • atmos list stacks (default): Prompts user (if TTY)
  • ATMOS_INTERACTIVE=false atmos list stacks: Error
Original prompt

This section details the original issue being resolved.

<issue_title>Identity related prompts doesn't follow the "Interactive" settings</issue_title>
<issue_description>### Describe the Bug

When identities are present and no default is set,
Atmos will always prompt for an identity.

Expected Behavior

If no default is set and multiple identities are present, atmos should exit with an error code.

Steps to Reproduce

  • Add 2+ identities to the atmos.yaml files (with no default value set)
  • Execute atmos list stacks --interactive=false

Screenshots

$ atmos list stacks --interactive=false
┃ No default identity configured. Please choose an identity:
┃ Press ctrl+c or esc to exit> dev
┃   dev2

Environment

No response

Additional Context

No response</issue_description>

Comments on the Issue (you are @copilot in this section)

@osterman That does appear to be the incorrect behavior.


fix: propagate component-type level dependencies through stack processor @osterman (#2127) ## what
  • Stack processor now extracts and merges dependencies from global (Scope 1) and component-type (Scope 2) sections
  • Component-type level dependencies defined via terraform.dependencies.tools (and helmfile/packer/ansible equivalents) now flow through to component configs
  • Toolchain auto-install is now triggered for mixin-pattern dependencies defined at the component-type level
  • Dependencies merge chain now includes all 3 scopes: global/component-type → base component → component instance

why

The bug prevented users from configuring dependencies at the component-type level (Scope 2) via mixin patterns like:

terraform:
  dependencies:
    tools:
      terraform: "1.6.0"

The stack processor dropped this data before it reached the toolchain resolver, so auto-install never triggered. Users reported `exec: "terraform": executable file not found in $PATH` errors when they configured Scope 2 dependencies.

This fix ensures all 3 scopes of dependencies are properly extracted, merged with correct precedence, and propagated through to component sections where the resolver can access them.
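The three-scope precedence can be sketched as a last-writer-wins map merge. This is a minimal illustration with hypothetical names, not Atmos's actual merge implementation: scopes are applied in order from lowest to highest priority, so later scopes override earlier ones per tool.

```go
package main

import "fmt"

// mergeToolDependencies merges tool-version maps with later scopes taking
// priority, mirroring the chain described above:
// global/component-type → base component → component instance.
func mergeToolDependencies(scopes ...map[string]string) map[string]string {
	merged := map[string]string{}
	for _, scope := range scopes {
		for tool, version := range scope {
			merged[tool] = version // later scope wins
		}
	}
	return merged
}

func main() {
	componentType := map[string]string{"terraform": "1.6.0"} // Scope 2 (mixin)
	baseComponent := map[string]string{"terraform": "1.7.0"} // base component
	instance := map[string]string{"opentofu": "1.8.0"}       // component instance

	// Go prints map keys in sorted order, so this output is deterministic.
	fmt.Println(mergeToolDependencies(componentType, baseComponent, instance))
}
```

Under this scheme a Scope 2 pin survives into the merged result unless a higher scope overrides it, which is exactly what the toolchain resolver needs to see in order to trigger auto-install.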

references

Fixes the bug discussed in the Slack conversation where Jonathan Rose configured Scope 2 dependencies but toolchain auto-install wasn't triggered.

Changes:

  • errors/errors.go: Added 5 sentinel errors for dependencies validation
  • internal/exec/stack_processor_process_stacks_helpers.go: Added GlobalDependencies field to ComponentProcessorOptions
  • internal/exec/stack_processor_process_stacks.go: Extract and merge dependencies from all 3 scopes, pass through opts builders
  • internal/exec/stack_processor_merge.go: Updated merge chain to include global dependencies with lowest priority
  • tests/: Added 2 integration tests proving Scope 2 dependencies trigger toolchain auto-install

Summary by CodeRabbit

  • New Features

    • Declare dependencies at global, component-type, and component scopes; these are merged with defined precedence and propagated into component processing.
  • Chores

    • Added validation error sentinels to surface invalid dependencies sections across component types.
  • Tests

    • Added integration fixtures and unit tests covering dependency propagation, precedence, inheritance, binary/toolchain handling, and invalid-section error paths.
