Daily Beta Build v0.2.4b2-daily.20260502


Note

This is a daily beta build (2026-05-02). It contains the latest fixes and improvements but may have undiscovered issues.

Docker users: Update by pulling the new image:

docker pull ghcr.io/maziggy/bambuddy:daily

or

docker pull maziggy/bambuddy:daily


**Tip:** Use [Watchtower](https://containrrr.dev/watchtower/) to automatically update when new daily builds are pushed.

Added

  • AMS slot Load / Unload from the printer card (#891, reported by NNeerr00, +1 from cadtoolbox) — The MQTT primitives for "load filament from a tray" and "unload the currently loaded tray" already existed in bambu_mqtt.py (reverse-engineered from BambuStudio captures, including the H2D dual-extruder right-external case captured fresh during this work) but were unused — there was no HTTP route and no UI. Net effect: every Load / Unload had to happen on the printer touchscreen, and external-spool users on dual-nozzle H2D had no way to drive Ext-R from the desktop at all. Backend: new POST /printers/{id}/ams/load?tray_id={int} and POST /printers/{id}/ams/unload, both gated on Permission.PRINTERS_CONTROL. The load route validates tray_id ∈ {0..15, 254, 255} (AMS slots, single-external/Ext-L, Ext-R respectively) and returns a human-readable target in the success message ("AMS 0 slot 1", "external spool", "Ext-R") so the UI toast tells the user which spool the printer is now feeding from. MQTT primitive update: ams_load_filament gains a third encoding branch for tray_id=255 matching the BambuStudio capture verbatim — ams_id=255, slot_id=0 (the right-extruder index, not a slot index — Bambu's load command on dual-extruder externals encodes the destination extruder, not the source slot), target=255, and curr_temp = tar_temp = right-nozzle temp (read from state.temperatures["nozzle_2"], falling back to 215 °C if the right nozzle is cold or unknown — the printer rejects nonsensical temps, so a warm fallback is safer than -1). The existing tray_id=254 branch is preserved verbatim (slot_id=254, curr/tar=-1) since that came from a single-extruder capture and is known to work; no risk of regression on existing single-external setups. UI: the existing AMS slot popover (the one with "Re-read RFID") gains two new entries — "Load" (posts tray_id = ams.id * 4 + slotIdx) and "Unload" (no params, global on the currently-loaded slot). The external spool slot — which had no popover at all before — gets one with the same Load + Unload entries, and on dual-nozzle H2D each external slot (Ext-L tray_id=254, Ext-R tray_id=255) drives its own extruder. The menu is hidden while state === 'RUNNING' (parallels the existing RFID re-read gating). i18n: printers.ams.load, printers.ams.unload, plus four new toast strings (loadInitiated, unloadInitiated, failedToLoad, failedToUnload) added to all 8 locales — English fully translated, German fully translated, the other 6 locales seeded with English copy pending native translation (matches the project's existing flow for newly-added user-facing features). 20 new tests pin the contract: 5 unit tests in test_bambu_mqtt.py::TestAmsLoadFilamentEncoding (AMS slot encoding, Ext-L preserves legacy capture, Ext-R uses the new captured shape with actual right-nozzle temp, Ext-R falls back to 215 °C when cold, disconnected client doesn't publish); 11 integration tests in test_printers_api.py::TestAMSLoadUnloadAPI (load: invalid tray_id 400, not-found 404, not-connected 400, AMS slot success with derived ams_id*4+slot math, Ext-L success, Ext-R success, MQTT failure 500; unload: not-found, not-connected, success, MQTT failure 500); 4 frontend tests in PrintersPageAmsLoadUnload.test.tsx (Load posts the right tray_id, Unload posts with no params, menu hidden while RUNNING, external spool's tray_id=254 round-trips through the route). An illustrative sketch of the tray_id addressing convention follows this list.
  • API keys can read Bambu Cloud presets on the owner's behalf (#1182, reported by turulix) — Tim is building a fully automated headless slicing pipeline against Bambuddy's API and hit the wall flagged in the previous round of cloud-auth work (#665): /cloud/* routes resolve cloud_token per-user from User.cloud_token, but the auth gate (require_permission_if_auth_enabled, auth.py:856) returned None for API-keyed requests, so the route fell back to the global Settings-table token, which only carries a value in auth-disabled deployments. Net effect on auth-enabled deployments: API keys reached the gate just fine, then /cloud/filaments always saw user=None, called get_stored_token(db, None) against an empty Settings table, and returned 401 / empty results — no path to read the slicer presets, filament catalogue, or device list that a CLI workflow needs. The data model treated API keys as standalone tokens with no owner (APIKey had id, name, key_hash, scope flags, and printer_ids — no user_id), so even if the gate wanted to delegate the cloud lookup, there was no User to delegate to. The fix: make API keys carry an owner, route /cloud/* lookups through that owner, and gate the new capability behind an explicit opt-in scope so existing automation doesn't gain cloud-read access on upgrade. Concretely: (1) APIKey gains user_id (FK to users.id, ON DELETE CASCADE — Postgres enforces, SQLite plus an explicit DELETE FROM api_keys WHERE user_id = ? in the user-delete route since SQLite ships FK enforcement off; the project's existing pattern at users.py:397-406 for created_by_id cleanup) and can_access_cloud (BOOLEAN DEFAULT 0 — opt-in, never set on legacy rows). (2) The auth gate now returns the owner User when it validates an API key with user_id set, so /cloud/* routes naturally resolve user.cloud_token the same way they do for JWT-authed sessions. Permission semantics are preserved — API keys still bypass the per-route permission check (their scopes live on the row itself), the User return is only so cloud-aware routes can read per-user state. Legacy ownerless keys (user_id IS NULL) keep returning None, stay anonymous, and continue working against every non-cloud route exactly as before. (3) A router-level dependency on the /cloud/* APIRouter enforces three independent fences for API-keyed callers: user_id IS NOT NULL (legacy keys → 401 with "recreate it from Settings → API Keys" — explicit recreate path rather than silently degrading), can_access_cloud=True (otherwise 403 with "Enable 'Allow cloud access' on the key"), and build_authenticated_cloud returning a service (otherwise 401 with the existing token-not-set error — unchanged for JWT flow). The router-level dep duplicates the API-key validation done by the regular auth gate (router-level deps run before route-level deps in FastAPI, so request.state isn't populated yet) — the cost is one extra SELECT FROM api_keys per cloud request, bounded and cheap with the key_prefix index. (4) The create route stamps user_id = current_user.id from the creator and rejects can_access_cloud=True when auth is disabled (no per-user cloud_token storage exists in that mode — fail loudly at create time rather than silently producing a non-functional key). PATCH route rejects flipping can_access_cloud to True on a legacy ownerless key for the same reason — force recreate. 
(5) APIKeyResponse exposes user_id so the UI can show ownership at a glance: a "Cloud" badge for cloud-enabled keys and a "Legacy" badge with hover tooltip ("Created before per-user ownership; recreate to use cloud access") for ownerless rows. The form gains an "Allow cloud access" checkbox, default off. Migration: two idempotent ALTER TABLE api_keys ADD COLUMN (user_id INTEGER REFERENCES users(id) ON DELETE CASCADE and can_access_cloud BOOLEAN DEFAULT 0) plus an index on user_id for the auth-gate's owner→keys lookup that runs on every API-keyed request. i18n: 5 new keys (settings.cloudAccess, settings.cloudAccessDescription, settings.cloudBadge, settings.legacyKey, settings.legacyKeyTooltip) added to all 8 locales — English fully translated, German fully translated, the other 6 locales seeded with English copies pending native translation (matches the project's existing flow for newly-added user-facing features). 9 backend integration tests in test_api_key_cloud_access.py: create stamps owner + cloud flag, defaults off when not asked for, rejected when auth disabled (no per-user storage), PATCH rejected on legacy keys; cloud router rejects legacy keys with the recreate copy, rejects owned-but-no-cloud-flag keys with the enable-cloud-access copy, lets owned-and-flagged keys through with owner's cloud_token in the response, JWT callers unaffected (gate is no-op for non-API-keyed); user-delete CASCADEs the API keys via the explicit DELETE in the route. 2 frontend SettingsPage tests pin the badge rendering matrix (Cloud badge present on can_access_cloud=true, Legacy badge present on user_id=null, neither rendered on a normal owned non-cloud key) and the create-form contract (toggling "Allow cloud access" results in can_access_cloud=true in the POST body). Permission semantics for the new fence are the only behavioural change for existing API keys: keys created before this release become "legacy" rows and are rejected at /cloud/* with the recreate message; every other endpoint they were used against — queue, status, control — is untouched.
  • Home Assistant addon detection — Settings → Updates and the in-app update banner now defer to the HA Supervisor (#1167, reported by Spegeli) — Bambuddy already shipped HA_URL/HA_TOKEN env-var support specifically labelled "for HA Add-on deployments" (#283) and a community-maintained HA addon (hobbypunk90/homeassistant-addon-bambuddy) exists upstream, so an HA-supervised installation is a real first-class deployment shape. Until now though, the update UI didn't know about it: HA addon users got the same "Update available!" banner as everyone else and, if they clicked through to Settings, saw the docker-compose snippet ("docker compose pull && docker compose up -d") which they cannot run from inside an HA addon container — that's the Supervisor's job. Detection uses the canonical signal: HA Supervisor injects SUPERVISOR_TOKEN into every addon container, and that variable is not set in any other environment. A new _is_ha_addon() helper in backend/app/api/routes/updates.py flips a request-level boolean which /updates/check surfaces as is_ha_addon: bool + an extended update_method: 'git' | 'docker' | 'ha_addon' enum. The HA check runs before the Docker check on /updates/apply because HA addons are Docker containers — checking Docker first would mis-classify them and serve the wrong message; the response also keeps is_docker: true alongside is_ha_addon: true so older frontend bundles still hit a managed-deployment branch (degrading to the Docker UX) instead of rendering an in-app Install button that can't work. Frontend branches identically: SettingsPage.tsx's update card checks is_ha_addon first and renders "Updates are managed by the Home Assistant Supervisor. Open Settings → Add-ons → Bambuddy in Home Assistant to install the new version." in place of the docker-compose hint; Layout.tsx's update banner is suppressed entirely for HA addons since the HA Supervisor's own update notification already surfaces the new version natively in the HA UI and a duplicate Bambuddy banner would just be noise that links to a page that says "go to HA". Plain Docker deployments are unaffected — the existing docker-compose hint and the in-app banner still render the same way they did. Localised across all 8 UI languages (en/de/fr/it/ja/pt-BR/zh-CN/zh-TW) with full translations of the new settings.updateViaHomeAssistant string. 10 new tests pin the contract: 3 backend unit tests for _is_ha_addon() (env var present → true, absent → false, empty string treated as unset to guard against shells that export it empty), 1 backend integration test for the HA-precedes-Docker rejection on /updates/apply (asserts the message says "Home Assistant" and not "Docker Compose"), 2 backend integration tests for /updates/check covering the HA-addon branch (update_method == "ha_addon", both flags true) and the plain-Docker branch (is_ha_addon: false, update_method == "docker"); 2 frontend SettingsPage tests pin the mutually-exclusive UI rendering (HA branch shows the HA copy and not the docker-compose snippet; Docker branch shows the snippet and not the HA copy, neither shows the Install button); 2 frontend Layout tests pin the banner suppression for HA and its retention for plain Docker. A minimal sketch of the detection logic follows this list.
  • OIDC auto-created users now get readable usernames and land in a configurable group (#1173) — Two improvements to the OIDC auto-create flow: (1) Username derivation: Bambuddy now derives the username from preferred_username, then name, before falling back to the opaque provider_sub[:30]. Each candidate is sanitized independently — alphanumerics plus dot, hyphen, and underscore, whitespace collapsed, deduplication suffix appended on collision — so a value that strips to empty (e.g. "!!!") correctly falls through to the next option rather than silently producing "oidcuser". (2) Default group: each OIDC provider gains a default_group_id field. When set, auto-created users are placed in that group; when unset, the existing "Viewers" fallback is preserved, so behaviour is unchanged for existing deployments. The column is nullable with ON DELETE SET NULL; SQLite does not enforce FK constraints here, so a deleted configured group falls through to Viewers at runtime. default_group_id is validated on create/update (422 on a non-existent group). Exposed in the OIDC settings form as a group dropdown. Limitation: to clear a configured default group, delete the group or select a different one — explicit reset-to-null is not currently supported. A sketch of the username derivation cascade follows this list.
  • Filament Track Switch (FTS) support — print modal filament dropdown is no longer empty when an X2D / H2D has the FTS accessory installed (#1162, reported by mkavalecz) — When the FTS accessory is installed the printer's MQTT payload changes one nibble of the per-AMS info bitmask: bits 8-11 flip from a fixed extruder ID (0x0 / 0x1) to 0xE ("uninitialized"), because the AMS is no longer wired to a single nozzle — the FTS dynamically routes any slot to either extruder. Bambuddy's MQTT parser already skipped 0xE entries when building ams_extruder_map (matching BambuStudio's reading for boot-time transient state), so with the FTS installed the map ended up empty and the print modal's filament dropdown — which filters by extruderId === nozzle_id to prevent cross-nozzle assignment ("position of left hotend is abnormal" failures) — filtered out every loaded slot. Net effect: empty Filament Mapping dropdown on every dual-nozzle print with the FTS, even when the AMS was fully loaded with the right material. Detection comes from a new MQTT field — print.device.fila_switch — which is non-null only when the accessory is installed; it carries the routing topology as two arrays: in[track] = currently fed slot (-1 = empty) and out[track] = extruder this track terminates at. The fix surfaces this through a new FilaSwitchState dataclass on PrinterState (installed, in_slots, out_extruders, stat, info) and the equivalent FilaSwitchResponse Pydantic schema on the GET /printers/{id}/status route. Frontend (useFilamentMapping.ts + FilamentMapping.tsx) skips the per-extruder filter when printerStatus.fila_switch?.installed === true so any compatible AMS slot can satisfy any nozzle's filament requirement, since the FTS handles the routing. Slots currently fed into a track also get a routing badge in the dropdown — [L] or [R] — so the user can tell at a glance which slot the FTS is currently routing where (idle slots get no badge: they can be routed to either extruder on demand). The hard "no cross-nozzle assignment" filter on real dual-nozzle printers without the FTS stays untouched (still trips the same way it always has — fila_switch == null keeps the existing behaviour). 4 backend tests in test_bambu_mqtt.py::TestFilamentTrackSwitchDetection (default-not-installed, detect-from-MQTT-using-the-reporter's-bundle, no-fila_switch-field-stays-not-installed, missing-in-out-arrays-don't-crash) and 2 frontend tests in useFilamentMapping.test.ts (FTS-active drops the nozzle filter; explicit fila_switch: null keeps the filter applied). Upstream fila_switch payloads with anything other than the documented shape are tolerated — installed flips on the presence of the field, the routing arrays default to empty lists if missing, and the dropdown skips the badge for slots not currently in in_slots. A sketch of the fila_switch parsing follows this list.
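
For the AMS Load / Unload entry (#891): a minimal Python sketch of the tray_id addressing convention and the human-readable target used in the success toast. The helper names and the 1-based slot numbering in the toast string are assumptions, not bambuddy's actual code.

```python
# Illustrative only: tray_id 0-15 address AMS slots, 254 the single-external /
# Ext-L spool, 255 the Ext-R spool on dual-extruder H2D.
VALID_TRAY_IDS = set(range(16)) | {254, 255}

def tray_id_for_slot(ams_id: int, slot_idx: int) -> int:
    """AMS slots are addressed as ams_id * 4 + slot_idx (four slots per AMS)."""
    return ams_id * 4 + slot_idx

def describe_tray(tray_id: int) -> str:
    """Human-readable target for the success toast (slot numbering assumed 1-based)."""
    if tray_id not in VALID_TRAY_IDS:
        raise ValueError(f"invalid tray_id {tray_id}")
    if tray_id == 254:
        return "external spool"   # single external / Ext-L
    if tray_id == 255:
        return "Ext-R"            # right extruder's external spool
    return f"AMS {tray_id // 4} slot {tray_id % 4 + 1}"

assert tray_id_for_slot(0, 1) == 1
assert describe_tray(1) == "AMS 0 slot 2"
assert describe_tray(255) == "Ext-R"
```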
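
For the Home Assistant addon detection entry (#1167): a minimal sketch of the SUPERVISOR_TOKEN check and the HA-before-Docker precedence. Only _is_ha_addon() and the update_method values come from the entry; detect_update_method is a made-up helper name.

```python
import os

def _is_ha_addon() -> bool:
    # The HA Supervisor injects SUPERVISOR_TOKEN into every addon container;
    # an empty exported value is treated as "not an addon".
    return bool(os.environ.get("SUPERVISOR_TOKEN"))

def detect_update_method(is_docker: bool) -> str:
    # HA is checked before Docker: addons *are* Docker containers, so a
    # Docker-first check would mis-classify them.
    if _is_ha_addon():
        return "ha_addon"
    return "docker" if is_docker else "git"
```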
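
For the OIDC username entry (#1173): a hedged sketch of the derivation cascade (preferred_username, then name, then provider_sub[:30]). The sanitiser details (how collapsed whitespace is rewritten, the exact fallback order of slicing and sanitising) and the function names are assumptions, and the collision-suffix step is omitted.

```python
import re

def _sanitize(candidate: str) -> str:
    # Collapse whitespace (rewritten as underscores here, an assumption), then
    # keep only alphanumerics plus dot, hyphen, and underscore.
    candidate = re.sub(r"\s+", "_", candidate.strip())
    return re.sub(r"[^A-Za-z0-9._-]", "", candidate)

def derive_username(preferred_username: str | None, name: str | None, provider_sub: str) -> str:
    # Candidates are tried in order; one that strips to empty (e.g. "!!!")
    # falls through to the next instead of producing an unusable name.
    for candidate in (preferred_username, name):
        if candidate:
            cleaned = _sanitize(candidate)
            if cleaned:
                return cleaned
    return _sanitize(provider_sub)[:30]   # opaque provider_sub as the last resort

assert derive_username("!!!", "Jane Doe", "a1b2c3") == "Jane_Doe"
```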
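
For the Filament Track Switch entry (#1162): a sketch of how the fila_switch MQTT field could map onto the FilaSwitchState shape described above. Field names follow the entry; the parsing details and defaults are assumed.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FilaSwitchState:
    installed: bool = False
    in_slots: list[int] = field(default_factory=list)       # per track: currently fed slot, -1 = empty
    out_extruders: list[int] = field(default_factory=list)  # per track: extruder the track terminates at
    stat: Optional[int] = None
    info: Optional[int] = None

def parse_fila_switch(device: dict) -> FilaSwitchState:
    fs = device.get("fila_switch")
    if fs is None:
        return FilaSwitchState()              # no FTS: the per-nozzle dropdown filter stays on
    return FilaSwitchState(
        installed=True,                       # presence of the field is the detection signal
        in_slots=list(fs.get("in") or []),    # routing arrays default to empty when missing
        out_extruders=list(fs.get("out") or []),
        stat=fs.get("stat"),
        info=fs.get("info"),
    )

state = parse_fila_switch({"fila_switch": {"in": [0, -1], "out": [0, 1]}})
assert state.installed and state.in_slots == [0, -1]
assert not parse_fila_switch({}).installed
```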

Fixed

  • iframe embedding from trusted origins (e.g. Home Assistant Webpage panel) no longer blocked (#1191, reported by azurusnova) — Bambuddy ships strict anti-clickjacking headers (X-Frame-Options: SAMEORIGIN and CSP frame-ancestors 'none') by default, which protects internet-exposed deployments from being embedded by hostile sites. But it also broke a documented integration path: Home Assistant's Webpage dashboard panel embeds Bambuddy via <iframe> on a different origin (HA on :8123, Bambuddy on :8000), and the SAMEORIGIN value is port-strict, so even same-LAN trusted setups got "refused to connect". A new TRUSTED_FRAME_ORIGINS env var takes a comma-separated list of scheme://host[:port] origins; when set, the middleware drops X-Frame-Options (modern browsers honor frame-ancestors, and the legacy ALLOW-FROM <url> syntax is deprecated and inconsistent across vendors) and the CSP frame-ancestors directive becomes 'self' <origin> <origin>.... The default — empty env var — keeps the strict 'none' behavior, so Docker / bare-metal users without HA see no behavioural change. Origin validation happens at startup: only http:// and https:// are accepted, paths/query/fragments/wildcards are rejected with a warning (one bad entry doesn't take the deployment down — it's just dropped from the allowlist). The gcode-viewer route's frame-ancestors 'self' (same-origin embed for the in-app gcode preview iframe) also includes the allowlist when configured, so HA users embedding Bambuddy can still open the gcode viewer modal. 16 new tests in test_security_headers.py: 12 unit tests for the env-var parser (empty / unset / single / multiple / whitespace / empty-segment / non-http scheme dropped / missing host dropped / path dropped / query+fragment dropped / wildcard dropped / trailing-slash kept) and 4 integration tests for the middleware (default-strict emits SAMEORIGIN + 'none', allowlist relaxes CSP and drops X-Frame-Options, /docs branch also honors the allowlist, other security headers like X-Content-Type-Options and Referrer-Policy are unaffected in both modes). Documented in the Docker env-var reference page on the wiki and in .env.example. A sketch of the origin-validation rules follows this list.
  • Virtual Printer queue mode auto-dispatched onto the wrong colour when multiple compatible printers were available (#1188, reported by EdwardChamberlain) — Sending a sliced 3MF to a queue-mode VP via Orca / Studio with auto-dispatch on caused Bambuddy to schedule the job onto a printer of the right model but the wrong loaded filament: a print sliced for matte white PLA would land on a printer with no white loaded, and the printer would start the job using whatever was the closest available match. Edward's diagnosis was exact (virtual_printer/manager.py:325-326): the manual /api/v1/print-queue/ POST flow extracts the 3MF's per-slot filament requirements at queue-add time and writes required_filament_types, filament_overrides, and ams_mapping on the resulting PrintQueueItem, so the scheduler's color-match enforcement (print_scheduler.py:512 — keys on filament_overrides[].force_color_match === true) actually runs. The VP queue-write path (_add_to_print_queue) skipped all of that and built a bare PrintQueueItem with only printer_id, target_model, archive_id, plate_id, position, status, manual_start. Net effect: the scheduler reached the model-only-matching fallback and accepted the first available printer of the target model regardless of loaded colour, exactly as he described. Fix: the scheduler's existing _get_filament_requirements 3MF parser is extracted into a shared helper (backend/app/services/filament_requirements.py:extract_filament_requirements) so the VP path can reuse it at upload time. The VP's _add_to_print_queue now calls that helper after archiving and populates required_filament_types unconditionally (cheap; helps the scheduler reject obvious type mismatches even without force_color_match); and writes filament_overrides with force_color_match: true per consumed slot when a new per-VP setting queue_force_color_match is on. Default is off to preserve current behaviour for upgraders — a fresh-install user who wants the bug-free behaviour flips the toggle once on the VP card; an existing user gets exactly the model-only-matching they had before until they opt in. Auto-dispatch onto the wrong material happens loudly enough that anyone affected can find the toggle. Why default-off rather than default-on: existing automation that relies on "send to queue VP, get printed somewhere" without caring about colour shouldn't silently start blocking on colour matching after an upgrade. The toggle has clear UI copy (virtualPrinter.queueForceColorMatch) explaining the trade-off. Defence in depth: a malformed or unparseable 3MF (e.g. fake bytes from a misconfigured upload tool) leaves both fields None and the scheduler falls back to model-only matching, matching pre-fix behaviour for the unhappy path. The scheduler itself is unchanged — it already handled force_color_match correctly when the field was populated; the bug was purely the VP path not populating it. Schema: one nullable column virtual_printers.queue_force_color_match BOOLEAN DEFAULT 0/FALSE (Postgres-safe) added via the existing _safe_execute migration pattern. API: VirtualPrinterCreate and VirtualPrinterUpdate Pydantic schemas + _vp_to_dict response shape carry queue_force_color_match, the create + update routes wire it through to the model, and VirtualPrinterInstance constructor + multiVirtualPrinterApi TypeScript client mirror the field. UI: new toggle on VirtualPrinterCard rendered only when mode === 'print_queue' (parallels the existing auto_dispatch toggle's mode-gating), with pendingAction state for the in-flight indicator. 
i18n: new virtualPrinter.queueForceColorMatch.{title,description} keys in all 8 locales — English fully translated, German fully translated, the other 6 locales seeded with English copy pending native translation (matches the project's existing flow for newly-added user-facing features). 11 new tests: 8 in test_filament_requirements.py covering the extracted parser end-to-end (per-slot dicts, zero-use slots filtered, plate filtering, no-plate flat-walk fallback, unparseable / missing / config-less files, sorted output); 3 in test_virtual_printer.py::TestVirtualPrinterInstance covering the VP write path (setting-off → only required_filament_types populated; setting-on → filament_overrides populated with force_color_match: true per slot; unparseable 3MF → both fields None, no crash). Existing scheduler tests still pass against the refactored helper (verified end-to-end across the scheduler / virtual_printer / print_queue / filament test suites — 479 tests). Edward's "out of scope nice-to-have" suggestion of a "Requires Color Match" pill on queue cards is deferred to a follow-up so this PR stays scoped to his repro. A sketch of the queue-write behaviour follows this list.
  • Slicing a library file via API key fails with "no Bambu Cloud session is stored" even when the key has cloud access (#1182 follow-up, reported by turulix) — Tim shipped the headless slicing pipeline #1182 was filed for, then hit a second wall: GET /api/v1/cloud/settings returned the cloud preset IDs correctly (the /cloud/* router-level gate from #1182 was doing its job), but POST /api/v1/library/files/{id}/slice with those IDs in the request body failed the slice job with error_status: 400, error_detail: "Cloud preset selected for printer, but no Bambu Cloud session is stored. Sign in to Bambu Cloud and retry." Cause: the /cloud/* fix routes the API key's owner User through cloud_caller (a router-level gate stashes the owner on request.state.api_key_owner, route-level deps pull it back out), but the slice route lives on /library/* — different router, no gate, so when the auth dep returned None for the API-keyed request the slice route passed current_user_id=None straight through to _run_slicer_with_fallback → resolve_cloud(db, user=None) → get_stored_token(db, None), which falls back to the auth-disabled global Settings table. That table is empty in auth-enabled deployments, so cloud preset resolution failed even though the key's owner User had a perfectly valid cloud_token on their User row. Fix is a new route-level dep resolve_api_key_cloud_owner in cloud.py that's permissive (returns the owner User if the key has can_access_cloud=true, otherwise None — never raises) so it can be safely added to non-/cloud/* routes without breaking the local-presets path: a request with an API key that lacks the cloud scope still slices fine against local presets, and only fails with the existing "no Bambu Cloud session" error if it actually selects a cloud preset. Wired into POST /library/files/{id}/slice (Tim's blocker) and GET /slicer/presets (the SliceModal preset dropdown source — same root cause, would have hit anyone using the UI through an API-keyed reverse proxy). Both routes now resolve the cloud-token owner via current_user or api_key_cloud_owner instead of current_user.id if current_user else None. The auth gate's None-return for API keys is unchanged — keeping that fix scoped to the routes that actually need cloud-token resolution prevents accidental scope creep into other routes that fence on current_user is None. 4 new integration tests in test_api_key_cloud_access.py::TestSliceRouteCloudOwnerResolution pin the dep contract: returns the owner for a key with can_access_cloud=True and a valid owner; returns None for an owned key without the cloud scope (so cloud presets still 400 cleanly, local presets still slice); returns None for legacy ownerless keys; no-op for JWT and anonymous callers. A sketch of the permissive owner-resolution dependency follows this list.
  • Project cover photo thumbnail too small to recognise the print (#1155 follow-up, reported by smandon) — The 40×40 thumbnail smandon's MakerWorld download workflow relied on for "is this the model I'm looking for?" wasn't readable at that size; he asked for either a larger thumbnail or a click-to-enlarge full preview. Enlarging the thumbnail itself would shift the card layout and cost the dense grid he chose to use for browsing many projects, so the fix keeps the 40×40 thumbnail and shows a portal-mounted 384×384 popover on hover. The popover renders the full image in object-contain so tall portrait MakerWorld photos aren't cropped to a square, has pointer-events-none so it can't intercept hover and create a flicker loop, and z-[100] so it stacks above every sibling card in the grid. Why a portal: ProjectCard carries overflow-hidden (for its rounded-corner clipping and the color accent bar), so an in-tree popover gets clipped by the card the moment it extends past the card's bounds — exactly the cut-off behaviour smandon reported on the second iteration. Rendering via createPortal(..., document.body) escapes every ancestor clipping context, and position: fixed with measurements from getBoundingClientRect() keeps the popover pinned next to the thumbnail regardless of where the card sits in the grid. Edge handling: if the thumbnail is near the viewport's right edge the popover flips to the LEFT side of the thumbnail; vertical position is clamped so the popover never overflows the window top or bottom. The thumbnail's own onClick is stopPropagation'd so hovering the popover area never accidentally triggers the parent card's "open project" navigation. 2 new tests in ProjectsPage.test.tsx pin the contract: hovering mounts the popover at document.body level (not nested in the card — a future refactor that drops the portal would re-introduce the clipping bug, and the test catches that); leaving unmounts it; the popover img points at the same cover-image URL as the small thumbnail with object-contain; cards without a cover_image_filename never mount the portal-rendering component (so a hover doesn't flash an empty preview).
  • Spool edit form lost the Extra Colours value on reopen, Dual Color rendered identically to Gradient, and the Sparkle / checkerboard visuals were too subtle (#1154 follow-up, reported by maugsburger) — Four issues against the multi-colour swatch work that landed for #1154. (1) Extra Colours input didn't hydrate on edit reopen: ColorSection's draft buffer was seeded once via useState(formData.extra_colors), but SpoolFormModal opens before its own useEffect populates formData from the spool record — so by the time the saved value landed, the input's local state had already been initialised to '' and never re-synced. The COLOR preview banner above the input rendered correctly (consumes formData directly), making it obvious the data WAS persisted; only the input was stuck blank, which the user then had to retype to save anything else. Fix: a ref-guarded useEffect resyncs extraColorsDraft when formData.extra_colors changes via an external update (e.g. modal opening with a spool); the ref is updated inside commitExtraColors so the user's own typing is round-tripped without the resync clobbering it. (2) Dual Color and Gradient produced the same diagonal blend: buildColorLayer in filamentSwatchHelpers.ts ran the same linear-gradient(135deg, ...) for both effect types, so a "Dual Color" spool was visually indistinguishable from a "Gradient" one. Real dual-colour spools have two distinct bars on the reel — that's the whole point of the variant. Fix: when effect_type is dual-color or tri-color, build the colour layer as linear-gradient(to right, c1 0% X%, c2 X% Y%, ...) with CSS double-position stops (so the colour change is a hard line rather than a blend region) and equal-width segments across the stops; gradient keeps the original 135° smooth blend. The existing multicolor conic-gradient path is untouched. (3) Sparkle effect was almost invisible on card-sized swatches: the original 4-dot pattern (each ~1px) read fine on the small inline swatch but disappeared on the 60-pixel-tall inventory card banners — exactly where the user actually identifies a spool. Bumped to 13 flecks in mixed sizes (1px / 1.5px / 2px) and varying opacity (0.65 → 1.0) to give a depth-of-field "metal flake" feeling, distinct from solid + multi-colour. (4) Checkerboard cell density scaled with the swatch: the previous helper put repeating-conic-gradient(...) in the background-image and the caller applied background-size: cover, so the same 4-cell pattern was either tiny squares on a small swatch or four huge squares on a card-sized banner. Made buildFilamentBackground() return { backgroundImage, backgroundSize } with per-layer sizes — painted layers stay cover, the checkerboard gets a fixed 12px tile so the cell density stays consistent regardless of element size and clearly reads as a transparency indicator rather than a multi-colour stripe. Updated the three existing call sites (InventoryPage group banner + spool card, ColorSection preview) to spread the returned style object directly. 8 new frontend tests cover the four fixes: hard-split contract for Dual/Tri Color (3 tests + 1 regression guard that Dual ≠ Gradient for the same stops); Sparkle prominence (≥ 10 distinct radial-gradient layers in the rendered background); checkerboard density (last backgroundSize layer is a fixed pixel value, not cover); 4 hydration tests pinning the input restore path (fills when formData arrives via parent update, resyncs when the spool changes mid-form, doesn't clobber live user typing, clears when the new spool has no extra_colors).
  • Pending review card and the resulting archive name disagreed; .gcode.3mf filename suffix wasn't fully stripped (#1152 follow-up, reported by smandon) — Two distinct holes in the original #1152 fix surfaced when smandon retested on the daily build. (1) Suffix stripping was incomplete: Bambu Studio's "Send to printer" dialog typically writes files like Plate_1.gcode.3mf (a sliced gcode payload wrapped in a 3MF container), but the archive's display stem was computed via Path(name).stem, which only drops the last suffix and left the user staring at Plate_1.gcode in the archive UI. (2) The review card and the archive disagreed on what the print was called: the pending-uploads panel always rendered the raw FTP filename, while the eventual PrintArchive.print_name resolved from the 3MF's embedded title (or, with the toggle on filename, the filename stem). Net effect: the user saw Plate_1.gcode in the review card and Some Creator's Title in the archive grid for the same item, with no toggle that flipped both views in lockstep. Fix has three pieces: a new resolve_display_stem() helper in archive.py that strips .gcode.3mf / .3mf / .gcode (case-insensitive) so both the archive and the review-side normalisation produce the same canonical stem; a new PendingUpload.metadata_print_name column populated at FTP-receive time by peeking at the 3MF's embedded title (so /pending-uploads/ list calls don't have to reopen every 3MF on every render); and a new PendingUploadResponse.display_name computed field that mirrors archive_print's exact precedence — filename toggle: stripped stem; metadata toggle (default): cached title or stripped stem. Frontend's PendingUploadsPanel reads upload.display_name (with upload.filename as a defensive fallback for any pre-migration row), and the raw filename is exposed as a tooltip so users can still inspect what actually arrived over FTP. Migration is one idempotent ALTER TABLE pending_uploads ADD COLUMN metadata_print_name VARCHAR(255) (Postgres/SQLite-safe); existing pending rows have NULL there and gracefully fall back to filename-stem behaviour. 14 unit tests pin the stripping rules (Plate_1.gcode.3mf → Plate_1, mixed case, dots in the middle, edge .3mf-only / .gcode-only, full-path inputs); 6 integration tests pin the response contract (default toggle uses metadata title when present, falls back to stripped stem when absent, filename toggle overrides metadata, filename toggle still strips the double suffix, GET /{id} exposes the same field, whitespace-only metadata behaves like absent); 3 frontend tests pin the review card's render path (resolved name shown, fallback to filename when display_name is empty, raw filename available via tooltip). A sketch of the suffix-stripping rules follows this list.
  • SpoolBuddy SSH update fails with "permission denied for user spoolbuddy" after Bambuddy keypair rotation (reported during user testing) — Bambuddy's data dir at <DATA_DIR>/spoolbuddy/ssh/ can get recreated outside the daemon's control (volume remount, container recreate, fresh deploy), at which point get_or_create_keypair() generates a new ed25519 keypair. The SpoolBuddy daemon previously only fetched and deployed Bambuddy's public key at registration time (/devices/register), so any rotation after a successful registration left the device's ~/.ssh/authorized_keys pointing at a defunct public half — every "Update" click from the Bambuddy UI then failed with Connection closed by authenticating user spoolbuddy [preauth] until the daemon was restarted manually. Worse, every prior successful registration appended a fresh entry to authorized_keys without ever pruning the old one, so a typical device accumulated 5+ stale Bambuddy-tagged keys (each one a permanent backdoor for whichever Bambuddy keypair held the matching private half at the time it was deployed). Two-pronged fix: (1) the heartbeat response (HeartbeatResponse, routes/spoolbuddy.py:282) now carries the current ssh_public_key alongside the existing pending_command / calibration fields, so the daemon's heartbeat picks up a key rotation within one cycle instead of needing a service restart; the same try/except Exception: pass pattern as the registration response keeps a missing/unreadable backend key from breaking telemetry. (2) _deploy_ssh_key() in daemon/main.py now syncs rather than appends — it strips every line tagged bambuddy-spoolbuddy, writes the current key once, and is a no-op when already in sync (so it doesn't churn the file every heartbeat). User-managed entries (any line not tagged bambuddy-spoolbuddy) are preserved untouched. 5 new unit tests in spoolbuddy/tests/test_deploy_ssh_key.py (creates-when-missing → mode-600 file with the current key; pile-up-of-stale-keys → only current key remains, no growth; preserves-unrelated-user-keys → user's own SSH access untouched; idempotent-when-in-sync → no mtime change so heartbeat doesn't churn the file; swallows-write-errors → readonly-fs PermissionError doesn't crash the heartbeat loop). 2 new backend integration tests in test_spoolbuddy.py::TestDeviceEndpoints: test_heartbeat_returns_ssh_public_key (response carries the key on every heartbeat) and test_heartbeat_ssh_key_failure_does_not_break_heartbeat (backend key-read failure leaves ssh_public_key: None but the heartbeat still 200s). A sketch of the authorized_keys sync behaviour follows this list.
  • External-camera frames returned as black on go2rtc and other MJPEG sources (#1177, reported by nkm8) — _capture_mjpeg_frame returned the very first JPEG it found in the stream's bytes (backend/app/services/external_camera.py:282), but many MJPEG sources — go2rtc most notably, and several IP cameras — emit a "warm-up" frame immediately after the connection is accepted: usually the last keyframe held in the encoder, which is often black or stale until the encoder catches up to live content. Subsequent frames on the same connection are fine. The reporter saw it across snapshot UX, finish photos in notifications, and timelapse — every code path that opens a fresh capture connection (snapshot endpoint, [PHOTO-BG] finish photo, plate-detection CV, Obico ML inference, layer timelapse, Settings → Test). His own observation that go2rtc's /api/frame.jpeg (single-frame, internally already warmed) is never black while the first frame off /api/stream.mjpeg is, matched the hypothesis exactly. Support-bundle evidence was unambiguous: every black notification frame in his log was 11095 bytes (a pure-black 1280×720 JPEG encodes to ~10–15 KB on standard libjpeg quality settings), while every captured-after-warm-up frame from the same source was 30–45 KB. Fix: read past the first frame and return the second; if the connection closes / times out / hits the 5 MB buffer cap before a second frame ever arrives, fall back to the first so callers still get something (degrading slow / single-frame streams to None would regress every code path that relied on pre-fix behaviour). The inner loop now drains every complete frame already in the buffer before pulling the next chunk so high-FPS sources that pack multiple frames per chunk are handled correctly. The snapshot / rtsp / usb capture paths and the live-view streaming endpoint (generate_mjpeg_stream) are untouched. 7 new regression tests in test_external_camera.py::TestCaptureMjpegFrameWarmupSkip cover (a) two-frames-in-two-chunks → second returned, (b) two-frames-in-one-chunk → second returned, (c) frame split across chunk boundary → assembled correctly, (d) single-frame stream → first returned via fallback (no None regression), (e) timeout after first frame → first returned via fallback, (f) zero-frame stream → None, (g) non-200 status → None. Latency penalty: at most one frame interval (typically 50 ms – 1 s on a steady stream). A sketch of the warm-up-skip logic follows this list.
  • MakerWorld sidebar entry visible to every user regardless of group permissions (#1175) — Backend already enforced makerworld:view on every /makerworld/* route (backend/app/api/routes/makerworld.py:145, 157, 242, 406), the permission was correctly granted to the admin and standard-user role defaults (permissions.py:298, 364, 454), and the frontend Permission type union already included 'makerworld:view' | 'makerworld:import' (client.ts:2498) — but the sidebar's hand-maintained navPermissions map in Layout.tsx:278 had no entry for makerworld, so isHidden('makerworld') always returned false and the entry rendered for every authenticated user. Users without the permission saw the entry, clicked, and the page rendered while every API call inside it 403'd. Two-line fix: (1) Layout.tsx:278 — add makerworld: 'makerworld:view' to the map, matching every other sidebar entry's gating shape; (2) App.tsx:200 — wrap the route in <PermissionRoute permission="makerworld:view"> for defence in depth, so a user who knows the URL can no longer reach the page directly (matches the existing pattern on settings, groups/new, groups/:id/edit two lines below). 2 new Layout tests pin the contract: with auth enabled and a user lacking makerworld:view, the sidebar <a href="/makerworld"> link is absent (other links like /files still render); with the permission granted, the link renders.
  • Printer Info modal: serial-number and IP-address copy buttons silently did nothing on plain-HTTP LAN deployments (#1174, reported by BurntOutHylian) — PrinterInfoModal's CopyButton only tried navigator.clipboard.writeText(), which is gated by the secure-context requirement (HTTPS or localhost). On the typical Bambuddy deployment shape — bare-IP HTTP on the LAN — navigator.clipboard is undefined; the existing try/catch swallowed the resulting TypeError, the icon never flipped to the tick, and nothing landed on the user's clipboard. Fixed by adding the same off-screen-textarea + document.execCommand('copy') fallback that CameraTokensPage's plaintext-token modal already uses for plain-HTTP LAN deployments: gate on navigator.clipboard && window.isSecureContext, fall back to the legacy path otherwise, and surface the success-tick only when the copy actually landed (return early without flipping copied if execCommand('copy') returns false). The try/finally around the textarea guarantees DOM cleanup even when the browser throws on a restricted context. 3 new component tests in PrinterInfoModal.test.tsx cover (a) secure-context happy path uses navigator.clipboard.writeText, (b) plain-HTTP fallback path actually invokes execCommand('copy') and leaves no leaked textarea in the DOM, (c) finally cleanup removes the textarea even when execCommand throws synthetically. Thanks to BurntOutHylian for the precise file/line pointer in the report.
  • Queue auto-dispatched the next print onto a fouled bed after an aborted or cancelled print (#1171, reported by tom5677) — When a print ended with status aborted (printer self-abort, or a user stopping the print on the printer's own touchscreen) or cancelled (user stopping the print via the Bambuddy queue UI), the plate-clear gate added in #961 was not raised — only completed and failed triggered it (backend/app/main.py:2660). Result: the queue scheduler dispatched the next pending item ~2 seconds after the abort, with the previous print's material still on the bed. The reporter saw two prints (P1P + P1S) auto-start onto fouled beds within seconds of each other after touchscreen-aborts, and explicitly flagged the risk of damage to the printer; a third printer (his second P1S) behaved correctly because its previous print had ended completed. The original code's comment ("user-cancelled prints don't require a plate-clear ack — nothing printed on the bed") only holds if you cancel right at layer 1; cancelling a 12-hour print at hour 11 leaves a fouled bed too. Fix: the gate is now raised for every terminal status — completed, failed, aborted, cancelled — matching the safety contract that the user must acknowledge the bed is clear before any next queued print starts. The gate is user-clearable on the Printers page, so worst case for a layer-1 cancel the user clicks "Clear Plate" once. Touchscreen-aborts are particularly important to gate because Bambuddy's "user stopped via UI" override (_user_stopped_printers, under which aborted is mapped to cancelled) only fires when the user stops via the Bambuddy queue; a touchscreen-stop reports aborted straight through. Regression coverage in test_print_lifecycle.py::TestPlateClearGate: parametrised across all four terminal statuses (asserts set_awaiting_plate_clear(printer_id, True) is called for each), plus a defence-in-depth test that an unrecognised future status string never silently raises the gate. A sketch of the widened gate check follows this list.
  • Printer card always shows the first plate's thumbnail when printing a multi-plate 3MF (#1166, reported by smandon) — On printers running firmware that drops the plate path from print.gcode_file (the reporter's case: P1S 01.10.00.00, but the same shape appears on other firmware revisions), the printer reports gcode_file: MyModel.3mf instead of gcode_file: /Metadata/plate_4.gcode. The /printers/{id}/cover route's regex (plate_(\d+)\.gcode) found nothing in the bare .3mf filename, defaulted to plate 1, and the printer card showed Metadata/plate_1.png from the 3MF — even though the user dispatched plate 4. Same problem hit current_plate_id on the status response (printer card detail row showed plate 1). Two-pronged fix on a precedence ladder: (1) Bambuddy now records the plate it dispatched: start_print() writes (dispatched_plate_id, dispatched_subtask) onto PrinterState at publish time, and a new resolve_plate_id(state) helper prefers that record over the gcode_file regex when dispatched_subtask == state.subtask_name (the subtask check rejects stale entries from a prior Bambuddy-dispatched print bleeding into a Studio-direct dispatch). (2) After the 3MF lands on disk, the cover route scans the zip for a unique Metadata/plate_*.gcode entry: per-plate archives sliced separately in Bambu Studio bundle thumbnails for every plate but only the active plate's gcode, so a single match unambiguously identifies the plate even when no Bambuddy dispatch exists (Studio-direct flow). Final fallback is plate 1, unchanged. The cover-byte cache key was also simplified — plate_num was removed from the key now that resolution is late-bound; clear_cover_cache() already runs on every print start, so different plates of the same project always re-fetch a fresh thumbnail. Coverage: 5 unit tests in test_printer_manager.py::TestResolvePlateId (dispatch precedence, stale-subtask guard, gcode regex fallback, default-1 path, missing-subtask guard), 4 unit tests in test_bambu_mqtt.py::TestStartPrintRecordsDispatchedPlate (dispatch record set/cleared/overwritten/skipped on disconnect), 2 integration tests in test_printers_api.py (dispatch wins over plate-1 default; 3MF-scan fallback for per-plate archive without dispatch). Studio-direct multi-plate prints (no dispatch record AND multiple plate gcodes in the 3MF) still default to plate 1 — matches the firmware's own ambiguity, not regressed by this change. A sketch of the plate-resolution precedence follows this list.
  • AMS slot configuration intermittently fails to reach the printer after several configs in a row (#1164, reported by RosdasHH) — Configuring AMS slots a handful of times (the reporter saw it almost every 6th change) would silently stop reaching the printer; ~1 minute later the filament colours on the printer would briefly jump between slots, then settle. Root cause was the zombie-session watchdog at bambu_mqtt.py:861 introduced for #887. When an ams_filament_setting response took >10 s (normal under load — concurrent K-profile fetches, busy printer, network jitter) the watchdog incremented an _ams_cmd_unanswered counter and zeroed _last_ams_cmd_time so it wouldn't re-trigger on the next status push. The bug: the response handler that reset the counter was guarded by and self._last_ams_cmd_time > 0 — so when the late response did arrive (after the watchdog had already zeroed the timer), the counter stayed armed at 1. The next slow response on any ams_filament_setting command — possibly minutes or hours later, on an entirely unrelated config attempt — would take the counter to 2 and trigger force_reconnect_stale_session(). The user-visible symptoms match exactly: configs stop landing (because MQTT reconnects mid-publish, dropping the in-flight command and surfacing as Cannot set AMS filament setting: not connected if the user retries during the ~1 min reconnect window), then the queued state finally lands when the reconnect completes (the "filament colours jumping around" the reporter described). Fix is to drop the _last_ams_cmd_time > 0 guard: any ams_filament_setting response — late or not — proves the channel is alive, so the counter must reset. Watchdog still trips on a real zombie session (no responses at all for two consecutive >10 s windows). Regression test in test_bambu_mqtt.py::TestZombieSessionDetection::test_late_response_after_watchdog_clears_counter_issue_1164 simulates the exact sequence (watchdog fires → late response arrives → second slow response on a fresh command) and asserts the counter resets to 0 on the late response and the second command doesn't tip the threshold to 2. The other 10 zombie-detection tests still pass unchanged. A sketch of the counter-reset change follows this list.
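
For the iframe-embedding fix (#1191): a sketch of the TRUSTED_FRAME_ORIGINS parsing rules and the resulting frame-ancestors value. Helper names and the exact validation order are illustrative, not the shipped middleware code.

```python
import logging
from urllib.parse import urlsplit

log = logging.getLogger(__name__)

def parse_trusted_frame_origins(raw: str) -> list[str]:
    origins: list[str] = []
    for entry in (part.strip() for part in raw.split(",")):
        if not entry:
            continue                        # empty segments are ignored
        parts = urlsplit(entry.rstrip("/")) # a trailing slash is tolerated
        if (parts.scheme not in ("http", "https") or not parts.netloc
                or parts.path or parts.query or parts.fragment or "*" in parts.netloc):
            log.warning("Ignoring invalid TRUSTED_FRAME_ORIGINS entry: %r", entry)
            continue                        # one bad entry is dropped, the rest survive
        origins.append(f"{parts.scheme}://{parts.netloc}")
    return origins

def frame_ancestors(origins: list[str]) -> str:
    # Empty allowlist keeps the strict default; otherwise 'self' plus each origin.
    return "frame-ancestors " + ("'none'" if not origins else " ".join(["'self'", *origins]))

assert frame_ancestors(parse_trusted_frame_origins("http://192.168.1.10:8123, ftp://x")) \
       == "frame-ancestors 'self' http://192.168.1.10:8123"
```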
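
For the Virtual Printer colour-match fix (#1188): a sketch of the VP queue-write behaviour, assuming a simplified per-slot requirement dict. build_queue_filament_fields is a hypothetical helper standing in for the logic inside _add_to_print_queue.

```python
from typing import Optional

def build_queue_filament_fields(
    requirements: list[dict], queue_force_color_match: bool
) -> tuple[Optional[list[str]], Optional[list[dict]]]:
    """requirements: per-slot dicts such as {"slot": 1, "type": "PLA", "color": "#FFFFFF"}."""
    if not requirements:          # unparseable / missing 3MF metadata
        return None, None         # scheduler keeps model-only matching (pre-fix behaviour)
    required_types = sorted({slot["type"] for slot in requirements})
    overrides = None
    if queue_force_color_match:   # the per-VP opt-in toggle, default off
        overrides = [
            {"slot": s["slot"], "color": s["color"], "force_color_match": True}
            for s in requirements
        ]
    return required_types, overrides

types, overrides = build_queue_filament_fields(
    [{"slot": 1, "type": "PLA", "color": "#FFFFFF"}], queue_force_color_match=True
)
assert types == ["PLA"] and overrides[0]["force_color_match"] is True
```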
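
For the API-key slicing fix (#1182 follow-up): a sketch of the permissive resolve_api_key_cloud_owner contract (returns the owner or None, never raises), using placeholder dataclasses instead of bambuddy's SQLAlchemy models.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    id: int
    cloud_token: Optional[str] = None

@dataclass
class ApiKey:
    user_id: Optional[int]
    can_access_cloud: bool
    owner: Optional[User] = None

def resolve_api_key_cloud_owner(api_key: Optional[ApiKey]) -> Optional[User]:
    """Permissive: returns the owner or None, never raises, so it is safe on
    routes like POST /library/files/{id}/slice where local presets must keep working."""
    if api_key is None:                 # JWT or anonymous caller: no-op
        return None
    if api_key.user_id is None:         # legacy ownerless key
        return None
    if not api_key.can_access_cloud:    # owned, but cloud scope not enabled
        return None
    return api_key.owner                # owner's cloud_token resolves cloud presets

owner = User(id=1, cloud_token="token")
assert resolve_api_key_cloud_owner(ApiKey(user_id=1, can_access_cloud=True, owner=owner)) is owner
assert resolve_api_key_cloud_owner(ApiKey(user_id=1, can_access_cloud=False, owner=owner)) is None
```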
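
For the filename-suffix fix (#1152 follow-up): a sketch of the resolve_display_stem() stripping rules. The regex is an assumed equivalent, not necessarily the shipped implementation.

```python
import re
from pathlib import Path

_SUFFIX_RE = re.compile(r"(\.gcode\.3mf|\.3mf|\.gcode)$", re.IGNORECASE)

def resolve_display_stem(name: str) -> str:
    # Strip .gcode.3mf / .3mf / .gcode (case-insensitive) from the basename only.
    return _SUFFIX_RE.sub("", Path(name).name)

assert resolve_display_stem("Plate_1.gcode.3mf") == "Plate_1"
assert resolve_display_stem("/incoming/Plate_2.GCODE.3MF") == "Plate_2"
assert resolve_display_stem("my.model.v2.3mf") == "my.model.v2"   # dots in the middle survive
```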
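
For the SpoolBuddy SSH key rotation fix: a sketch of the sync-not-append behaviour described for _deploy_ssh_key(). Path handling, the tag text, and error handling are simplified, and sync_authorized_keys is an illustrative name.

```python
from pathlib import Path

TAG = "bambuddy-spoolbuddy"   # marker on every key Bambuddy deploys

def sync_authorized_keys(path: Path, current_key: str) -> bool:
    """Return True if the file was rewritten, False when already in sync."""
    tagged_line = f"{current_key.strip()} {TAG}"
    existing = path.read_text().splitlines() if path.exists() else []
    user_lines = [line for line in existing if TAG not in line]   # user-managed entries survive
    tagged_lines = [line for line in existing if TAG in line]
    if tagged_lines == [tagged_line]:
        return False              # in sync: don't churn the file's mtime every heartbeat
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text("\n".join(user_lines + [tagged_line]) + "\n")  # exactly one Bambuddy key
    path.chmod(0o600)
    return True
```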
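
For the black-frame fix (#1177): a sketch of the warm-up-skip logic, i.e. prefer the second complete JPEG on a fresh MJPEG connection and fall back to the first. A plain chunk iterator stands in for the real chunked HTTP response handling in _capture_mjpeg_frame.

```python
from typing import Iterable, Optional

SOI, EOI = b"\xff\xd8", b"\xff\xd9"     # JPEG start / end markers
MAX_BUFFER = 5 * 1024 * 1024            # give up past the 5 MB buffer cap

def capture_warm_frame(chunks: Iterable[bytes]) -> Optional[bytes]:
    buffer = b""
    frames: list[bytes] = []
    for chunk in chunks:
        buffer += chunk
        # Drain every complete frame already buffered before reading more, so
        # high-FPS sources that pack several frames per chunk are handled.
        while True:
            start = buffer.find(SOI)
            end = buffer.find(EOI, start + 2) if start != -1 else -1
            if start == -1 or end == -1:
                break
            frames.append(buffer[start:end + 2])
            buffer = buffer[end + 2:]
            if len(frames) >= 2:
                return frames[1]          # skip the encoder's stale warm-up frame
        if len(buffer) > MAX_BUFFER:
            break
    return frames[0] if frames else None  # single-frame / timed-out stream: old behaviour

frame = b"\xff\xd8...jpeg bytes...\xff\xd9"
assert capture_warm_frame([frame, frame]) == frame   # second frame returned
assert capture_warm_frame([frame]) == frame          # fallback to the first
```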
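
For the plate-clear gate fix (#1171): a trivial sketch of the widened terminal-status check; names are illustrative.

```python
TERMINAL_STATUSES = {"completed", "failed", "aborted", "cancelled"}

def should_raise_plate_clear_gate(final_status: str) -> bool:
    # Every terminal status now requires a "plate clear" acknowledgement before
    # the scheduler may dispatch the next queued item; unknown or future
    # statuses never silently raise the gate.
    return final_status in TERMINAL_STATUSES

assert should_raise_plate_clear_gate("aborted")
assert not should_raise_plate_clear_gate("preparing")
```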
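
For the multi-plate thumbnail fix (#1166): a sketch of the resolve_plate_id() precedence ladder, with the 3MF zip-scan step omitted. The PrinterState fields mirror the entry; everything else is assumed.

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class PrinterState:
    gcode_file: str = ""
    subtask_name: str = ""
    dispatched_plate_id: Optional[int] = None
    dispatched_subtask: Optional[str] = None

def resolve_plate_id(state: PrinterState) -> int:
    # 1) Prefer the plate Bambuddy itself dispatched, but only when the recorded
    #    subtask matches the live one (rejects stale records from a prior print).
    if state.dispatched_plate_id is not None and state.dispatched_subtask == state.subtask_name:
        return state.dispatched_plate_id
    # 2) Fall back to the plate path in gcode_file when the firmware reports it.
    match = re.search(r"plate_(\d+)\.gcode", state.gcode_file)
    if match:
        return int(match.group(1))
    # 3) Final fallback: plate 1 (bare .3mf filenames with no dispatch record).
    return 1

assert resolve_plate_id(PrinterState(gcode_file="MyModel.3mf", subtask_name="job-42",
                                     dispatched_plate_id=4, dispatched_subtask="job-42")) == 4
assert resolve_plate_id(PrinterState(gcode_file="/Metadata/plate_4.gcode")) == 4
```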
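
For the AMS zombie-watchdog fix (#1164): a sketch of the counter bookkeeping before and after the change. The class and method names are illustrative; the point is that the response handler now resets the counter unconditionally.

```python
class ZombieWatchdog:
    THRESHOLD = 2                      # two unanswered windows trigger a forced reconnect

    def __init__(self) -> None:
        self._ams_cmd_unanswered = 0
        self._last_ams_cmd_time = 0.0

    def on_command_sent(self, now: float) -> None:
        self._last_ams_cmd_time = now

    def on_status_push(self, now: float) -> bool:
        """Return True when the session looks zombied and should be reconnected."""
        if self._last_ams_cmd_time and now - self._last_ams_cmd_time > 10:
            self._ams_cmd_unanswered += 1
            self._last_ams_cmd_time = 0.0     # avoid re-triggering on the next push
        return self._ams_cmd_unanswered >= self.THRESHOLD

    def on_ams_response(self) -> None:
        # Pre-fix this reset was guarded by `self._last_ams_cmd_time > 0`, so a
        # late response (arriving after the watchdog zeroed the timer) left the
        # counter armed at 1; any response now proves the channel is alive.
        self._ams_cmd_unanswered = 0
        self._last_ams_cmd_time = 0.0

w = ZombieWatchdog()
w.on_command_sent(1.0)
w.on_status_push(20.0)       # slow response window: counter = 1
w.on_ams_response()          # late response arrives and clears the counter
w.on_command_sent(100.0)
assert w.on_status_push(115.0) is False   # a second slow command no longer trips the reconnect
```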
