**Note:** This is a daily beta build (2026-05-02). It contains the latest fixes and improvements but may have undiscovered issues.
**Docker users:** update by pulling the new image:
`docker pull ghcr.io/maziggy/bambuddy:daily`
or
`docker pull maziggy/bambuddy:daily`
**Tip:** Use [Watchtower](https://containrrr.dev/watchtower/) to automatically update when new daily builds are pushed.
### Added
- AMS slot Load / Unload from the printer card (#891, reported by NNeerr00, +1 from cadtoolbox) — The MQTT primitives for "load filament from a tray" and "unload the currently loaded tray" already existed in `bambu_mqtt.py` (reverse-engineered from BambuStudio captures, including the H2D dual-extruder right-external case captured fresh during this work) but were unused — there was no HTTP route and no UI. Net effect: every Load / Unload had to happen on the printer touchscreen, and external-spool users on dual-nozzle H2D had no way to drive Ext-R from the desktop at all. Backend: new `POST /printers/{id}/ams/load?tray_id={int}` and `POST /printers/{id}/ams/unload`, both gated on `Permission.PRINTERS_CONTROL`. The load route validates `tray_id ∈ {0..15, 254, 255}` (AMS slots, single-external/Ext-L, Ext-R respectively) and returns a human-readable target in the success message ("AMS 0 slot 1", "external spool", "Ext-R") so the UI toast tells the user which spool the printer is now feeding from. MQTT primitive update: `ams_load_filament` gains a third encoding branch for `tray_id=255` matching the BambuStudio capture verbatim — `ams_id=255`, `slot_id=0` (the right-extruder index, not a slot index — Bambu's load command on dual-extruder externals encodes the destination extruder, not the source slot), `target=255`, and `curr_temp = tar_temp =` right-nozzle temp (read from `state.temperatures["nozzle_2"]`, falling back to 215 °C if the right nozzle is cold or unknown — the printer rejects nonsensical temps, so a warm fallback is safer than `-1`). The existing `tray_id=254` branch is preserved verbatim (`slot_id=254`, curr/tar `-1`) since that came from a single-extruder capture and is known to work, so there is no risk of regression on existing single-external setups (a hedged sketch of the tray_id handling follows this list). UI: the existing AMS slot popover (the one with "Re-read RFID") gains two new entries — "Load" (posts `tray_id = ams.id * 4 + slotIdx`) and "Unload" (no params, global on the currently-loaded slot). The external spool slot — which had no popover at all before — gets one with the same Load + Unload entries, and on dual-nozzle H2D each external slot (Ext-L `tray_id=254`, Ext-R `tray_id=255`) drives its own extruder. The menu is hidden while `state === 'RUNNING'` (parallels the existing RFID re-read gating). i18n: `printers.ams.load`, `printers.ams.unload`, plus four new toast strings (`loadInitiated`, `unloadInitiated`, `failedToLoad`, `failedToUnload`) added to all 8 locales — English and German fully translated, the other 6 locales seeded with English copy pending native translation (matches the project's existing flow for newly added user-facing features). New tests pin the contract: 5 unit tests in `test_bambu_mqtt.py::TestAmsLoadFilamentEncoding` (AMS slot encoding, Ext-L preserves the legacy capture, Ext-R uses the new captured shape with the actual right-nozzle temp, Ext-R falls back to 215 °C when cold, disconnected client doesn't publish); 11 integration tests in `test_printers_api.py::TestAMSLoadUnloadAPI` (load: invalid tray_id 400, not-found 404, not-connected 400, AMS slot success with the derived `ams_id*4+slot` math, Ext-L success, Ext-R success, MQTT failure 500; unload: not-found, not-connected, success, MQTT failure 500); 4 frontend tests in `PrintersPageAmsLoadUnload.test.tsx` (Load posts the right tray_id, Unload posts with no params, menu hidden while RUNNING, external spool's `tray_id=254` round-trips through the route).
- API keys can read Bambu Cloud presets on the owner's behalf (#1182, reported by turulix) — Tim is building a fully automated headless slicing pipeline against Bambuddy's API and hit the wall flagged in the previous round of cloud-auth work (#665):
`/cloud/*` routes resolve `cloud_token` per-user from `User.cloud_token`, but the auth gate (`require_permission_if_auth_enabled`, `auth.py:856`) returned `None` for API-keyed requests, so the route fell back to the global Settings-table token, which only carries a value in auth-disabled deployments. Net effect on auth-enabled deployments: API keys reached the gate just fine, then `/cloud/filaments` always saw `user=None`, called `get_stored_token(db, None)` against an empty Settings table, and returned 401 / empty results — no path to read the slicer presets, filament catalogue, or device list that a CLI workflow needs. The data model treated API keys as standalone tokens with no owner (`APIKey` had `id`, `name`, `key_hash`, scope flags, and `printer_ids` — no `user_id`), so even if the gate wanted to delegate the cloud lookup, there was no User to delegate to. The fix: make API keys carry an owner, route `/cloud/*` lookups through that owner, and gate the new capability behind an explicit opt-in scope so existing automation doesn't gain cloud-read access on upgrade. Concretely: (1) `APIKey` gains `user_id` (FK to `users.id`, ON DELETE CASCADE — Postgres enforces it, SQLite gets an explicit `DELETE FROM api_keys WHERE user_id = ?` in the user-delete route since SQLite ships with FK enforcement off; the project's existing pattern at `users.py:397-406` for `created_by_id` cleanup) and `can_access_cloud` (BOOLEAN DEFAULT 0 — opt-in, never set on legacy rows). (2) The auth gate now returns the owner User when it validates an API key with `user_id` set, so `/cloud/*` routes naturally resolve `user.cloud_token` the same way they do for JWT-authed sessions. Permission semantics are preserved — API keys still bypass the per-route permission check (their scopes live on the row itself); the User return is only so cloud-aware routes can read per-user state. Legacy ownerless keys (`user_id IS NULL`) keep returning None, stay anonymous, and continue working against every non-cloud route exactly as before. (3) A router-level dependency on the `/cloud/*` `APIRouter` enforces three independent fences for API-keyed callers: `user_id IS NOT NULL` (legacy keys → 401 with "recreate it from Settings → API Keys" — an explicit recreate path rather than silently degrading), `can_access_cloud=True` (otherwise 403 with "Enable 'Allow cloud access' on the key"), and `build_authenticated_cloud` returning a service (otherwise 401 with the existing token-not-set error — unchanged for the JWT flow); a hedged sketch of these fences follows this list. The router-level dep duplicates the API-key validation done by the regular auth gate (router-level deps run before route-level deps in FastAPI, so `request.state` isn't populated yet) — the cost is one extra `SELECT FROM api_keys` per cloud request, bounded and cheap with the `key_prefix` index. (4) The create route stamps `user_id = current_user.id` from the creator and rejects `can_access_cloud=True` when auth is disabled (no per-user `cloud_token` storage exists in that mode — fail loudly at create time rather than silently producing a non-functional key). The PATCH route rejects flipping `can_access_cloud` to True on a legacy ownerless key for the same reason — force recreate. (5) `APIKeyResponse` exposes `user_id` so the UI can show ownership at a glance: a "Cloud" badge for cloud-enabled keys and a "Legacy" badge with a hover tooltip ("Created before per-user ownership; recreate to use cloud access") for ownerless rows. The form gains an "Allow cloud access" checkbox, default off.
Migration: two idempotent `ALTER TABLE api_keys ADD COLUMN` statements (`user_id INTEGER REFERENCES users(id) ON DELETE CASCADE` and `can_access_cloud BOOLEAN DEFAULT 0`) plus an index on `user_id` for the auth-gate's owner→keys lookup that runs on every API-keyed request. i18n: 5 new keys (`settings.cloudAccess`, `settings.cloudAccessDescription`, `settings.cloudBadge`, `settings.legacyKey`, `settings.legacyKeyTooltip`) added to all 8 locales — English and German fully translated, the other 6 locales seeded with English copy pending native translation (matches the project's existing flow for newly added user-facing features). 9 backend integration tests in `test_api_key_cloud_access.py`: create stamps owner + cloud flag, defaults off when not asked for, rejected when auth is disabled (no per-user storage), PATCH rejected on legacy keys; cloud router rejects legacy keys with the recreate copy, rejects owned-but-no-cloud-flag keys with the enable-cloud-access copy, lets owned-and-flagged keys through with the owner's `cloud_token` in the response, JWT callers unaffected (the gate is a no-op for non-API-keyed requests); user-delete CASCADEs the API keys via the explicit DELETE in the route. 2 frontend SettingsPage tests pin the badge rendering matrix (Cloud badge present on `can_access_cloud=true`, Legacy badge present on `user_id=null`, neither rendered on a normal owned non-cloud key) and the create-form contract (toggling "Allow cloud access" results in `can_access_cloud=true` in the POST body). Permission semantics for the new fence are the only behavioural change for existing API keys: keys created before this release become "legacy" rows and are rejected at `/cloud/*` with the recreate message; every other endpoint they were used against — queue, status, control — is untouched.
- Home Assistant addon detection — Settings → Updates and the in-app update banner now defer to the HA Supervisor (#1167, reported by Spegeli) — Bambuddy already shipped
`HA_URL` / `HA_TOKEN` env-var support specifically labelled "for HA Add-on deployments" (#283), and a community-maintained HA addon (hobbypunk90/homeassistant-addon-bambuddy) exists upstream, so an HA-supervised installation is a real first-class deployment shape. Until now, though, the update UI didn't know about it: HA addon users got the same "Update available!" banner as everyone else and, if they clicked through to Settings, saw the docker-compose snippet (`docker compose pull && docker compose up -d`), which they cannot run from inside an HA addon container — that's the Supervisor's job. Detection uses the canonical signal: the HA Supervisor injects `SUPERVISOR_TOKEN` into every addon container, and that variable is not set in any other environment. A new `_is_ha_addon()` helper in `backend/app/api/routes/updates.py` flips a request-level boolean which `/updates/check` surfaces as `is_ha_addon: bool` plus an extended `update_method: 'git' | 'docker' | 'ha_addon'` enum. The HA check runs before the Docker check on `/updates/apply` because HA addons are Docker containers — checking Docker first would mis-classify them and serve the wrong message; the response also keeps `is_docker: true` alongside `is_ha_addon: true` so older frontend bundles still hit a managed-deployment branch (degrading to the Docker UX) instead of rendering an in-app Install button that can't work (a hedged sketch of the detection order follows this list). The frontend branches identically: `SettingsPage.tsx`'s update card checks `is_ha_addon` first and renders "Updates are managed by the Home Assistant Supervisor. Open Settings → Add-ons → Bambuddy in Home Assistant to install the new version." in place of the docker-compose hint; `Layout.tsx`'s update banner is suppressed entirely for HA addons, since the HA Supervisor's own update notification already surfaces the new version natively in the HA UI and a duplicate Bambuddy banner would just be noise that links to a page that says "go to HA". Plain Docker deployments are unaffected — the existing docker-compose hint and the in-app banner still render the same way they did. Localised across all 8 UI languages (en/de/fr/it/ja/pt-BR/zh-CN/zh-TW) with full translations of the new `settings.updateViaHomeAssistant` string. New tests pin the contract: 3 backend unit tests for `_is_ha_addon()` (env var present → true, absent → false, empty string treated as unset to guard against shells that export it empty); 1 backend integration test for the HA-precedes-Docker rejection on `/updates/apply` (asserts the message says "Home Assistant" and not "Docker Compose"); 2 backend integration tests for `/updates/check` covering the HA-addon branch (`update_method == "ha_addon"`, both flags true) and the plain-Docker branch (`is_ha_addon: false`, `update_method == "docker"`); 2 frontend SettingsPage tests pin the mutually exclusive UI rendering (HA branch shows the HA copy and not the docker-compose snippet; Docker branch shows the snippet and not the HA copy; neither shows the Install button); and 2 frontend Layout tests pin the banner suppression for HA and its retention for plain Docker.
- OIDC auto-created users now get readable usernames and land in a configurable group (#1173) — Two improvements to the OIDC auto-create flow: (1) Username derivation: Bambuddy now derives the username from
`preferred_username`, then `name`, before falling back to the opaque `provider_sub[:30]`. Each candidate is sanitized independently — alphanumeric plus `.` / `-` / `_`, whitespace collapsed, deduplication suffix appended on collision — so a value that strips to empty (e.g. `"!!!"`) correctly falls through to the next option rather than silently producing `"oidcuser"` (a hedged sketch of the fallthrough follows this list). (2) Default group: each OIDC provider gains a `default_group_id` field. When set, auto-created users are placed in that group; when unset, the existing "Viewers" fallback is preserved, so behaviour is unchanged for existing deployments. The column is nullable with ON DELETE SET NULL; SQLite does not enforce FK constraints here, so a deleted configured group falls through to Viewers at runtime. `default_group_id` is validated on create/update (422 on a non-existent group) and exposed in the OIDC settings form as a group dropdown. Limitation: to clear a configured default group, delete the group or select a different one — an explicit reset-to-null is not currently supported.
- Filament Track Switch (FTS) support — print modal filament dropdown is no longer empty when an X2D / H2D has the FTS accessory installed (#1162, reported by mkavalecz) — When the FTS accessory is installed, the printer's MQTT changes one nibble of the per-AMS
`info` bitmask: bits 8-11 flip from a fixed extruder ID (0x0 / 0x1) to `0xE` ("uninitialized"), because the AMS is no longer wired to a single nozzle — the FTS dynamically routes any slot to either extruder. Bambuddy's MQTT parser already skipped 0xE entries when building `ams_extruder_map` (matching BambuStudio's reading for boot-time transient state), so with the FTS installed the map ended up empty and the print modal's filament dropdown — which filters by `extruderId === nozzle_id` to prevent cross-nozzle assignment ("position of left hotend is abnormal" failures) — filtered out every loaded slot. Net effect: an empty Filament Mapping dropdown on every dual-nozzle print with the FTS, even when the AMS was fully loaded with the right material. Detection comes from a new MQTT field — `print.device.fila_switch` — which is non-null only when the accessory is installed; it carries the routing topology as two arrays: `in[track]` = currently fed slot (-1 = empty) and `out[track]` = extruder the track terminates at. The fix surfaces this through a new `FilaSwitchState` dataclass on `PrinterState` (`installed`, `in_slots`, `out_extruders`, `stat`, `info`) and the equivalent `FilaSwitchResponse` Pydantic schema on the `GET /printers/{id}/status` route (a hedged parsing sketch follows this list). The frontend (`useFilamentMapping.ts` + `FilamentMapping.tsx`) skips the per-extruder filter when `printerStatus.fila_switch?.installed === true` so any compatible AMS slot can satisfy any nozzle's filament requirement, since the FTS handles the routing. Slots currently fed into a track also get a routing badge in the dropdown — `[L]` or `[R]` — so the user can tell at a glance which slot the FTS is currently routing where (idle slots get no badge: they can be routed to either extruder on demand). The hard "no cross-nozzle assignment" filter on real dual-nozzle printers without the FTS stays untouched (it still trips the same way it always has — `fila_switch == null` keeps the existing behaviour). 4 backend tests in `test_bambu_mqtt.py::TestFilamentTrackSwitchDetection` (default not installed, detect from MQTT using the reporter's bundle, no fila_switch field stays not-installed, missing in/out arrays don't crash) and 2 frontend tests in `useFilamentMapping.test.ts` (FTS-active drops the nozzle filter; explicit `fila_switch: null` keeps the filter applied). Upstream fila_switch payloads with anything other than the documented shape are tolerated — `installed` flips on the presence of the field, the routing arrays default to empty lists if missing, and the dropdown skips the badge for slots not currently in `in_slots`.
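
The tray_id handling described in the AMS Load / Unload entry above can be summarised in a short sketch. This is a minimal illustration under stated assumptions, not the project's code: the helper names and any payload fields beyond those quoted in the entry are guesses.

```python
# Hedged sketch of the tray_id validation and the three MQTT encoding branches
# described above. Helper names and any payload fields not quoted in the entry
# are assumptions, not the project's actual implementation.
from typing import Optional

VALID_TRAYS = set(range(16)) | {254, 255}   # AMS slots 0..15, Ext-L/single external, Ext-R


def describe_target(tray_id: int) -> str:
    """Human-readable target used in the success toast ("AMS 0 slot 1", ...)."""
    if tray_id == 254:
        return "external spool"
    if tray_id == 255:
        return "Ext-R"
    return f"AMS {tray_id // 4} slot {tray_id % 4 + 1}"


def build_load_payload(tray_id: int, right_nozzle_temp: Optional[float]) -> dict:
    """Encode the 'load filament' command for the three tray families."""
    if tray_id not in VALID_TRAYS:
        raise ValueError(f"invalid tray_id {tray_id}")   # the route answers 400 here
    if tray_id == 255:
        # Right external on a dual-extruder H2D: slot_id carries the extruder
        # index, and a plausible temperature is required, so fall back to
        # 215 °C when the right nozzle is cold or unknown.
        temp = right_nozzle_temp if right_nozzle_temp and right_nozzle_temp > 0 else 215
        return {"ams_id": 255, "slot_id": 0, "target": 255,
                "curr_temp": temp, "tar_temp": temp}
    if tray_id == 254:
        # Legacy single-external shape, preserved verbatim (other fields elided).
        return {"slot_id": 254, "curr_temp": -1, "tar_temp": -1}
    # Regular AMS slot: the UI posts tray_id = ams.id * 4 + slotIdx.
    return {"ams_id": tray_id // 4, "slot_id": tray_id % 4}


assert describe_target(0) == "AMS 0 slot 1"
assert build_load_payload(255, None)["tar_temp"] == 215
```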
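
The three fences the `/cloud/*` gate applies to API-keyed callers (from the API-key cloud access entry above) reduce to a small decision function. The sketch below is hedged: the class shapes, the error type, and the exact messages are illustrative assumptions.

```python
# Hedged sketch of the /cloud/* gate's behaviour for API-keyed callers, per the
# entry above. The dataclass shapes, error type, and messages are assumptions.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ApiKey:
    user_id: Optional[int]        # None on legacy, pre-ownership rows
    can_access_cloud: bool        # opt-in scope, default off


class CloudGateError(Exception):
    def __init__(self, status: int, detail: str):
        super().__init__(detail)
        self.status = status


def resolve_cloud_owner(api_key: Optional[ApiKey]) -> Optional[int]:
    """Return the owning user id for a cloud-capable key; None for JWT callers."""
    if api_key is None:
        return None                    # JWT / anonymous request: the gate is a no-op
    if api_key.user_id is None:        # fence 1: legacy ownerless key
        raise CloudGateError(401, "Recreate the key from Settings → API Keys.")
    if not api_key.can_access_cloud:   # fence 2: owner set, cloud scope missing
        raise CloudGateError(403, "Enable 'Allow cloud access' on the key.")
    return api_key.user_id             # fence 3 (a stored cloud_token) is checked downstream
```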
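
The detection order from the Home Assistant addon entry above is easiest to see as two tiny functions. The `_is_ha_addon` name matches the changelog; the surrounding `detect_update_method` helper is an assumption added purely for illustration.

```python
# Hedged sketch of the SUPERVISOR_TOKEN detection and the HA-before-Docker
# ordering described above; detect_update_method is an illustrative name only.
import os


def _is_ha_addon() -> bool:
    # The HA Supervisor injects SUPERVISOR_TOKEN into every addon container.
    # An empty export is treated as unset so a stray `export SUPERVISOR_TOKEN=`
    # cannot misclassify a plain Docker deployment.
    return bool(os.environ.get("SUPERVISOR_TOKEN", "").strip())


def detect_update_method(is_docker: bool) -> str:
    # HA addons are Docker containers, so the HA check has to run first.
    if _is_ha_addon():
        return "ha_addon"
    if is_docker:
        return "docker"
    return "git"
```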
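
The username fallthrough from the OIDC entry above, sketched under stated assumptions: the sanitiser's exact character handling and the collision-suffix format are guesses that only mirror the rules quoted in the entry.

```python
# Hedged sketch of the OIDC username derivation described above; the sanitiser
# details and the collision-suffix format are assumptions.
import re


def _sanitize(candidate: str) -> str:
    collapsed = re.sub(r"\s+", " ", candidate.strip())     # collapse whitespace
    cleaned = re.sub(r"[^A-Za-z0-9._\- ]", "", collapsed)   # keep alnum plus . - _
    return cleaned.strip().replace(" ", "_")


def derive_username(claims: dict, provider_sub: str, taken: set[str]) -> str:
    for candidate in (claims.get("preferred_username"), claims.get("name"),
                      provider_sub[:30]):
        if not candidate:
            continue
        base = _sanitize(candidate)
        if not base:              # e.g. "!!!" strips to empty: try the next claim
            continue
        name, suffix = base, 1
        while name in taken:      # deduplication suffix on collision
            suffix += 1
            name = f"{base}{suffix}"
        return name
    return "oidcuser"             # assumed last-ditch fallback


# "!!!" falls through to the name claim, which collides and gets a suffix.
assert derive_username({"preferred_username": "!!!", "name": "Jane Doe"},
                       "a1b2c3", {"Jane_Doe"}) == "Jane_Doe2"
```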
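
The `fila_switch` handling from the FTS entry above, as a hedged sketch: the `FilaSwitchState` field names come from the entry, while the parsing function and its tolerance rules are an illustration of the described behaviour rather than the project's parser.

```python
# Hedged sketch of the print.device.fila_switch parsing described above;
# parse_fila_switch is an illustrative name, not the project's function.
from dataclasses import dataclass, field


@dataclass
class FilaSwitchState:
    installed: bool = False
    in_slots: list = field(default_factory=list)        # in[track] = fed slot, -1 = empty
    out_extruders: list = field(default_factory=list)   # out[track] = extruder the track feeds
    stat: int | None = None
    info: int | None = None


def parse_fila_switch(print_msg: dict) -> FilaSwitchState:
    raw = (print_msg.get("device") or {}).get("fila_switch")
    if raw is None:
        return FilaSwitchState()              # accessory not installed
    return FilaSwitchState(
        installed=True,                       # presence of the field flips the flag
        in_slots=list(raw.get("in") or []),   # missing routing arrays default to empty
        out_extruders=list(raw.get("out") or []),
        stat=raw.get("stat"),
        info=raw.get("info"),
    )


assert parse_fila_switch({"device": {}}).installed is False
assert parse_fila_switch({"device": {"fila_switch": {"in": [0, -1]}}}).in_slots == [0, -1]
```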
### Fixed
- iframe embedding from trusted origins (e.g. Home Assistant Webpage panel) no longer blocked (#1191, reported by azurusnova) — Bambuddy ships strict anti-clickjacking headers (`X-Frame-Options: SAMEORIGIN` and CSP `frame-ancestors 'none'`) by default, which protects internet-exposed deployments from being embedded by hostile sites. But it also broke a documented integration path: Home Assistant's Webpage dashboard panel embeds Bambuddy via `<iframe>` on a different origin (HA on `:8123`, Bambuddy on `:8000`), and the SAMEORIGIN value is port-strict, so even same-LAN trusted setups got "refused to connect". A new `TRUSTED_FRAME_ORIGINS` env var takes a comma-separated list of `scheme://host[:port]` origins; when set, the middleware drops `X-Frame-Options` (modern browsers honor `frame-ancestors`, and the legacy `ALLOW-FROM <url>` syntax is deprecated and inconsistent across vendors) and the CSP `frame-ancestors` directive becomes `'self' <origin> <origin> ...`. The default — an empty env var — keeps the strict `'none'` behavior, so Docker / bare-metal users without HA see no behavioural change. Origin validation happens at startup: only `http://` and `https://` are accepted; paths, queries, fragments, and wildcards are rejected with a warning (one bad entry doesn't take the deployment down — it's just dropped from the allowlist; a hedged sketch of the parsing follows this list). The gcode-viewer route's `frame-ancestors 'self'` (same-origin embed for the in-app gcode preview iframe) also includes the allowlist when configured, so HA users embedding Bambuddy can still open the gcode viewer modal. 16 new tests in `test_security_headers.py`: 12 unit tests for the env-var parser (empty / unset / single / multiple / whitespace / empty segment / non-http scheme dropped / missing host dropped / path dropped / query+fragment dropped / wildcard dropped / trailing slash kept) and 4 integration tests for the middleware (default-strict emits SAMEORIGIN + 'none', allowlist relaxes CSP and drops X-Frame-Options, the /docs branch also honors the allowlist, other security headers like X-Content-Type-Options and Referrer-Policy are unaffected in both modes). Documented in the Docker env-var reference page on the wiki and in `.env.example`.
- Virtual Printer queue mode auto-dispatched onto the wrong colour when multiple compatible printers were available (#1188, reported by EdwardChamberlain) — Sending a sliced 3MF to a queue-mode VP via Orca / Studio with auto-dispatch on caused Bambuddy to schedule the job onto a printer of the right model but the wrong loaded filament: a print sliced for matte white PLA would land on a printer with no white loaded, and the printer would start the job using whatever was the closest available match. Edward's diagnosis was exact
(`virtual_printer/manager.py:325-326`): the manual `/api/v1/print-queue/` POST flow extracts the 3MF's per-slot filament requirements at queue-add time and writes `required_filament_types`, `filament_overrides`, and `ams_mapping` on the resulting `PrintQueueItem`, so the scheduler's color-match enforcement (`print_scheduler.py:512` — keys on `filament_overrides[].force_color_match === true`) actually runs. The VP queue-write path (`_add_to_print_queue`) skipped all of that and built a bare `PrintQueueItem` with only `printer_id`, `target_model`, `archive_id`, `plate_id`, `position`, `status`, `manual_start`. Net effect: the scheduler reached the model-only-matching fallback and accepted the first available printer of the target model regardless of loaded colour, exactly as he described. Fix: the scheduler's existing `_get_filament_requirements` 3MF parser is extracted into a shared helper (`backend/app/services/filament_requirements.py:extract_filament_requirements`) so the VP path can reuse it at upload time. The VP's `_add_to_print_queue` now calls that helper after archiving and populates `required_filament_types` unconditionally (cheap; helps the scheduler reject obvious type mismatches even without `force_color_match`), and writes `filament_overrides` with `force_color_match: true` per consumed slot when a new per-VP setting `queue_force_color_match` is on. Default is off to preserve current behaviour for upgraders — a fresh-install user who wants the bug-free behaviour flips the toggle once on the VP card; an existing user gets exactly the model-only matching they had before until they opt in. Auto-dispatch onto the wrong material happens loudly enough that anyone affected can find the toggle. Why default-off rather than default-on: existing automation that relies on "send to queue VP, get printed somewhere" without caring about colour shouldn't silently start blocking on colour matching after an upgrade. The toggle has clear UI copy (`virtualPrinter.queueForceColorMatch`) explaining the trade-off. Defence in depth: a malformed or unparseable 3MF (e.g. fake bytes from a misconfigured upload tool) leaves both fields None and the scheduler falls back to model-only matching, matching pre-fix behaviour for the unhappy path. The scheduler itself is unchanged — it already handled `force_color_match` correctly when the field was populated; the bug was purely the VP path not populating it. Schema: one nullable column `virtual_printers.queue_force_color_match BOOLEAN DEFAULT 0/FALSE` (Postgres-safe) added via the existing `_safe_execute` migration pattern. API: the `VirtualPrinterCreate` and `VirtualPrinterUpdate` Pydantic schemas + the `_vp_to_dict` response shape carry `queue_force_color_match`, the create + update routes wire it through to the model, and the `VirtualPrinterInstance` constructor + `multiVirtualPrinterApi` TypeScript client mirror the field. UI: a new toggle on `VirtualPrinterCard` rendered only when `mode === 'print_queue'` (parallels the existing `auto_dispatch` toggle's mode-gating), with `pendingAction` state for the in-flight indicator. i18n: new `virtualPrinter.queueForceColorMatch.{title,description}` keys in all 8 locales — English fully translated, German fully translated, the other 6 locales seeded with English copy pending native translation (matches the project's existing flow for newly-added user-facing features).
11 new tests: 8 in `test_filament_requirements.py` covering the extracted parser end-to-end (per-slot dicts, zero-use slots filtered, plate filtering, no-plate flat-walk fallback, unparseable / missing / config-less files, sorted output); 3 in `test_virtual_printer.py::TestVirtualPrinterInstance` covering the VP write path (setting off → only `required_filament_types` populated; setting on → `filament_overrides` populated with `force_color_match: true` per slot; unparseable 3MF → both fields None, no crash). Existing scheduler tests still pass against the refactored helper (verified end-to-end across the scheduler / virtual_printer / print_queue / filament test suites — 479 tests). Edward's "out of scope nice-to-have" suggestion of a "Requires Color Match" pill on queue cards is deferred to a follow-up so this PR stays scoped to his repro.
- Slicing a library file via API key fails with "no Bambu Cloud session is stored" even when the key has cloud access (#1182 follow-up, reported by turulix) — Tim shipped the headless slicing pipeline #1182 was filed for, then hit a second wall:
`GET /api/v1/cloud/settings` returned the cloud preset IDs correctly (the `/cloud/*` router-level gate from #1182 was doing its job), but `POST /api/v1/library/files/{id}/slice` with those IDs in the request body failed the slice job with `error_status: 400, error_detail: "Cloud preset selected for printer, but no Bambu Cloud session is stored. Sign in to Bambu Cloud and retry."` Cause: the `/cloud/*` fix routes the API key's owner User through `cloud_caller` (a router-level gate stashes the owner on `request.state.api_key_owner`, route-level deps pull it back out), but the slice route lives on `/library/*` — different router, no gate, so when the auth dep returned `None` for the API-keyed request the slice route passed `current_user_id=None` straight through to `_run_slicer_with_fallback` → `_resolve_cloud(db, user=None)` → `get_stored_token(db, None)`, which falls back to the auth-disabled global Settings table. That table is empty in auth-enabled deployments, so cloud preset resolution failed even though the key's owner User had a perfectly valid `cloud_token` on their User row. Fix is a new route-level dep `resolve_api_key_cloud_owner` in `cloud.py` that's permissive (returns the owner User if the key has `can_access_cloud=true`, otherwise None — never raises) so it can be safely added to non-`/cloud/*` routes without breaking the local-presets path: a request with an API key that lacks the cloud scope still slices fine against local presets, and only fails with the existing "no Bambu Cloud session" error if it actually selects a cloud preset. Wired into `POST /library/files/{id}/slice` (Tim's blocker) and `GET /slicer/presets` (the SliceModal preset dropdown source — same root cause, would have hit anyone using the UI through an API-keyed reverse proxy). Both routes now resolve the cloud-token owner via `current_user or api_key_cloud_owner` instead of `current_user.id if current_user else None`. The auth gate's None-return for API keys is unchanged — keeping that fix scoped to the routes that actually need cloud-token resolution prevents accidental scope creep into other routes that fence on `current_user is None`. 4 new integration tests in `test_api_key_cloud_access.py::TestSliceRouteCloudOwnerResolution` pin the dep contract: returns the owner for a key with `can_access_cloud=True` and a valid owner; returns None for an owned key without the cloud scope (so cloud presets still 400 cleanly, local presets still slice); returns None for legacy ownerless keys; no-op for JWT and anonymous callers.
- Project cover photo thumbnail too small to recognise the print (#1155 follow-up, reported by smandon) — The 40×40 thumbnail smandon's MakerWorld download workflow relied on for "is this the model I'm looking for?" wasn't readable at that size; he asked for either a larger thumbnail or a click-to-enlarge full preview. Enlarging the thumbnail itself would shift the card layout and cost the dense grid he chose to use for browsing many projects, so the fix keeps the 40×40 thumbnail and shows a portal-mounted 384×384 popover on hover. The popover renders the full image in
`object-contain` so tall portrait MakerWorld photos aren't cropped to a square, has `pointer-events-none` so it can't intercept hover and create a flicker loop, and `z-[100]` so it stacks above every sibling card in the grid. Why a portal: ProjectCard carries `overflow-hidden` (for its rounded-corner clipping and the color accent bar), so an in-tree popover gets clipped by the card the moment it extends past the card's bounds — exactly the cut-off behaviour smandon reported on the second iteration. Rendering via `createPortal(..., document.body)` escapes every ancestor clipping context, and `position: fixed` with measurements from `getBoundingClientRect()` keeps the popover pinned next to the thumbnail regardless of where the card sits in the grid. Edge handling: if the thumbnail is near the viewport's right edge the popover flips to the LEFT side of the thumbnail; vertical position is clamped so the popover never overflows the window top or bottom. The thumbnail's own `onClick` is `stopPropagation`'d so hovering the popover area never accidentally triggers the parent card's "open project" navigation. 2 new tests in `ProjectsPage.test.tsx` pin the contract: hovering mounts the popover at document.body level (not nested in the card — a future refactor that drops the portal would re-introduce the clipping bug, and the test catches that); leaving unmounts it; the popover img points at the same cover-image URL as the small thumbnail with `object-contain`; cards without a `cover_image_filename` never mount the portal-rendering component (so a hover doesn't flash an empty preview).
- Spool edit form lost the Extra Colours value on reopen, Dual Color rendered identically to Gradient, and the Sparkle / checkerboard visuals were too subtle (#1154 follow-up, reported by maugsburger) — Four issues against the multi-colour swatch work that landed for #1154. (1) Extra Colours input didn't hydrate on edit reopen:
`ColorSection`'s draft buffer was seeded once via `useState(formData.extra_colors)`, but `SpoolFormModal` opens before its own `useEffect` populates `formData` from the spool record — so by the time the saved value landed, the input's local state had already been initialised to `''` and never re-synced. The COLOR preview banner above the input rendered correctly (it consumes formData directly), making it obvious the data WAS persisted; only the input was stuck blank, which the user then had to retype to save anything else. Fix: a ref-guarded `useEffect` resyncs `extraColorsDraft` when `formData.extra_colors` changes via an external update (e.g. the modal opening with a spool); the ref is updated inside `commitExtraColors` so the user's own typing is round-tripped without the resync clobbering it. (2) Dual Color and Gradient produced the same diagonal blend: `buildColorLayer` in `filamentSwatchHelpers.ts` ran the same `linear-gradient(135deg, ...)` for both effect types, so a "Dual Color" spool was visually indistinguishable from a "Gradient" one. Real dual-colour spools have two distinct bars on the reel — that's the whole point of the variant. Fix: when `effect_type` is `dual-color` or `tri-color`, build the colour layer as `linear-gradient(to right, c1 0% X%, c2 X% Y%, ...)` with CSS double-position stops (so the colour change is a hard line rather than a blend region) and equal-width segments across the stops; `gradient` keeps the original 135° smooth blend. The existing `multicolor` conic-gradient path is untouched. (3) The Sparkle effect was almost invisible on card-sized swatches: the original 4-dot pattern (each ~1px) read fine on the small inline swatch but disappeared on the 60-pixel-tall inventory card banners — exactly where the user actually identifies a spool. Bumped to 13 flecks in mixed sizes (1px / 1.5px / 2px) and varying opacity (0.65 → 1.0) to give a depth-of-field "metal flake" feeling, distinct from solid + multi-colour. (4) Checkerboard cell density scaled with the swatch: the previous helper put `repeating-conic-gradient(...)` in the `background-image` and the caller applied `background-size: cover`, so the same 4-cell pattern was either tiny squares on a small swatch or four huge squares on a card-sized banner. Made `buildFilamentBackground()` return `{ backgroundImage, backgroundSize }` with per-layer sizes — painted layers stay `cover`, the checkerboard gets a fixed 12px tile so the cell density stays consistent regardless of element size and clearly reads as a transparency indicator rather than a multi-colour stripe. Updated the three existing call sites (`InventoryPage` group banner + spool card, `ColorSection` preview) to spread the returned style object directly. New frontend tests cover the four fixes: a hard-split contract for Dual/Tri Color (3 tests + 1 regression guard that Dual ≠ Gradient for the same stops); Sparkle prominence (≥ 10 distinct radial-gradient layers in the rendered background); checkerboard density (the last `backgroundSize` layer is a fixed pixel value, not `cover`); and 4 hydration tests pinning the input restore path (fills when formData arrives via parent update, resyncs when the spool changes mid-form, doesn't clobber live user typing, clears when the new spool has no extra_colors).
- Pending review card and the resulting archive name disagreed; `.gcode.3mf` filename suffix wasn't fully stripped (#1152 follow-up, reported by smandon) — Two distinct holes in the original #1152 fix surfaced when smandon retested on the daily build. (1) Suffix stripping was incomplete: Bambu Studio's "Send to printer" dialog typically writes files like `Plate_1.gcode.3mf` (a sliced gcode payload wrapped in a 3MF container), but the archive's display stem was computed via `Path(name).stem`, which only drops the last suffix and left the user staring at `Plate_1.gcode` in the archive UI. (2) The review card and the archive disagreed on what the print was called: the pending-uploads panel always rendered the raw FTP filename, while the eventual `PrintArchive.print_name` resolved from the 3MF's embedded title (or, with the toggle on `filename`, the filename stem). Net effect: the user saw `Plate_1.gcode` in the review card and `Some Creator's Title` in the archive grid for the same item, with no toggle that flipped both views in lockstep. The fix has three pieces: a new `resolve_display_stem()` helper in `archive.py` that strips `.gcode.3mf` / `.3mf` / `.gcode` (case-insensitive) so both the archive and the review-side normalisation produce the same canonical stem (a hedged sketch follows this list); a new `PendingUpload.metadata_print_name` column populated at FTP-receive time by peeking at the 3MF's embedded title (so `/pending-uploads/` list calls don't have to reopen every 3MF on every render); and a new `PendingUploadResponse.display_name` computed field that mirrors `archive_print`'s exact precedence — `filename` toggle: stripped stem; `metadata` toggle (default): cached title or stripped stem. The frontend's `PendingUploadsPanel` reads `upload.display_name` (with `upload.filename` as a defensive fallback for any pre-migration row), and the raw filename is exposed as a tooltip so users can still inspect what actually arrived over FTP. The migration is one idempotent `ALTER TABLE pending_uploads ADD COLUMN metadata_print_name VARCHAR(255)` (Postgres/SQLite-safe); existing pending rows have NULL there and gracefully fall back to filename-stem behaviour. 14 unit tests pin the stripping rules (`Plate_1.gcode.3mf` → `Plate_1`, mixed case, dots in the middle, `.3mf`-only / `.gcode`-only edge cases, full-path inputs); 6 integration tests pin the response contract (default toggle uses the metadata title when present, falls back to the stripped stem when absent, `filename` toggle overrides metadata, `filename` toggle still strips the double suffix, `GET /{id}` exposes the same field, whitespace-only metadata behaves like absent); 3 frontend tests pin the review card's render path (resolved name shown, fallback to filename when display_name is empty, raw filename available via tooltip).
- SpoolBuddy SSH update fails with "permission denied for user spoolbuddy" after Bambuddy keypair rotation (reported during user testing) — Bambuddy's data dir at
`<DATA_DIR>/spoolbuddy/ssh/` can get recreated outside the daemon's control (volume remount, container recreate, fresh deploy), at which point `get_or_create_keypair()` generates a new ed25519 keypair. The SpoolBuddy daemon previously only fetched and deployed Bambuddy's public key at registration time (`/devices/register`), so any rotation after a successful registration left the device's `~/.ssh/authorized_keys` pointing at a defunct public half — every "Update" click from the Bambuddy UI then failed with `Connection closed by authenticating user spoolbuddy [preauth]` until the daemon was restarted manually. Worse, every prior successful registration appended a fresh entry to `authorized_keys` without ever pruning the old one, so a typical device accumulated 5+ stale Bambuddy-tagged keys (each one a permanent backdoor for whichever Bambuddy keypair held the matching private half at the time it was deployed). Two-pronged fix: (1) the heartbeat response (`HeartbeatResponse`, `routes/spoolbuddy.py:282`) now carries the current `ssh_public_key` alongside the existing `pending_command` / calibration fields, so the daemon's heartbeat picks up a key rotation within one cycle instead of needing a service restart; the same `try/except Exception: pass` pattern as the registration response keeps a missing/unreadable backend key from breaking telemetry. (2) `_deploy_ssh_key()` in `daemon/main.py` now syncs rather than appends — it strips every line tagged `bambuddy-spoolbuddy`, writes the current key once, and is a no-op when already in sync (so it doesn't churn the file every heartbeat). User-managed entries (any line not tagged `bambuddy-spoolbuddy`) are preserved untouched (a hedged sketch of the sync follows this list). 5 new unit tests in `spoolbuddy/tests/test_deploy_ssh_key.py` (creates-when-missing → mode-600 file with the current key; pile-up of stale keys → only the current key remains, no growth; preserves unrelated user keys → the user's own SSH access untouched; idempotent-when-in-sync → no mtime change so the heartbeat doesn't churn the file; swallows write errors → a readonly-fs PermissionError doesn't crash the heartbeat loop). 2 new backend integration tests in `test_spoolbuddy.py::TestDeviceEndpoints` — `test_heartbeat_returns_ssh_public_key` (the response carries the key on every heartbeat) and `test_heartbeat_ssh_key_failure_does_not_break_heartbeat` (a backend key-read failure leaves `ssh_public_key: None` but the heartbeat still 200s).
- External-camera frames returned as black on go2rtc and other MJPEG sources (#1177, reported by nkm8) —
`_capture_mjpeg_frame` returned the very first JPEG it found in the stream's bytes (`backend/app/services/external_camera.py:282`), but many MJPEG sources — go2rtc most notably, and several IP cameras — emit a "warm-up" frame as the first bytes after the connection is accepted: usually the last keyframe held in the encoder, which is often black or stale until the encoder catches up to live content. Subsequent frames on the same connection are fine. The reporter saw it across the snapshot UX, finish photos in notifications, and timelapse — every code path that opens a fresh capture connection (snapshot endpoint, `[PHOTO-BG]` finish photo, plate-detection CV, Obico ML inference, layer timelapse, Settings → Test). His own observation that go2rtc's `/api/frame.jpeg` (single-frame, internally already warmed) is never black while the first frame off `/api/stream.mjpeg` is, matched the hypothesis exactly. Support-bundle evidence was clean: every black notification frame in his log was 11095 bytes (a pure-black 1280×720 JPEG encodes to ~10–15 KB on standard libjpeg quality settings), while every captured-after-warm-up frame from the same source was 30–45 KB. Fix: read past the first frame and return the second; if the connection closes / times out / hits the 5 MB buffer cap before a second frame ever arrives, fall back to the first so callers still get something (degrading slow / single-frame streams to None would regress every code path that relied on pre-fix behaviour). The inner loop now drains every complete frame already in the buffer before pulling the next chunk, so high-FPS sources that pack multiple frames per chunk are handled correctly (a hedged sketch of the read loop follows this list). The `snapshot` / `rtsp` / `usb` capture paths and the live-view streaming endpoint (`generate_mjpeg_stream`) are untouched. 7 new regression tests in `test_external_camera.py::TestCaptureMjpegFrameWarmupSkip` cover (a) two frames in two chunks → second returned, (b) two frames in one chunk → second returned, (c) frame split across a chunk boundary → assembled correctly, (d) single-frame stream → first returned via fallback (no None regression), (e) timeout after the first frame → first returned via fallback, (f) zero-frame stream → None, (g) non-200 status → None. Latency penalty: at most one frame interval (typically 50 ms – 1 s on a steady stream).
- MakerWorld sidebar entry visible to every user regardless of group permissions (#1175) — Backend already enforced
`makerworld:view` on every `/makerworld/*` route (`backend/app/api/routes/makerworld.py:145, 157, 242, 406`), the permission was correctly granted to the admin and standard-user role defaults (`permissions.py:298, 364, 454`), and the frontend `Permission` type union already included `'makerworld:view' | 'makerworld:import'` (`client.ts:2498`) — but the sidebar's hand-maintained `navPermissions` map in `Layout.tsx:278` had no entry for `makerworld`, so `isHidden('makerworld')` always returned false and the entry rendered for every authenticated user. Users without the permission saw the entry, clicked, and the page rendered while every API call inside it 403'd. Two-line fix: (1) `Layout.tsx:278` — add `makerworld: 'makerworld:view'` to the map, matching every other sidebar entry's gating shape; (2) `App.tsx:200` — wrap the route in `<PermissionRoute permission="makerworld:view">` for defence in depth, so a user who knows the URL can no longer reach the page directly (matches the existing pattern on `settings`, `groups/new`, `groups/:id/edit` two lines below). 2 new Layout tests pin the contract: with auth enabled and a user lacking `makerworld:view`, the sidebar `<a href="/makerworld">` link is absent (other links like `/files` still render); with the permission granted, the link renders.
- Printer Info modal: serial-number and IP-address copy buttons silently did nothing on plain-HTTP LAN deployments (#1174, reported by BurntOutHylian) —
`PrinterInfoModal`'s `CopyButton` only tried `navigator.clipboard.writeText()`, which is gated by the secure-context requirement (HTTPS or localhost). On the typical Bambuddy deployment shape — bare-IP HTTP on the LAN — `navigator.clipboard` is undefined; the existing `try/catch` swallowed the resulting `TypeError`, the icon never flipped to the tick, and nothing landed on the user's clipboard. Fixed by adding the same off-screen-textarea + `document.execCommand('copy')` fallback that `CameraTokensPage`'s plaintext-token modal already uses for plain-HTTP LAN deployments: gate on `navigator.clipboard && window.isSecureContext`, fall back to the legacy path otherwise, and surface the success tick only when the copy actually landed (return early without flipping `copied` if `execCommand('copy')` returns false). The `try/finally` around the textarea guarantees DOM cleanup even when the browser throws on a restricted context. 3 new component tests in `PrinterInfoModal.test.tsx` cover (a) the secure-context happy path uses `navigator.clipboard.writeText`, (b) the plain-HTTP fallback path actually invokes `execCommand('copy')` and leaves no leaked textarea in the DOM, (c) the `finally` cleanup removes the textarea even when `execCommand` throws synthetically. Thanks to BurntOutHylian for the precise file/line pointer in the report.
- Queue auto-dispatched the next print onto a fouled bed after an aborted or cancelled print (#1171, reported by tom5677) — When a print ended with status
`aborted` (printer self-abort, or a user stopping the print on the printer's own touchscreen) or `cancelled` (user stopping the print via the Bambuddy queue UI), the plate-clear gate added in #961 was not raised — only `completed` and `failed` triggered it (`backend/app/main.py:2660`). Result: the queue scheduler dispatched the next pending item ~2 seconds after the abort, with the previous print's material still on the bed. The reporter saw two prints (P1P + P1S) auto-start onto fouled beds within seconds of each other after touchscreen aborts, and explicitly flagged the risk of damage to the printer; a third printer (his second P1S) behaved correctly because its previous print had ended `completed`. The original code's comment ("user-cancelled prints don't require a plate-clear ack — nothing printed on the bed") only holds if you cancel right at layer 1; cancelling a 12-hour print at hour 11 leaves a fouled bed too. Fix: the gate is now raised for every terminal status — `completed`, `failed`, `aborted`, `cancelled` — matching the safety contract that the user must acknowledge the bed is clear before any next queued print starts (a hedged sketch follows this list). The gate is user-clearable on the Printers page, so worst case for a layer-1 cancel the user clicks "Clear Plate" once. Touchscreen aborts are particularly important to gate because Bambuddy's "user stopped via UI" override (`_user_stopped_printers` → `aborted` mapped to `cancelled`) only fires when the user stops via the Bambuddy queue; a touchscreen stop reports `aborted` straight through. Regression coverage in `test_print_lifecycle.py::TestPlateClearGate`: parametrised across all four terminal statuses (asserts `set_awaiting_plate_clear(printer_id, True)` is called for each), plus a defence-in-depth test that an unrecognised future status string never silently raises the gate.
- Printer card always shows the first plate's thumbnail when printing a multi-plate 3MF (#1166, reported by smandon) — On printers running firmware that drops the plate path from
`print.gcode_file` (the reporter's case: P1S 01.10.00.00, but the same shape appears on other firmware revisions), the printer reports `gcode_file: MyModel.3mf` instead of `gcode_file: /Metadata/plate_4.gcode`. The `/printers/{id}/cover` route's regex (`plate_(\d+)\.gcode`) found nothing in the bare `.3mf` filename, defaulted to plate 1, and the printer card showed `Metadata/plate_1.png` from the 3MF — even though the user dispatched plate 4. The same problem hit `current_plate_id` on the status response (the printer card detail row showed plate 1). Two-pronged fix on a precedence ladder (sketched after this list): (1) Bambuddy now records the plate it dispatched — `start_print()` writes `(dispatched_plate_id, dispatched_subtask)` onto `PrinterState` at publish time, and a new `resolve_plate_id(state)` helper prefers that record over the gcode_file regex when `dispatched_subtask == state.subtask_name` (the subtask check rejects stale entries from a prior Bambuddy-dispatched print bleeding into a Studio-direct dispatch). (2) After the 3MF lands on disk, the cover route scans the zip for a unique `Metadata/plate_*.gcode` entry: per-plate archives sliced separately in Bambu Studio bundle thumbnails for every plate but only the active plate's gcode, so a single match unambiguously identifies the plate even when no Bambuddy dispatch exists (the Studio-direct flow). The final fallback is plate 1, unchanged. The cover-byte cache key was also simplified — `plate_num` was removed from the key now that resolution is late-bound; `clear_cover_cache()` already runs on every print start, so different plates of the same project always re-fetch a fresh thumbnail. Coverage: 5 unit tests in `test_printer_manager.py::TestResolvePlateId` (dispatch precedence, stale-subtask guard, gcode regex fallback, default-1 path, missing-subtask guard), 4 unit tests in `test_bambu_mqtt.py::TestStartPrintRecordsDispatchedPlate` (dispatch record set / cleared / overwritten / skipped on disconnect), 2 integration tests in `test_printers_api.py` (dispatch wins over the plate-1 default; 3MF-scan fallback for a per-plate archive without a dispatch). Studio-direct multi-plate prints (no dispatch record AND multiple plate gcodes in the 3MF) still default to plate 1 — this matches the firmware's own ambiguity and is not regressed by this change.
- AMS slot configuration intermittently fails to reach the printer after several configs in a row (#1164, reported by RosdasHH) — Configuring AMS slots a handful of times (the reporter saw it almost every 6th change) would silently stop reaching the printer; ~1 minute later the filament colours on the printer would briefly jump between slots, then settle. Root cause was the zombie-session watchdog at
`bambu_mqtt.py:861` introduced for #887. When an `ams_filament_setting` response took >10 s (normal under load — concurrent K-profile fetches, a busy printer, network jitter) the watchdog incremented an `_ams_cmd_unanswered` counter and zeroed `_last_ams_cmd_time` so it wouldn't re-trigger on the next status push. The bug: the response handler that reset the counter was guarded by `and self._last_ams_cmd_time > 0` — so when the late response did arrive (after the watchdog had already zeroed the timer), the counter stayed armed at 1. The next slow response on any `ams_filament_setting` command — possibly minutes or hours later, on an entirely unrelated config attempt — would take the counter to 2 and trigger `force_reconnect_stale_session()`. The user-visible symptoms match exactly: configs stop landing (because MQTT reconnects mid-publish, dropping the in-flight command and surfacing as `Cannot set AMS filament setting: not connected` if the user retries during the ~1 min reconnect window), then the queued state finally lands when the reconnect completes (the "filament colours jumping around" the reporter described). The fix is to drop the `_last_ams_cmd_time > 0` guard: any `ams_filament_setting` response — late or not — proves the channel is alive, so the counter must reset. The watchdog still trips on a real zombie session (no responses at all for two consecutive >10 s windows). A regression test in `test_bambu_mqtt.py::TestZombieSessionDetection::test_late_response_after_watchdog_clears_counter_issue_1164` simulates the exact sequence (watchdog fires → late response arrives → second slow response on a fresh command) and asserts the counter resets to 0 on the late response and the second command doesn't tip the threshold to 2. The other 10 zombie-detection tests still pass unchanged.
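
The `TRUSTED_FRAME_ORIGINS` validation and the resulting `frame-ancestors` directive from the iframe-embedding entry above, as a hedged sketch; the function names and logging are assumptions, and only the acceptance rules mirror the entry.

```python
# Hedged sketch of the TRUSTED_FRAME_ORIGINS parsing and frame-ancestors
# assembly described above; names and logging are assumptions.
import logging
from urllib.parse import urlsplit

log = logging.getLogger(__name__)


def parse_trusted_frame_origins(raw: str) -> list[str]:
    origins = []
    for entry in (part.strip() for part in raw.split(",")):
        if not entry:
            continue                      # empty segment: ignore
        parts = urlsplit(entry)
        invalid = (
            parts.scheme not in ("http", "https")
            or not parts.hostname
            or parts.path.strip("/")      # any real path is rejected
            or parts.query or parts.fragment
            or "*" in entry               # wildcards are rejected
        )
        if invalid:
            log.warning("Dropping invalid TRUSTED_FRAME_ORIGINS entry: %r", entry)
            continue                      # one bad entry never takes the deployment down
        origins.append(f"{parts.scheme}://{parts.netloc}")
    return origins


def frame_ancestors(origins: list[str]) -> str:
    # An empty allowlist keeps the strict default; otherwise relax to 'self'
    # plus the allowlist (X-Frame-Options is dropped in that branch).
    if not origins:
        return "frame-ancestors 'none'"
    return "frame-ancestors 'self' " + " ".join(origins)


assert frame_ancestors(parse_trusted_frame_origins("http://homeassistant.local:8123, bad*")) \
    == "frame-ancestors 'self' http://homeassistant.local:8123"
```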
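
The suffix stripping from the pending-review entry above is small enough to show whole; this sketch mirrors the described rules, but the implementation details are assumptions.

```python
# Hedged sketch of resolve_display_stem() as described above; only the
# stripping rules are taken from the entry, the implementation is assumed.
from pathlib import Path


def resolve_display_stem(name: str) -> str:
    """Strip .gcode.3mf / .3mf / .gcode (case-insensitive) from an upload name."""
    stem = Path(name).name
    lowered = stem.lower()
    for suffix in (".gcode.3mf", ".3mf", ".gcode"):   # longest suffix first
        if lowered.endswith(suffix):
            return stem[: -len(suffix)]
    return stem


assert resolve_display_stem("Plate_1.gcode.3mf") == "Plate_1"
assert resolve_display_stem("/cache/My.Model.GCODE.3MF") == "My.Model"
assert resolve_display_stem("benchy.gcode") == "benchy"
```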
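
The "sync, don't append" behaviour from the SpoolBuddy entry above, sketched under assumptions: the `bambuddy-spoolbuddy` tag comes from the entry, while the path handling, helper name, and error policy are illustrative.

```python
# Hedged sketch of the authorized_keys sync described above; the tag string is
# from the entry, the path handling and helper name are assumptions.
from pathlib import Path

TAG = "bambuddy-spoolbuddy"


def deploy_ssh_key(current_key: str, path: Path) -> None:
    try:
        lines = path.read_text().splitlines() if path.exists() else []
        kept = [line for line in lines if TAG not in line]   # drop every stale Bambuddy key
        wanted = kept + [f"{current_key} {TAG}"]              # write the current key exactly once
        if lines == wanted:
            return                                            # already in sync: no mtime churn
        path.parent.mkdir(mode=0o700, parents=True, exist_ok=True)
        path.write_text("\n".join(wanted) + "\n")
        path.chmod(0o600)
    except OSError:
        pass   # a read-only filesystem must not crash the heartbeat loop
```

Called from the heartbeat handler in this assumed shape, the early return keeps a healthy device's `authorized_keys` untouched (no mtime change), while user-managed lines without the tag are always preserved.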
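
The warm-up skip from the external-camera entry above, as a hedged sketch of the read loop; the JPEG marker scanning and the function name are assumptions, while the second-frame-with-first-frame-fallback contract is what the entry describes.

```python
# Hedged sketch of the "return the second frame, fall back to the first"
# behaviour described above; the marker scanning and name are assumptions.
JPEG_SOI, JPEG_EOI = b"\xff\xd8", b"\xff\xd9"
MAX_BUFFER = 5 * 1024 * 1024          # 5 MB cap mentioned in the entry


def second_frame_or_first(chunks) -> bytes | None:
    """Return the 2nd complete JPEG in an MJPEG byte stream, else the 1st, else None."""
    buf, frames = b"", []
    for chunk in chunks:              # chunks: iterable of byte blobs from the socket
        buf += chunk
        while True:                   # drain every complete frame already buffered
            start = buf.find(JPEG_SOI)
            end = buf.find(JPEG_EOI, start + 2)
            if start < 0 or end < 0:
                break
            frames.append(buf[start:end + 2])
            buf = buf[end + 2:]
            if len(frames) >= 2:
                return frames[1]      # past the warm-up frame
        if len(buf) > MAX_BUFFER:
            break
    return frames[0] if frames else None   # single-frame sources keep working


# Two frames packed into one chunk still yield the second (live) frame.
two_in_one = [JPEG_SOI + b"black" + JPEG_EOI + JPEG_SOI + b"live" + JPEG_EOI]
assert second_frame_or_first(two_in_one) == JPEG_SOI + b"live" + JPEG_EOI
```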
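
The widened plate-clear gate from the fouled-bed entry above boils down to a membership check; a minimal sketch, assuming only the status strings quoted in the entry.

```python
# Hedged sketch of the terminal-status check described above; the set contents
# come from the entry, the function name is an assumption.
TERMINAL_STATUSES = {"completed", "failed", "aborted", "cancelled"}


def should_raise_plate_clear_gate(final_status: str) -> bool:
    # Every terminal status now requires a "bed is clear" acknowledgement before
    # the scheduler may dispatch the next queued item; an unrecognised future
    # status never raises the gate silently.
    return final_status in TERMINAL_STATUSES


assert should_raise_plate_clear_gate("aborted")
assert not should_raise_plate_clear_gate("paused")
```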
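
The precedence ladder from the multi-plate thumbnail entry above, minus the 3MF zip scan (which needs the archive on disk); the field and helper names follow the entry, everything else is assumed.

```python
# Hedged sketch of the dispatch-record → gcode_file-regex → plate-1 ladder
# described above; the zip-scan step is omitted and the details are assumptions.
import re
from dataclasses import dataclass
from typing import Optional


@dataclass
class PrinterState:
    gcode_file: str = ""
    subtask_name: str = ""
    dispatched_plate_id: Optional[int] = None
    dispatched_subtask: Optional[str] = None


def resolve_plate_id(state: PrinterState) -> int:
    # 1) Prefer the plate Bambuddy itself dispatched, but only while the subtask
    #    still matches (a stale record from an earlier dispatch is rejected).
    if (state.dispatched_plate_id is not None
            and state.dispatched_subtask
            and state.dispatched_subtask == state.subtask_name):
        return state.dispatched_plate_id
    # 2) Fall back to the plate path in gcode_file when the firmware reports it.
    match = re.search(r"plate_(\d+)\.gcode", state.gcode_file or "")
    if match:
        return int(match.group(1))
    # 3) Final fallback: plate 1, the unchanged pre-fix behaviour.
    return 1


assert resolve_plate_id(PrinterState(gcode_file="MyModel.3mf", subtask_name="job",
                                     dispatched_plate_id=4, dispatched_subtask="job")) == 4
assert resolve_plate_id(PrinterState(gcode_file="/Metadata/plate_4.gcode")) == 4
```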