github maziggy/bambuddy v0.2.4b2-daily.20260501
Daily Beta Build v0.2.4b2-daily.20260501

Pre-release

Note

This is a daily beta build (2026-05-01). It contains the latest fixes and improvements but may have undiscovered issues.

Docker users: Update by pulling the new image:

docker pull ghcr.io/maziggy/bambuddy:daily

or

docker pull maziggy/bambuddy:daily


**Tip:** Use [Watchtower](https://containrrr.dev/watchtower/) to automatically update when new daily builds are pushed.

Added

  • API keys can read Bambu Cloud presets on the owner's behalf (#1182, reported by turulix) — Tim is building a fully automated headless slicing pipeline against Bambuddy's API and hit the wall flagged in the previous round of cloud-auth work (#665): /cloud/* routes resolve cloud_token per-user from User.cloud_token, but the auth gate (require_permission_if_auth_enabled, auth.py:856) returned None for API-keyed requests, so the route fell back to the global Settings-table token, which only carries a value in auth-disabled deployments. Net effect on auth-enabled deployments: API keys reached the gate just fine, then /cloud/filaments always saw user=None, called get_stored_token(db, None) against an empty Settings table, and returned 401 / empty results — no path to read the slicer presets, filament catalogue, or device list that a CLI workflow needs. The data model treated API keys as standalone tokens with no owner (APIKey had id, name, key_hash, scope flags, and printer_ids — no user_id), so even if the gate wanted to delegate the cloud lookup, there was no User to delegate to. The fix: make API keys carry an owner, route /cloud/* lookups through that owner, and gate the new capability behind an explicit opt-in scope so existing automation doesn't gain cloud-read access on upgrade. Concretely: (1) APIKey gains user_id (FK to users.id, ON DELETE CASCADE — Postgres enforces, SQLite plus an explicit DELETE FROM api_keys WHERE user_id = ? in the user-delete route since SQLite ships FK enforcement off; the project's existing pattern at users.py:397-406 for created_by_id cleanup) and can_access_cloud (BOOLEAN DEFAULT 0 — opt-in, never set on legacy rows). (2) The auth gate now returns the owner User when it validates an API key with user_id set, so /cloud/* routes naturally resolve user.cloud_token the same way they do for JWT-authed sessions. 
Permission semantics are preserved — API keys still bypass the per-route permission check (their scopes live on the row itself); the User is returned only so that cloud-aware routes can read per-user state. Legacy ownerless keys (user_id IS NULL) keep returning None, stay anonymous, and continue working against every non-cloud route exactly as before. (3) A router-level dependency on the /cloud/* APIRouter enforces three independent fences for API-keyed callers: user_id IS NOT NULL (legacy keys → 401 with "recreate it from Settings → API Keys" — explicit recreate path rather than silently degrading), can_access_cloud=True (otherwise 403 with "Enable 'Allow cloud access' on the key"), and build_authenticated_cloud returning a service (otherwise 401 with the existing token-not-set error — unchanged for JWT flow). The router-level dep duplicates the API-key validation done by the regular auth gate (router-level deps run before route-level deps in FastAPI, so request.state isn't populated yet) — the cost is one extra SELECT FROM api_keys per cloud request, bounded and cheap with the key_prefix index. (4) The create route stamps user_id = current_user.id from the creator and rejects can_access_cloud=True when auth is disabled (no per-user cloud_token storage exists in that mode — fail loudly at create time rather than silently producing a non-functional key). The PATCH route rejects flipping can_access_cloud to True on a legacy ownerless key for the same reason — force recreate. (5) APIKeyResponse exposes user_id so the UI can show ownership at a glance: a "Cloud" badge for cloud-enabled keys and a "Legacy" badge with hover tooltip ("Created before per-user ownership; recreate to use cloud access") for ownerless rows. The form gains an "Allow cloud access" checkbox, default off.
Migration: two idempotent ALTER TABLE api_keys ADD COLUMN (user_id INTEGER REFERENCES users(id) ON DELETE CASCADE and can_access_cloud BOOLEAN DEFAULT 0) plus an index on user_id for the auth-gate's owner→keys lookup that runs on every API-keyed request. i18n: 5 new keys (settings.cloudAccess, settings.cloudAccessDescription, settings.cloudBadge, settings.legacyKey, settings.legacyKeyTooltip) added to all 8 locales — English fully translated, German fully translated, the other 6 locales seeded with English copies pending native translation (matches the project's existing flow for newly-added user-facing features). 9 backend integration tests in test_api_key_cloud_access.py: create stamps owner + cloud flag, defaults off when not asked for, rejected when auth disabled (no per-user storage), PATCH rejected on legacy keys; cloud router rejects legacy keys with the recreate copy, rejects owned-but-no-cloud-flag keys with the enable-cloud-access copy, lets owned-and-flagged keys through with owner's cloud_token in the response, JWT callers unaffected (gate is no-op for non-API-keyed); user-delete CASCADEs the API keys via the explicit DELETE in the route. 2 frontend SettingsPage tests pin the badge rendering matrix (Cloud badge present on can_access_cloud=true, Legacy badge present on user_id=null, neither rendered on a normal owned non-cloud key) and the create-form contract (toggling "Allow cloud access" results in can_access_cloud=true in the POST body). Permission semantics for the new fence are the only behavioural change for existing API keys: keys created before this release become "legacy" rows and are rejected at /cloud/* with the recreate message; every other endpoint they were used against — queue, status, control — is untouched.
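The three fences in (3) reduce to a small pure check. The sketch below is illustrative only — ApiKeyRecord and check_cloud_fences are hypothetical names, not Bambuddy's actual classes — but it captures the 401/403 matrix described above:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ApiKeyRecord:
    user_id: Optional[int]    # None on legacy rows created before ownership
    can_access_cloud: bool    # opt-in flag, never set on legacy rows

def check_cloud_fences(key: Optional[ApiKeyRecord],
                       cloud_token: Optional[str]) -> Optional[Tuple[int, str]]:
    """Return (status, message) when a fence rejects, None when the call may proceed.

    JWT sessions pass key=None and are only checked for a stored cloud
    token, matching the unchanged behaviour described above.
    """
    if key is not None:
        if key.user_id is None:           # fence 1: legacy ownerless key
            return 401, "Legacy API key - recreate it from Settings > API Keys"
        if not key.can_access_cloud:      # fence 2: owner never opted in
            return 403, "Enable 'Allow cloud access' on the key"
    if not cloud_token:                   # fence 3: no usable cloud service
        return 401, "Bambu Cloud token not set"
    return None
```

Keeping the fence order fixed (ownership, then scope, then token) preserves the entry's distinction between 401 (credential problems) and 403 (missing scope).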
  • Home Assistant addon detection — Settings → Updates and the in-app update banner now defer to the HA Supervisor (#1167, reported by Spegeli) — Bambuddy already shipped HA_URL/HA_TOKEN env-var support specifically labelled "for HA Add-on deployments" (#283) and a community-maintained HA addon (hobbypunk90/homeassistant-addon-bambuddy) exists upstream, so an HA-supervised installation is a real first-class deployment shape. Until now though, the update UI didn't know about it: HA addon users got the same "Update available!" banner as everyone else and, if they clicked through to Settings, saw the docker-compose snippet ("docker compose pull && docker compose up -d") which they cannot run from inside an HA addon container — that's the Supervisor's job. Detection uses the canonical signal: HA Supervisor injects SUPERVISOR_TOKEN into every addon container, and that variable is not set in any other environment. A new _is_ha_addon() helper in backend/app/api/routes/updates.py flips a request-level boolean which /updates/check surfaces as is_ha_addon: bool + an extended update_method: 'git' | 'docker' | 'ha_addon' enum. The HA check runs before the Docker check on /updates/apply because HA addons are Docker containers — checking Docker first would mis-classify them and serve the wrong message; the response also keeps is_docker: true alongside is_ha_addon: true so older frontend bundles still hit a managed-deployment branch (degrading to the Docker UX) instead of rendering an in-app Install button that can't work. Frontend branches identically: SettingsPage.tsx's update card checks is_ha_addon first and renders "Updates are managed by the Home Assistant Supervisor. Open Settings → Add-ons → Bambuddy in Home Assistant to install the new version."
in place of the docker-compose hint; Layout.tsx's update banner is suppressed entirely for HA addons since the HA Supervisor's own update notification already surfaces the new version natively in the HA UI and a duplicate Bambuddy banner would just be noise that links to a page that says "go to HA". Plain Docker deployments are unaffected — the existing docker-compose hint and the in-app banner still render the same way they did. Localised across all 8 UI languages (en/de/fr/it/ja/pt-BR/zh-CN/zh-TW) with full translations of the new settings.updateViaHomeAssistant string. 10 new tests pin the contract: 3 backend unit tests for _is_ha_addon() (env var present → true, absent → false, empty string treated as unset to guard against shells that export it empty), 1 backend integration test for the HA-precedes-Docker rejection on /updates/apply (asserts the message says "Home Assistant" and not "Docker Compose"), 2 backend integration tests for /updates/check covering the HA-addon branch (update_method == "ha_addon", both flags true) and the plain-Docker branch (is_ha_addon: false, update_method == "docker"); 2 frontend SettingsPage tests pin the mutually-exclusive UI rendering (HA branch shows the HA copy and not the docker-compose snippet; Docker branch shows the snippet and not the HA copy, neither shows the Install button); 2 frontend Layout tests pin the banner suppression for HA and its retention for plain Docker.
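The detection order can be sketched as follows, assuming only what the entry states (SUPERVISOR_TOKEN as the canonical signal, empty value treated as unset). is_ha_addon, detect_update_method, and the BAMBUDDY_IN_DOCKER marker are illustrative names, not the project's actual identifiers:

```python
import os
from typing import Mapping, Optional

def is_ha_addon(env: Optional[Mapping[str, str]] = None) -> bool:
    """True only inside a Home Assistant addon container: the Supervisor
    injects SUPERVISOR_TOKEN there and nowhere else. An empty or
    whitespace-only value is treated as unset."""
    env = os.environ if env is None else env
    return bool(env.get("SUPERVISOR_TOKEN", "").strip())

def detect_update_method(env: Mapping[str, str]) -> str:
    # Order matters: HA addons ARE Docker containers, so the Docker
    # check must come second or it would mis-classify them.
    if is_ha_addon(env):
        return "ha_addon"
    if env.get("BAMBUDDY_IN_DOCKER") == "1":  # hypothetical Docker marker
        return "docker"
    return "git"
```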
  • Filament Track Switch (FTS) support — print modal filament dropdown is no longer empty when an X2D / H2D has the FTS accessory installed (#1162, reported by mkavalecz) — When the FTS accessory is installed the printer's MQTT changes one nibble of the per-AMS info bitmask: bits 8-11 flip from a fixed extruder ID (0x0 / 0x1) to 0xE ("uninitialized"), because the AMS is no longer wired to a single nozzle — the FTS dynamically routes any slot to either extruder. Bambuddy's MQTT parser already skipped 0xE entries when building ams_extruder_map (matching BambuStudio's reading for boot-time transient state), so with the FTS installed the map ended up empty and the print modal's filament dropdown — which filters by extruderId === nozzle_id to prevent cross-nozzle assignment ("position of left hotend is abnormal" failures) — filtered out every loaded slot. Net effect: empty Filament Mapping dropdown on every dual-nozzle print with the FTS, even when the AMS was fully loaded with the right material. Detection comes from a new MQTT field — print.device.fila_switch — which is non-null only when the accessory is installed; it carries the routing topology as two arrays: in[track] = currently fed slot (-1 = empty) and out[track] = extruder this track terminates at. The fix surfaces this through a new FilaSwitchState dataclass on PrinterState (installed, in_slots, out_extruders, stat, info) and the equivalent FilaSwitchResponse Pydantic schema on the GET /printers/{id}/status route. Frontend (useFilamentMapping.ts + FilamentMapping.tsx) skips the per-extruder filter when printerStatus.fila_switch?.installed === true so any compatible AMS slot can satisfy any nozzle's filament requirement, since the FTS handles the routing. Slots currently fed into a track also get a routing badge in the dropdown — [L] or [R] — so the user can tell at a glance which slot the FTS is currently routing where (idle slots get no badge: they can be routed to either extruder on demand). 
The hard "no cross-nozzle assignment" filter on real dual-nozzle printers without the FTS stays untouched (still trips the same way it always has — fila_switch == null keeps the existing behaviour). 4 backend tests in test_bambu_mqtt.py::TestFilamentTrackSwitchDetection (default-not-installed, detect-from-MQTT-using-the-reporter's-bundle, no-fila_switch-field-stays-not-installed, missing-in-out-arrays-don't-crash) and 2 frontend tests in useFilamentMapping.test.ts (FTS-active drops the nozzle filter; explicit fila_switch: null keeps the filter applied). Upstream fila_switch payloads with anything other than the documented shape are tolerated — installed flips on the presence of the field, the routing arrays default to empty lists if missing, and the dropdown skips the badge for slots not currently in in_slots.
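The tolerant-parsing rules in the last sentence can be sketched like this. FilaSwitchState mirrors the dataclass named above; parse_fila_switch is an illustrative helper name, and the payload shape is taken from the entry, not from Bambu's own documentation:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class FilaSwitchState:
    installed: bool = False
    in_slots: List[int] = field(default_factory=list)       # in[track] = fed slot, -1 = empty
    out_extruders: List[int] = field(default_factory=list)  # out[track] = destination extruder

def parse_fila_switch(device: Optional[dict]) -> FilaSwitchState:
    """installed flips on the mere presence of print.device.fila_switch;
    malformed or missing routing arrays degrade to empty lists."""
    raw = (device or {}).get("fila_switch")
    if raw is None:
        return FilaSwitchState()
    if not isinstance(raw, dict):       # unexpected shape: installed, no routing
        return FilaSwitchState(installed=True)

    def ints(value) -> List[int]:
        return [int(v) for v in value] if isinstance(value, list) else []

    return FilaSwitchState(True, ints(raw.get("in")), ints(raw.get("out")))
```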

Fixed

  • Project cover photo thumbnail too small to recognise the print (#1155 follow-up, reported by smandon) — The 40×40 thumbnail smandon's MakerWorld download workflow relied on for "is this the model I'm looking for?" wasn't readable at that size; he asked for either a larger thumbnail or a click-to-enlarge full preview. Enlarging the thumbnail itself would shift the card layout and cost the dense grid he chose to use for browsing many projects, so the fix keeps the 40×40 thumbnail and shows a portal-mounted 384×384 popover on hover. The popover renders the full image in object-contain so tall portrait MakerWorld photos aren't cropped to a square, has pointer-events-none so it can't intercept hover and create a flicker loop, and z-[100] so it stacks above every sibling card in the grid. Why a portal: ProjectCard carries overflow-hidden (for its rounded-corner clipping and the color accent bar), so an in-tree popover gets clipped by the card the moment it extends past the card's bounds — exactly the cut-off behaviour smandon reported on the second iteration. Rendering via createPortal(..., document.body) escapes every ancestor clipping context, and position: fixed with measurements from getBoundingClientRect() keeps the popover pinned next to the thumbnail regardless of where the card sits in the grid. Edge handling: if the thumbnail is near the viewport's right edge the popover flips to the LEFT side of the thumbnail; vertical position is clamped so the popover never overflows the window top or bottom. The thumbnail's own onClick is stopPropagation'd so hovering the popover area never accidentally triggers the parent card's "open project" navigation. 
2 new tests in ProjectsPage.test.tsx pin the contract: hovering mounts the popover at document.body level (not nested in the card — a future refactor that drops the portal would re-introduce the clipping bug, and the test catches that); leaving unmounts it; the popover img points at the same cover-image URL as the small thumbnail with object-contain; cards without a cover_image_filename never mount the portal-rendering component (so a hover doesn't flash an empty preview).
  • Spool edit form lost the Extra Colours value on reopen, Dual Color rendered identically to Gradient, and the Sparkle / checkerboard visuals were too subtle (#1154 follow-up, reported by maugsburger) — Four issues against the multi-colour swatch work that landed for #1154. (1) Extra Colours input didn't hydrate on edit reopen: ColorSection's draft buffer was seeded once via useState(formData.extra_colors), but SpoolFormModal opens before its own useEffect populates formData from the spool record — so by the time the saved value landed, the input's local state had already been initialised to '' and never re-synced. The COLOR preview banner above the input rendered correctly (consumes formData directly), making it obvious the data WAS persisted; only the input was stuck blank, which the user then had to retype to save anything else. Fix: a ref-guarded useEffect resyncs extraColorsDraft when formData.extra_colors changes via an external update (e.g. modal opening with a spool); the ref is updated inside commitExtraColors so the user's own typing is round-tripped without the resync clobbering it. (2) Dual Color and Gradient produced the same diagonal blend: buildColorLayer in filamentSwatchHelpers.ts ran the same linear-gradient(135deg, ...) for both effect types, so a "Dual Color" spool was visually indistinguishable from a "Gradient" one. Real dual-colour spools have two distinct bars on the reel — that's the whole point of the variant. Fix: when effect_type is dual-color or tri-color, build the colour layer as linear-gradient(to right, c1 0% X%, c2 X% Y%, ...) with CSS double-position stops (so the colour change is a hard line rather than a blend region) and equal-width segments across the stops; gradient keeps the original 135° smooth blend. The existing multicolor conic-gradient path is untouched. 
(3) Sparkle effect was almost invisible on card-sized swatches: the original 4-dot pattern (each ~1px) read fine on the small inline swatch but disappeared on the 60-pixel-tall inventory card banners — exactly where the user actually identifies a spool. Bumped to 13 flecks in mixed sizes (1px / 1.5px / 2px) and varying opacity (0.65 → 1.0) to give a depth-of-field "metal flake" feeling, distinct from solid + multi-colour. (4) Checkerboard cell density scaled with the swatch: the previous helper put repeating-conic-gradient(...) in the background-image and the caller applied background-size: cover, so the same 4-cell pattern was either tiny squares on a small swatch or four huge squares on a card-sized banner. Made buildFilamentBackground() return { backgroundImage, backgroundSize } with per-layer sizes — painted layers stay cover, the checkerboard gets a fixed 12px tile so the cell density stays consistent regardless of element size and clearly reads as a transparency indicator rather than a multi-colour stripe. Updated the three existing call sites (InventoryPage group banner + spool card, ColorSection preview) to spread the returned style object directly. 8 new frontend tests cover the four fixes: hard-split contract for Dual/Tri Color (3 tests + 1 regression guard that Dual ≠ Gradient for the same stops); Sparkle prominence (≥ 10 distinct radial-gradient layers in the rendered background); checkerboard density (last backgroundSize layer is a fixed pixel value, not cover); 4 hydration tests pinning the input restore path (fills when formData arrives via parent update, resyncs when the spool changes mid-form, doesn't clobber live user typing, clears when the new spool has no extra_colors).
  • Pending review card and the resulting archive name disagreed; .gcode.3mf filename suffix wasn't fully stripped (#1152 follow-up, reported by smandon) — Two distinct holes in the original #1152 fix surfaced when smandon retested on the daily build. (1) Suffix stripping was incomplete: Bambu Studio's "Send to printer" dialog typically writes files like Plate_1.gcode.3mf (a sliced gcode payload wrapped in a 3MF container), but the archive's display stem was computed via Path(name).stem, which only drops the last suffix and left the user staring at Plate_1.gcode in the archive UI. (2) The review card and the archive disagreed on what the print was called: the pending-uploads panel always rendered the raw FTP filename, while the eventual PrintArchive.print_name resolved from the 3MF's embedded title (or, with the toggle on filename, the filename stem). Net effect: the user saw Plate_1.gcode in the review card and Some Creator's Title in the archive grid for the same item, with no toggle that flipped both views in lockstep. Fix has three pieces: a new resolve_display_stem() helper in archive.py that strips .gcode.3mf / .3mf / .gcode (case-insensitive) so both the archive and the review-side normalisation produce the same canonical stem; a new PendingUpload.metadata_print_name column populated at FTP-receive time by peeking at the 3MF's embedded title (so /pending-uploads/ list calls don't have to reopen every 3MF on every render); and a new PendingUploadResponse.display_name computed field that mirrors archive_print's exact precedence — filename toggle: stripped stem; metadata toggle (default): cached title or stripped stem. Frontend's PendingUploadsPanel reads upload.display_name (with upload.filename as a defensive fallback for any pre-migration row), and the raw filename is exposed as a tooltip so users can still inspect what actually arrived over FTP. 
Migration is one idempotent ALTER TABLE pending_uploads ADD COLUMN metadata_print_name VARCHAR(255) (Postgres/SQLite-safe); existing pending rows have NULL there and gracefully fall back to filename-stem behaviour. 14 unit tests pin the stripping rules (Plate_1.gcode.3mf → Plate_1, mixed case, dots in the middle, edge .3mf-only / .gcode-only, full-path inputs); 6 integration tests pin the response contract (default toggle uses metadata title when present, falls back to stripped stem when absent, filename toggle overrides metadata, filename toggle still strips the double suffix, GET /{id} exposes the same field, whitespace-only metadata behaves like absent); 3 frontend tests pin the review card's render path (resolved name shown, fallback to filename when display_name is empty, raw filename available via tooltip).
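The stripping rules lend themselves to a single anchored regex. The resolve_display_stem below is a sketch of the documented behaviour (strip .gcode.3mf / .3mf / .gcode case-insensitively, leave interior dots alone, tolerate full paths), not the actual archive.py implementation:

```python
import re

# Longest suffix first: ".gcode.3mf" must win over bare ".3mf".
_STRIP_SUFFIX = re.compile(r"(?:\.gcode)?\.3mf$|\.gcode$", re.IGNORECASE)

def resolve_display_stem(name: str) -> str:
    """Canonical display stem for an uploaded print file."""
    leaf = name.replace("\\", "/").rsplit("/", 1)[-1]  # tolerate full paths
    return _STRIP_SUFFIX.sub("", leaf)
```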
  • SpoolBuddy SSH update fails with "permission denied for user spoolbuddy" after Bambuddy keypair rotation (reported during user testing) — Bambuddy's data dir at <DATA_DIR>/spoolbuddy/ssh/ can get recreated outside the daemon's control (volume remount, container recreate, fresh deploy), at which point get_or_create_keypair() generates a new ed25519 keypair. The SpoolBuddy daemon previously only fetched and deployed Bambuddy's public key at registration time (/devices/register), so any rotation after a successful registration left the device's ~/.ssh/authorized_keys pointing at a defunct public half — every "Update" click from the Bambuddy UI then failed with Connection closed by authenticating user spoolbuddy [preauth] until the daemon was restarted manually. Worse, every prior successful registration appended a fresh entry to authorized_keys without ever pruning the old one, so a typical device accumulated 5+ stale Bambuddy-tagged keys (each one a permanent backdoor for whichever Bambuddy keypair held the matching private half at the time it was deployed). Two-pronged fix: (1) the heartbeat response (HeartbeatResponse, routes/spoolbuddy.py:282) now carries the current ssh_public_key alongside the existing pending_command / calibration fields, so the daemon's heartbeat picks up a key rotation within one cycle instead of needing a service restart; the same try/except Exception: pass pattern as the registration response keeps a missing/unreadable backend key from breaking telemetry. (2) _deploy_ssh_key() in daemon/main.py now syncs rather than appends — it strips every line tagged bambuddy-spoolbuddy, writes the current key once, and is a no-op when already in sync (so it doesn't churn the file every heartbeat). User-managed entries (any line not tagged bambuddy-spoolbuddy) are preserved untouched. 
5 new unit tests in spoolbuddy/tests/test_deploy_ssh_key.py (creates-when-missing → mode-600 file with the current key; pile-up-of-stale-keys → only current key remains, no growth; preserves-unrelated-user-keys → user's own SSH access untouched; idempotent-when-in-sync → no mtime change so heartbeat doesn't churn the file; swallows-write-errors → readonly-fs PermissionError doesn't crash the heartbeat loop). 2 new backend integration tests in test_spoolbuddy.py::TestDeviceEndpoints: test_heartbeat_returns_ssh_public_key (response carries the key on every heartbeat) and test_heartbeat_ssh_key_failure_does_not_break_heartbeat (backend key-read failure leaves ssh_public_key: None but the heartbeat still 200s).
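The sync-not-append behaviour reduces to a pure transform on the file's text. sync_authorized_keys is a hypothetical stand-in for the file handling inside _deploy_ssh_key(), which additionally creates the file mode 600 and swallows write errors:

```python
TAG = "bambuddy-spoolbuddy"  # marker appended to every Bambuddy-managed entry

def sync_authorized_keys(existing: str, public_key: str) -> str:
    """Drop every Bambuddy-tagged line, append the current key exactly once.

    Returns the input string unchanged when already in sync, so the caller
    can skip the write and leave the file's mtime alone. Untagged
    (user-managed) lines are preserved verbatim.
    """
    kept = [line for line in existing.splitlines() if TAG not in line]
    desired = "\n".join(kept + [f"{public_key} {TAG}"]) + "\n"
    return existing if desired == existing else desired
```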
  • External-camera frames returned as black on go2rtc and other MJPEG sources (#1177, reported by nkm8) — _capture_mjpeg_frame returned the very first JPEG it found in the stream's bytes (backend/app/services/external_camera.py:282), but many MJPEG sources — go2rtc most notably, and several IP cameras — emit a "warm-up" frame immediately after the connection is accepted: usually the last keyframe held in the encoder, which is often black or stale until the encoder catches up to live content. Subsequent frames on the same connection are fine. The reporter saw it across snapshot UX, finish photos in notifications, and timelapse — every code path that opens a fresh capture connection (snapshot endpoint, [PHOTO-BG] finish photo, plate-detection CV, Obico ML inference, layer timelapse, Settings → Test). His own observation that go2rtc's /api/frame.jpeg (single-frame, internally already warmed) is never black while the first frame off /api/stream.mjpeg is, matched the hypothesis exactly. Support-bundle evidence was clean: every black notification frame in his log was 11095 bytes (a pure-black 1280×720 JPEG encodes to ~10–15 KB on standard libjpeg quality settings), while every captured-after-warm-up frame from the same source was 30–45 KB. Fix: read past the first frame and return the second; if the connection closes / times out / hits the 5 MB buffer cap before a second frame ever arrives, fall back to the first so callers still get something (degrading slow / single-frame streams to None would regress every code path that relied on pre-fix behaviour). The inner loop now drains every complete frame already in the buffer before pulling the next chunk so high-FPS sources that pack multiple frames per chunk are handled correctly. The snapshot / rtsp / usb capture paths and the live-view streaming endpoint (generate_mjpeg_stream) are untouched.
7 new regression tests in test_external_camera.py::TestCaptureMjpegFrameWarmupSkip cover (a) two-frames-in-two-chunks → second returned, (b) two-frames-in-one-chunk → second returned, (c) frame split across chunk boundary → assembled correctly, (d) single-frame stream → first returned via fallback (no None regression), (e) timeout after first frame → first returned via fallback, (f) zero-frame stream → None, (g) non-200 status → None. Latency penalty: at most one frame interval (typically 50 ms – 1 s on a steady stream).
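The warm-up skip can be sketched over raw JPEG SOI/EOI markers. This is a simplification — the real parser also enforces the 5 MB cap and a timeout, and MJPEG part headers are ignored here — and capture_second_frame is an illustrative name:

```python
from typing import Iterable, List, Optional

SOI, EOI = b"\xff\xd8", b"\xff\xd9"  # JPEG start/end-of-image markers

def capture_second_frame(chunks: Iterable[bytes]) -> Optional[bytes]:
    """Return the second complete JPEG in an MJPEG byte stream.

    Falls back to the first frame when the stream ends before a second
    one arrives (single-frame / slow sources keep working), and drains
    every complete frame already buffered before pulling the next chunk
    so high-FPS sources packing several frames per chunk are handled.
    """
    buf = b""
    frames: List[bytes] = []
    for chunk in chunks:
        buf += chunk
        while True:  # drain all complete frames currently in the buffer
            start = buf.find(SOI)
            end = buf.find(EOI, start + 2) if start >= 0 else -1
            if start < 0 or end < 0:
                break
            frames.append(buf[start:end + 2])
            buf = buf[end + 2:]
            if len(frames) >= 2:
                return frames[1]  # skip the encoder's possibly-stale frame
    return frames[0] if frames else None  # fallback: never regress to None
```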
  • MakerWorld sidebar entry visible to every user regardless of group permissions (#1175) — Backend already enforced makerworld:view on every /makerworld/* route (backend/app/api/routes/makerworld.py:145, 157, 242, 406), the permission was correctly granted to the admin and standard-user role defaults (permissions.py:298, 364, 454), and the frontend Permission type union already included 'makerworld:view' | 'makerworld:import' (client.ts:2498) — but the sidebar's hand-maintained navPermissions map in Layout.tsx:278 had no entry for makerworld, so isHidden('makerworld') always returned false and the entry rendered for every authenticated user. Users without the permission saw the entry, clicked, and the page rendered while every API call inside it 403'd. Two-line fix: (1) Layout.tsx:278 — add makerworld: 'makerworld:view' to the map, matching every other sidebar entry's gating shape; (2) App.tsx:200 — wrap the route in <PermissionRoute permission="makerworld:view"> for defence in depth, so a user who knows the URL can no longer reach the page directly (matches the existing pattern on settings, groups/new, groups/:id/edit two lines below). 2 new Layout tests pin the contract: with auth enabled and a user lacking makerworld:view, the sidebar <a href="/makerworld"> link is absent (other links like /files still render); with the permission granted, the link renders.
  • Printer Info modal: serial-number and IP-address copy buttons silently did nothing on plain-HTTP LAN deployments (#1174, reported by BurntOutHylian) — PrinterInfoModal's CopyButton only tried navigator.clipboard.writeText(), which is gated by the secure-context requirement (HTTPS or localhost). On the typical Bambuddy deployment shape — bare-IP HTTP on the LAN — navigator.clipboard is undefined; the existing try/catch swallowed the resulting TypeError, the icon never flipped to the tick, and nothing landed on the user's clipboard. Fixed by adding the same off-screen-textarea + document.execCommand('copy') fallback that CameraTokensPage's plaintext-token modal already uses for plain-HTTP LAN deployments: gate on navigator.clipboard && window.isSecureContext, fall back to the legacy path otherwise, and surface the success-tick only when the copy actually landed (return early without flipping copied if execCommand('copy') returns false). The try/finally around the textarea guarantees DOM cleanup even when the browser throws on a restricted context. 3 new component tests in PrinterInfoModal.test.tsx cover (a) secure-context happy path uses navigator.clipboard.writeText, (b) plain-HTTP fallback path actually invokes execCommand('copy') and leaves no leaked textarea in the DOM, (c) finally cleanup removes the textarea even when execCommand throws synthetically. Thanks to BurntOutHylian for the precise file/line pointer in the report.
  • Queue auto-dispatched the next print onto a fouled bed after an aborted or cancelled print (#1171, reported by tom5677) — When a print ended with status aborted (printer self-abort, or a user stopping the print on the printer's own touchscreen) or cancelled (user stopping the print via the Bambuddy queue UI), the plate-clear gate added in #961 was not raised — only completed and failed triggered it (backend/app/main.py:2660). Result: the queue scheduler dispatched the next pending item ~2 seconds after the abort, with the previous print's material still on the bed. The reporter saw two prints (P1P + P1S) auto-start onto fouled beds within seconds of each other after touchscreen-aborts, and explicitly flagged the risk of damage to the printer; a third printer (his second P1S) behaved correctly because its previous print had ended completed. The original code's comment ("user-cancelled prints don't require a plate-clear ack — nothing printed on the bed") only holds if you cancel right at layer 1; cancelling a 12-hour print at hour 11 leaves a fouled bed too. Fix: the gate is now raised for every terminal status — completed, failed, aborted, cancelled — matching the safety contract that the user must acknowledge the bed is clear before any next queued print starts. The gate is user-clearable on the Printers page, so worst case for a layer-1 cancel the user clicks "Clear Plate" once. Touchscreen-aborts are particularly important to gate because Bambuddy's "user stopped via UI" override (_user_stopped_printers: aborted mapped to cancelled) only fires when the user stops via the Bambuddy queue; a touchscreen-stop reports aborted straight through. Regression coverage in test_print_lifecycle.py::TestPlateClearGate: parametrised across all four terminal statuses (asserts set_awaiting_plate_clear(printer_id, True) is called for each), plus a defence-in-depth test that an unrecognised future status string never silently raises the gate.
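The widened gate is a one-set change. The sketch below uses hypothetical names (handle_print_end, may_dispatch_next) to pin the contract that every recognised terminal status — and only recognised ones — raises the gate:

```python
TERMINAL_STATUSES = {"completed", "failed", "aborted", "cancelled"}

def handle_print_end(status: str, printer_state: dict) -> None:
    """Raise the plate-clear gate on every terminal status; an
    unrecognised future status deliberately leaves the gate alone."""
    if status in TERMINAL_STATUSES:
        printer_state["awaiting_plate_clear"] = True

def may_dispatch_next(printer_state: dict) -> bool:
    """The queue scheduler refuses to dispatch while the gate is up;
    the user clears it with "Clear Plate" on the Printers page."""
    return not printer_state.get("awaiting_plate_clear", False)
```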
  • Printer card always shows the first plate's thumbnail when printing a multi-plate 3MF (#1166, reported by smandon) — On printers running firmware that drops the plate path from print.gcode_file (the reporter's case: P1S 01.10.00.00, but the same shape appears on other firmware revisions), the printer reports gcode_file: MyModel.3mf instead of gcode_file: /Metadata/plate_4.gcode. The /printers/{id}/cover route's regex (plate_(\d+)\.gcode) found nothing in the bare .3mf filename, defaulted to plate 1, and the printer card showed Metadata/plate_1.png from the 3MF — even though the user dispatched plate 4. Same problem hit current_plate_id on the status response (printer card detail row showed plate 1). Two-pronged fix on a precedence ladder: (1) Bambuddy now records the plate it dispatched: start_print() writes (dispatched_plate_id, dispatched_subtask) onto PrinterState at publish time, and a new resolve_plate_id(state) helper prefers that record over the gcode_file regex when dispatched_subtask == state.subtask_name (the subtask check rejects stale entries from a prior Bambuddy-dispatched print bleeding into a Studio-direct dispatch). (2) After the 3MF lands on disk, the cover route scans the zip for a unique Metadata/plate_*.gcode entry: per-plate archives sliced separately in Bambu Studio bundle thumbnails for every plate but only the active plate's gcode, so a single match unambiguously identifies the plate even when no Bambuddy dispatch exists (Studio-direct flow). Final fallback is plate 1, unchanged. The cover-byte cache key was also simplified — plate_num was removed from the key now that resolution is late-bound; clear_cover_cache() already runs on every print start, so different plates of the same project always re-fetch a fresh thumbnail.
Coverage: 5 unit tests in test_printer_manager.py::TestResolvePlateId (dispatch precedence, stale-subtask guard, gcode regex fallback, default-1 path, missing-subtask guard), 4 unit tests in test_bambu_mqtt.py::TestStartPrintRecordsDispatchedPlate (dispatch record set/cleared/overwritten/skipped on disconnect), 2 integration tests in test_printers_api.py (dispatch wins over plate-1 default; 3MF-scan fallback for per-plate archive without dispatch). Studio-direct multi-plate prints (no dispatch record AND multiple plate gcodes in the 3MF) still default to plate 1 — matches the firmware's own ambiguity, not regressed by this change.
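The precedence ladder can be sketched as a pure function. PrinterState here is a stripped-down stand-in for the real class, and passing the 3MF's gcode entry names as a sequence replaces the actual zip scan; only the ordering logic is the point:

```python
import re
from dataclasses import dataclass
from typing import Optional, Sequence

PLATE_RE = re.compile(r"plate_(\d+)\.gcode")

@dataclass
class PrinterState:
    gcode_file: str = ""
    subtask_name: str = ""
    dispatched_plate_id: Optional[int] = None
    dispatched_subtask: Optional[str] = None

def resolve_plate_id(state: PrinterState,
                     archive_gcode_entries: Sequence[str] = ()) -> int:
    """Ladder from the entry above: own dispatch record (only while the
    subtask still matches), then the gcode_file regex, then a unique
    Metadata/plate_*.gcode entry in the 3MF, then plate 1."""
    if (state.dispatched_plate_id is not None
            and state.dispatched_subtask == state.subtask_name):
        return state.dispatched_plate_id
    m = PLATE_RE.search(state.gcode_file)
    if m:
        return int(m.group(1))
    hits = [int(h.group(1)) for h in map(PLATE_RE.search, archive_gcode_entries) if h]
    if len(hits) == 1:  # only an unambiguous single match identifies the plate
        return hits[0]
    return 1
```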
  • AMS slot configuration intermittently fails to reach the printer after several configs in a row (#1164, reported by RosdasHH) — Configuring AMS slots a handful of times (the reporter saw it almost every 6th change) would silently stop reaching the printer; ~1 minute later the filament colours on the printer would briefly jump between slots, then settle. Root cause was the zombie-session watchdog at bambu_mqtt.py:861 introduced for #887. When an ams_filament_setting response took >10 s (normal under load — concurrent K-profile fetches, busy printer, network jitter) the watchdog incremented an _ams_cmd_unanswered counter and zeroed _last_ams_cmd_time so it wouldn't re-trigger on the next status push. The bug: the response handler that reset the counter was guarded by and self._last_ams_cmd_time > 0 — so when the late response did arrive (after the watchdog had already zeroed the timer), the counter stayed armed at 1. The next slow response on any ams_filament_setting command — possibly minutes or hours later, on an entirely unrelated config attempt — would take the counter to 2 and trigger force_reconnect_stale_session(). The user-visible symptoms match exactly: configs stop landing (because MQTT reconnects mid-publish, dropping the in-flight command and surfacing as Cannot set AMS filament setting: not connected if the user retries during the ~1 min reconnect window), then the queued state finally lands when the reconnect completes (the "filament colours jumping around" the reporter described). Fix is to drop the _last_ams_cmd_time > 0 guard: any ams_filament_setting response — late or not — proves the channel is alive, so the counter must reset. Watchdog still trips on a real zombie session (no responses at all for two consecutive >10 s windows). 
Regression test in test_bambu_mqtt.py::TestZombieSessionDetection::test_late_response_after_watchdog_clears_counter_issue_1164 simulates the exact sequence (watchdog fires → late response arrives → second slow response on a fresh command) and asserts the counter resets to 0 on the late response and the second command doesn't tip the threshold to 2. The other 10 zombie-detection tests still pass unchanged.
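The reset semantics can be condensed into a tiny state machine. This is an illustrative class, not the real bambu_mqtt.py code; the threshold and 10 s window are taken from the entry:

```python
class AmsCommandWatchdog:
    """Any ams_filament_setting response - late or not - proves the
    channel is alive, so the unanswered counter resets unconditionally."""
    THRESHOLD = 2   # two consecutive unanswered windows => zombie session
    TIMEOUT = 10.0  # seconds before a command counts as unanswered

    def __init__(self) -> None:
        self.unanswered = 0
        self.last_cmd_time = 0.0

    def on_command_sent(self, now: float) -> None:
        self.last_cmd_time = now

    def on_status_push(self, now: float) -> bool:
        """True when the session should be force-reconnected."""
        if self.last_cmd_time and now - self.last_cmd_time > self.TIMEOUT:
            self.unanswered += 1
            self.last_cmd_time = 0.0  # avoid re-triggering on the next push
        return self.unanswered >= self.THRESHOLD

    def on_response(self) -> None:
        # The fix: no `last_cmd_time > 0` guard here, so a response that
        # arrives after the watchdog already zeroed the timer still resets.
        self.unanswered = 0
        self.last_cmd_time = 0.0
```

A real zombie session (no responses at all across two slow windows) still trips the threshold, since on_response never runs to clear the counter.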
