Daily Beta Build v0.2.4b2-daily.20260504


Note

This is a daily beta build (2026-05-04). It contains the latest fixes and improvements but may have undiscovered issues.

Docker users: Update by pulling the new image:

docker pull ghcr.io/maziggy/bambuddy:daily

or

docker pull maziggy/bambuddy:daily


**Tip:** Use [Watchtower](https://containrrr.dev/watchtower/) to automatically update when new daily builds are pushed.

Changed

  • Virtual Printer Tailscale toggle no longer provisions Let's Encrypt certs — it's now informational — The original promise of the tailscale_disabled toggle was that flipping it on would obtain an LE cert via tailscale cert so users wouldn't need to import Bambuddy's CA into the slicer. End-to-end testing exposed that this was always going to fail: BambuStudio and OrcaSlicer both refuse hostname input in the Add Printer dialog (IP-only), and — more fundamentally — their printer-MQTT trust path validates only against the bundled BBL CA store (printer.cer), not the system trust store. Confirmed against ClusterM/open-bambu-networking's clean-room reimplementation: mosquitto_tls_set(BBL_CA) + mosquitto_tls_opts_set(verify_peer=1) + mosquitto_tls_insecure_set(true) — chain validation against BBL CA only, hostname check intentionally skipped (because Bambu's printer cert CN is the device serial, not an IP/hostname). LE-issued certs don't chain to BBL CA, so the slicer rejects with the well-known "-1" before any hostname/IP logic runs. The cert-import step is unavoidable; the LE provisioning was dead code for slicer connections. What stays: the toggle, the /virtual-printers/tailscale-status route, the docker socket mount, and the host-level Tailscale information surfaced on the VP card (IP + MagicDNS hostname + copy button) so users know what to paste into the slicer when they pick the Tailscale interface from the bind_ip dropdown. Tailscale's role is now strictly network reach — private WireGuard tunnel to the VP from any tailnet device, no port forwarding — exactly the same trust burden as LAN. What goes: provision_cert / ensure_cert / cert_needs_renewal and the daily renewal task / restart-on-renewal plumbing on the manager (_cert_renewal_task, _cert_restart_task, _cert_renewal_loop, _restart_for_cert_renewal, _cancel_renewal_task, _cancel_restart_task); the tailscale_fqdn field surfaced via VP status (cert side-effect); the tailscale_not_available 409 guard on toggle-enable in both routes/virtual_printers.py and routes/settings.py (toggle is informational, daemon presence doesn't block flipping it); CertificateService.{ts_cert_path, ts_key_path, use_tailscale_cert} and the LE cert files on disk (virtual_printer_ts.{crt,key} left in place per-VP — harmless residue, can be deleted manually). The tailscale_disabled DB column is kept as the persisted toggle state. Tailscale FQDN/IP on the VP card is now sourced from the existing /tailscale/status endpoint (host-level) rather than from per-VP cert provisioning side-effect — the data is the same regardless of which VP you're looking at, since each host has one Tailscale identity. Wiki, README, and i18n copy updated across all 8 locales to drop the "no cert import needed" framing — toggle's helper text now says it surfaces the Tailscale address and that CA import is unchanged. Tests: test_tailscale.py reduced to the surviving get_status cases (binary missing, command fails, success, empty DNSName, malformed JSON); test_virtual_printer.py::test_sync_from_db_restarts_on_tailscale_disabled_change rewritten as test_sync_from_db_does_not_restart_on_tailscale_toggle (toggle is informational — remove_instance must NOT be called when only tailscale_disabled changes); test_virtual_printer_api.py::TestVirtualPrinterTailscaleGuardAPI collapsed to a single TestVirtualPrinterTailscaleToggleAPI::test_toggle_does_not_consult_tailscale_daemon that asserts both directions succeed and get_status is never called. 
Frontend VirtualPrinterCard.test.tsx mock now stubs getTailscaleStatus and the FQDN-copy block drives the FQDN through that query rather than VP status.
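For context on the trust path above, here is a minimal Python illustration (not Bambuddy or slicer code — the slicers do this in C++ via libmosquitto) of the same posture: chain validation against the bundled BBL CA only, hostname verification skipped. The host, port, and file name are placeholders.

```python
# Illustrative sketch of the slicer's printer-MQTT trust posture: trust only the
# bundled BBL CA ("printer.cer"), require a valid chain, skip the hostname check
# (the printer cert's CN is the device serial, not an IP or hostname).
import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.load_verify_locations("printer.cer")   # BBL CA bundle: the ONLY trust anchor
ctx.check_hostname = False                 # hostname check intentionally skipped
ctx.verify_mode = ssl.CERT_REQUIRED        # chain validation still enforced

# An LE-issued cert fails here: its chain ends at ISRG Root, not the BBL CA.
with socket.create_connection(("192.168.1.50", 8883)) as sock:
    with ctx.wrap_socket(sock) as tls:
        print(tls.getpeercert()["subject"])
```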

Added

  • Virtual Printer non-proxy modes now mirror the live target printer to the slicer (#1193 follow-up) — Until now, Immediate / Review / Print Queue VPs looked like a stub Bambu Lab printer to the slicer: AMS dropdowns were empty, no live state, no camera, no per-filament k-profile lookup. The user could send a sliced file and that was it. With this change, the VP fans out the target printer's live MQTT state to the slicer (AMS units, FTS / dual-extruder routing, nozzle, temps, k-profiles, AMS load / dry / calibration commands) and proxies the camera RTSPS stream on port 322 — so the slicer treats the VP as a fully-functional Bambu printer while Bambuddy's queue / archive / dispatch features stay in the loop. Architecture (cached-as-base, single source of truth): the bridge caches the latest real push_status and info.get_version response from Bambuddy's existing per-printer MQTT subscription (no second session on the printer — firmware in-flight budget unaffected, see #1164). The VP's _send_status_report returns a near-byte-identical copy of the real push with only the upload-state-machine fields (sequence_id, command, msg, gcode_state, gcode_file, prepare_percent, subtask_name) overridden under our control, so BambuStudio's Send pre-flight sees exactly the same shape as a direct-to-printer connection. Command responses (extrusion_cali_get, AMS write acks, xcam responses) are fanned out raw — they carry sequence_ids the slicer is waiting on. Slicer-issued commands forward to the real printer except print.project_file / gcode_file, which are still answered locally because the file lives on Bambuddy. Field-shape gotchas worth remembering: (1) Real Bambu printers wire-format push_status JSON with indent=4 (32 254 bytes for an idle H2D push, vs 14 268 bytes compact) — BambuStudio's Send pre-flight rejects compact JSON silently, so _publish_to_report was switched to json.dumps(payload, indent=4). (2) net.info[*].ip (little-endian uint32, e.g. 192.168.255.133 → 2248124608) is the FTP destination IP BambuStudio uses for "Send to Printer storage" — it overrides anything else, including the URL hosts the rest of MQTT advertises. The bridge rewrites this to the VP's bind IP on cache, otherwise the slicer FTPs straight to the real printer and bypasses Bambuddy entirely (symptom: "Failed to send" with zero inbound FTP connections on the VP — debug-by-tcpdump if anyone hits it again). (3) upgrade_state.sn and any other nested-dict sn matching the target serial are rewritten to the VP serial; AMS-hardware serials (n3f/0.sn etc.) are left alone — those identify physical AMS units, not the device. (4) ipcam.rtsp_url is left unchanged: BambuStudio overrides the URL host with the device IP it bound on (the VP), so the slicer hits the VP's :322 RTSPS port — not the printer's directly. (5) For the slicer's RTSPS to reach the printer, the VP gets a raw TCPProxy on <bind_ip>:322 → <printer_ip>:322 (same approach proxy mode uses; cap_net_bind_service was already in the systemd unit for FTP :990). (6) extrusion_cali_get is forwarded — answering it locally hides the user's stored k-profiles. Setup nuance for camera: because the slicer authenticates against the printer's RTSPS with whatever access code is in its profile, the VP's access code must match the target printer's access code for the camera path to authenticate. This is a one-time configuration step (Settings → Virtual Printer → set access code = target printer's LAN code, then re-add the VP in Bambu Studio / Orca Slicer). 
MQTT and FTP work either way; only camera needs the match because RTSPS auth happens between the slicer and the real printer's broker. Tested e2e with both BambuStudio and OrcaSlicer against H2D (dual-nozzle, AMS 2 Pro + AMS HT) and X1C (single-nozzle, AMS) across all three non-proxy modes (Immediate / Review / Print Queue) — sync, send, k-profile lookup, AMS configuration from slicer, and live camera all work. Files: new backend/app/services/virtual_printer/mqtt_bridge.py (caches push_status / get_version, forwards slicer commands, fans out command responses, rewrites identity fields including net.info[*].ip LE uint32); bambu_mqtt.py gains register_raw_message_handler / unregister_raw_message_handler / publish_raw so the bridge can subscribe to Bambuddy's existing per-printer paho subscription without opening a second session; mqtt_server.py switches _send_status_report and _send_version_response to cached-as-base when the bridge has data, falls back to the original synthetic stubs otherwise; manager.py wires the bridge + a raw TCPProxy for RTSPS into start_server for non-proxy modes whenever a target printer is configured. 25 new tests in test_vp_mqtt_bridge.py pin the contract: lifecycle, push_status caching, serial / IP rewriting, get_version-modules cache, selective fan-out (only command responses, never push_status itself), wire format must use indent=4, routing of slicer-issued commands (project_file / gcode_file local; everything else forwarded), and the IP-encoding helper against captures from real H2D pushes. Proxy mode is untouched — SlicerProxyManager still owns its own MQTT/FTP/RTSP/Bind/Aux proxies in proxy mode and never instantiates SimpleMQTTServer or MQTTBridge.
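For anyone poking at captures, a minimal sketch of the little-endian net.info[*].ip encoding described above (helper names are illustrative, not necessarily the bridge's):

```python
# net.info[*].ip is a little-endian uint32: 192.168.255.133 -> 2248124608.
import socket

def ip_to_le_uint32(ip: str) -> int:
    """Encode a dotted-quad IP the way push_status carries net.info[*].ip."""
    return int.from_bytes(socket.inet_aton(ip), "little")

def le_uint32_to_ip(value: int) -> str:
    """Decode back to dotted-quad for logging / debugging."""
    return socket.inet_ntoa(value.to_bytes(4, "little"))

def rewrite_net_info(push_status: dict, vp_bind_ip: str) -> dict:
    """Point the slicer's FTP uploads at the VP instead of the real printer."""
    for entry in push_status.get("net", {}).get("info", []):
        entry["ip"] = ip_to_le_uint32(vp_bind_ip)
    return push_status

assert ip_to_le_uint32("192.168.255.133") == 2248124608
```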
  • AMS slot Load / Unload from the printer card (#891, reported by NNeerr00, +1 from cadtoolbox) — The MQTT primitives for "load filament from a tray" and "unload the currently loaded tray" already existed in bambu_mqtt.py (reverse-engineered from BambuStudio captures, including the H2D dual-extruder right-external case captured fresh during this work) but were unused — there was no HTTP route and no UI. Net effect: every Load / Unload had to happen on the printer touchscreen, and external-spool users on dual-nozzle H2D had no way to drive Ext-R from the desktop at all. Backend: new POST /printers/{id}/ams/load?tray_id={int} and POST /printers/{id}/ams/unload, both gated on Permission.PRINTERS_CONTROL. The load route validates tray_id ∈ {0..15, 254, 255} (AMS slots, single-external/Ext-L, Ext-R respectively) and returns a human-readable target in the success message ("AMS 0 slot 1", "external spool", "Ext-R") so the UI toast tells the user which spool the printer is now feeding from. MQTT primitive update: ams_load_filament gains a third encoding branch for tray_id=255 matching the BambuStudio capture verbatim — ams_id=255, slot_id=0 (the right-extruder index, not a slot index — Bambu's load command on dual-extruder externals encodes the destination extruder, not the source slot), target=255, and curr_temp = tar_temp = right-nozzle temp (read from state.temperatures["nozzle_2"], falling back to 215 °C if the right nozzle is cold or unknown — the printer rejects nonsensical temps, so a warm fallback is safer than -1). The existing tray_id=254 branch is preserved verbatim (slot_id=254, curr/tar=-1) since that came from a single-extruder capture and is known to work; no risk of regression on existing single-external setups. UI: the existing AMS slot popover (the one with "Re-read RFID") gains two new entries — "Load" (posts tray_id = ams.id * 4 + slotIdx) and "Unload" (no params, global on the currently-loaded slot). The external spool slot — which had no popover at all before — gets one with the same Load + Unload entries, and on dual-nozzle H2D each external slot (Ext-L tray_id=254, Ext-R tray_id=255) drives its own extruder. The menu is hidden while state === 'RUNNING' (parallels the existing RFID re-read gating). i18n: printers.ams.load, printers.ams.unload, plus four new toast strings (loadInitiated, unloadInitiated, failedToLoad, failedToUnload) added to all 8 locales — English fully translated, German fully translated, the other 6 locales seeded with English copy pending native translation (matches the project's existing flow for newly-added user-facing features). 16 new backend tests pin the contract: 5 unit tests in test_bambu_mqtt.py::TestAmsLoadFilamentEncoding (AMS slot encoding, Ext-L preserves legacy capture, Ext-R uses the new captured shape with actual right-nozzle temp, Ext-R falls back to 215 °C when cold, disconnected client doesn't publish); 11 integration tests in test_printers_api.py::TestAMSLoadUnloadAPI (load: invalid tray_id 400, not-found 404, not-connected 400, AMS slot success with derived ams_id*4+slot math, Ext-L success, Ext-R success, MQTT failure 500; unload: not-found, not-connected, success, MQTT failure 500); plus 4 frontend tests in PrintersPageAmsLoadUnload.test.tsx (Load posts the right tray_id, Unload posts with no params, menu hidden while RUNNING, external spool's tray_id=254 round-trips through the route).
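A minimal sketch of the tray_id validation and the human-readable target naming described above (function and constant names are illustrative):

```python
VALID_TRAY_IDS = set(range(16)) | {254, 255}   # AMS slots 0-15, Ext-L, Ext-R

def describe_load_target(tray_id: int) -> str:
    if tray_id not in VALID_TRAY_IDS:
        raise ValueError(f"invalid tray_id {tray_id}")
    if tray_id == 254:
        return "external spool"        # single-external / Ext-L
    if tray_id == 255:
        return "Ext-R"                 # dual-extruder right external
    ams_id, slot = divmod(tray_id, 4)  # UI posts tray_id = ams.id * 4 + slotIdx
    return f"AMS {ams_id} slot {slot}"

assert describe_load_target(1) == "AMS 0 slot 1"
```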
  • API keys can read Bambu Cloud presets on the owner's behalf (#1182, reported by turulix) — Tim is building a fully automated headless slicing pipeline against Bambuddy's API and hit the wall flagged in the previous round of cloud-auth work (#665): /cloud/* routes resolve cloud_token per-user from User.cloud_token, but the auth gate (require_permission_if_auth_enabled, auth.py:856) returned None for API-keyed requests, so the route fell back to the global Settings-table token, which only carries a value in auth-disabled deployments. Net effect on auth-enabled deployments: API keys reached the gate just fine, then /cloud/filaments always saw user=None, called get_stored_token(db, None) against an empty Settings table, and returned 401 / empty results — no path to read the slicer presets, filament catalogue, or device list that a CLI workflow needs. The data model treated API keys as standalone tokens with no owner (APIKey had id, name, key_hash, scope flags, and printer_ids — no user_id), so even if the gate wanted to delegate the cloud lookup, there was no User to delegate to. The fix: make API keys carry an owner, route /cloud/* lookups through that owner, and gate the new capability behind an explicit opt-in scope so existing automation doesn't gain cloud-read access on upgrade. Concretely: (1) APIKey gains user_id (FK to users.id, ON DELETE CASCADE — Postgres enforces, SQLite plus an explicit DELETE FROM api_keys WHERE user_id = ? in the user-delete route since SQLite ships FK enforcement off; the project's existing pattern at users.py:397-406 for created_by_id cleanup) and can_access_cloud (BOOLEAN DEFAULT 0 — opt-in, never set on legacy rows). (2) The auth gate now returns the owner User when it validates an API key with user_id set, so /cloud/* routes naturally resolve user.cloud_token the same way they do for JWT-authed sessions. Permission semantics are preserved — API keys still bypass the per-route permission check (their scopes live on the row itself), the User return is only so cloud-aware routes can read per-user state. Legacy ownerless keys (user_id IS NULL) keep returning None, stay anonymous, and continue working against every non-cloud route exactly as before. (3) A router-level dependency on the /cloud/* APIRouter enforces three independent fences for API-keyed callers: user_id IS NOT NULL (legacy keys → 401 with "recreate it from Settings → API Keys" — explicit recreate path rather than silently degrading), can_access_cloud=True (otherwise 403 with "Enable 'Allow cloud access' on the key"), and build_authenticated_cloud returning a service (otherwise 401 with the existing token-not-set error — unchanged for JWT flow). The router-level dep duplicates the API-key validation done by the regular auth gate (router-level deps run before route-level deps in FastAPI, so request.state isn't populated yet) — the cost is one extra SELECT FROM api_keys per cloud request, bounded and cheap with the key_prefix index. (4) The create route stamps user_id = current_user.id from the creator and rejects can_access_cloud=True when auth is disabled (no per-user cloud_token storage exists in that mode — fail loudly at create time rather than silently producing a non-functional key). PATCH route rejects flipping can_access_cloud to True on a legacy ownerless key for the same reason — force recreate. 
(5) APIKeyResponse exposes user_id so the UI can show ownership at a glance: a "Cloud" badge for cloud-enabled keys and a "Legacy" badge with hover tooltip ("Created before per-user ownership; recreate to use cloud access") for ownerless rows. The form gains an "Allow cloud access" checkbox, default off. Migration: two idempotent ALTER TABLE api_keys ADD COLUMN (user_id INTEGER REFERENCES users(id) ON DELETE CASCADE and can_access_cloud BOOLEAN DEFAULT 0) plus an index on user_id for the auth-gate's owner→keys lookup that runs on every API-keyed request. i18n: 5 new keys (settings.cloudAccess, settings.cloudAccessDescription, settings.cloudBadge, settings.legacyKey, settings.legacyKeyTooltip) added to all 8 locales — English fully translated, German fully translated, the other 6 locales seeded with English copies pending native translation (matches the project's existing flow for newly-added user-facing features). 9 backend integration tests in test_api_key_cloud_access.py: create stamps owner + cloud flag, defaults off when not asked for, rejected when auth disabled (no per-user storage), PATCH rejected on legacy keys; cloud router rejects legacy keys with the recreate copy, rejects owned-but-no-cloud-flag keys with the enable-cloud-access copy, lets owned-and-flagged keys through with owner's cloud_token in the response, JWT callers unaffected (gate is no-op for non-API-keyed); user-delete CASCADEs the API keys via the explicit DELETE in the route. 2 frontend SettingsPage tests pin the badge rendering matrix (Cloud badge present on can_access_cloud=true, Legacy badge present on user_id=null, neither rendered on a normal owned non-cloud key) and the create-form contract (toggling "Allow cloud access" results in can_access_cloud=true in the POST body). Permission semantics for the new fence are the only behavioural change for existing API keys: keys created before this release become "legacy" rows and are rejected at /cloud/* with the recreate message; every other endpoint they were used against — queue, status, control — is untouched.
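A hedged sketch of the three fences for API-keyed callers (names and lookup plumbing are illustrative; the real dependency re-validates the key itself because router-level deps run before route-level ones):

```python
from fastapi import HTTPException

def enforce_cloud_access(api_key, owner):
    """api_key: the validated APIKey row, or None for JWT/anonymous callers;
    owner: the User referenced by api_key.user_id, or None."""
    if api_key is None:
        return owner                   # JWT flow unchanged: per-user token resolution
    if api_key.user_id is None:        # fence 1: legacy ownerless key
        raise HTTPException(401, "This API key predates per-user ownership; "
                                 "recreate it from Settings → API Keys.")
    if not api_key.can_access_cloud:   # fence 2: owner set, cloud scope not opted in
        raise HTTPException(403, "Enable 'Allow cloud access' on the key.")
    return owner                       # fence 3 (a usable cloud_token) stays in the route
```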
  • Home Assistant addon detection — Settings → Updates and the in-app update banner now defer to the HA Supervisor (#1167, reported by Spegeli) — Bambuddy already shipped HA_URL/HA_TOKEN env-var support specifically labelled "for HA Add-on deployments" (#283) and a community-maintained HA addon (hobbypunk90/homeassistant-addon-bambuddy) exists upstream, so an HA-supervised installation is a real first-class deployment shape. Until now though, the update UI didn't know about it: HA addon users got the same "Update available!" banner as everyone else and, if they clicked through to Settings, saw the docker-compose snippet ("docker compose pull && docker compose up -d") which they cannot run from inside an HA addon container — that's the Supervisor's job. Detection uses the canonical signal: HA Supervisor injects SUPERVISOR_TOKEN into every addon container, and that variable is not set in any other environment. A new _is_ha_addon() helper in backend/app/api/routes/updates.py flips a request-level boolean which /updates/check surfaces as is_ha_addon: bool + an extended update_method: 'git' | 'docker' | 'ha_addon' enum. The HA check runs before the Docker check on /updates/apply because HA addons are Docker containers — checking docker first would mis-classify them and serve the wrong message; the response also keeps is_docker: true alongside is_ha_addon: true so older frontend bundles still hit a managed-deployment branch (degrading to the Docker UX) instead of rendering an in-app Install button that can't work. Frontend branches identically: SettingsPage.tsx's update card checks is_ha_addon first and renders "Updates are managed by the Home Assistant Supervisor. Open Settings → Add-ons → Bambuddy in Home Assistant to install the new version." in place of the docker-compose hint; Layout.tsx's update banner is suppressed entirely for HA addons since the HA Supervisor's own update notification already surfaces the new version natively in the HA UI and a duplicate Bambuddy banner would just be noise that links to a page that says "go to HA". Plain Docker deployments are unaffected — the existing docker-compose hint and the in-app banner still render the same way they did. Localised across all 8 UI languages (en/de/fr/it/ja/pt-BR/zh-CN/zh-TW) with full translations of the new settings.updateViaHomeAssistant string. 6 new backend tests pin the contract: 3 backend unit tests for _is_ha_addon() (env var present → true, absent → false, empty string treated as unset to guard against shells that export it empty), 1 backend integration test for the HA-precedes-Docker rejection on /updates/apply (asserts the message says "Home Assistant" and not "Docker Compose"), 2 backend integration tests for /updates/check covering the HA-addon branch (update_method == "ha_addon", both flags true) and the plain-Docker branch (is_ha_addon: false, update_method == "docker"); plus 2 frontend SettingsPage tests pin the mutually-exclusive UI rendering (HA branch shows the HA copy and not the docker-compose snippet; Docker branch shows the snippet and not the HA copy, neither shows the Install button); 2 frontend Layout tests pin the banner suppression for HA and its retention for plain Docker.
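A minimal sketch of the detection and ordering described above (an empty SUPERVISOR_TOKEN counts as unset; detect_update_method is an illustrative name):

```python
import os

def _is_ha_addon() -> bool:
    return bool(os.environ.get("SUPERVISOR_TOKEN"))

def detect_update_method(is_docker: bool) -> str:
    if _is_ha_addon():
        return "ha_addon"    # must come first: HA addons ARE Docker containers
    return "docker" if is_docker else "git"
```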
  • OIDC auto-created users now get readable usernames and land in a configurable group (#1173) — Two improvements to the OIDC auto-create flow: (1) Username derivation: Bambuddy now derives the username from preferred_username, then name, before falling back to the opaque provider_sub[:30]. Each candidate is sanitized independently — alphanumeric plus ./-/_, whitespace collapsed, deduplication suffix appended on collision — so a value that strips to empty (e.g. "!!!") correctly falls through to the next option rather than silently producing "oidcuser". (2) Default group: each OIDC provider gains a default_group_id field. When set, auto-created users are placed in that group; when unset, the existing "Viewers" fallback is preserved, so behaviour is unchanged for existing deployments. The column is nullable with ON DELETE SET NULL; SQLite does not enforce FK constraints here, so a deleted configured group falls through to Viewers at runtime. default_group_id is validated on create/update (422 on a non-existent group). Exposed in the OIDC settings form as a group dropdown. Limitation: to clear a configured default group, delete the group or select a different one — explicit reset-to-null is not currently supported.
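A sketch of the claim-fallback chain, assuming a sanitiser along these lines (the exact character handling and the DB-backed collision suffix are simplified away):

```python
import re

def _sanitize(candidate: str | None) -> str | None:
    if not candidate:
        return None
    candidate = re.sub(r"\s+", " ", candidate).strip()        # collapse whitespace
    candidate = re.sub(r"[^A-Za-z0-9._\- ]", "", candidate)   # keep alnum plus . _ -
    candidate = candidate.replace(" ", "_")[:30]
    return candidate or None   # "!!!" strips to empty -> caller falls through

def derive_username(claims: dict, provider_sub: str) -> str:
    for value in (claims.get("preferred_username"), claims.get("name")):
        cleaned = _sanitize(value)
        if cleaned:
            return cleaned
    return provider_sub[:30]   # opaque but unique fallback
```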
  • Filament Track Switch (FTS) support — print modal filament dropdown is no longer empty when an X2D / H2D has the FTS accessory installed (#1162, reported by mkavalecz) — When the FTS accessory is installed the printer's MQTT changes one nibble of the per-AMS info bitmask: bits 8-11 flip from a fixed extruder ID (0x0 / 0x1) to 0xE ("uninitialized"), because the AMS is no longer wired to a single nozzle — the FTS dynamically routes any slot to either extruder. Bambuddy's MQTT parser already skipped 0xE entries when building ams_extruder_map (matching BambuStudio's reading for boot-time transient state), so with the FTS installed the map ended up empty and the print modal's filament dropdown — which filters by extruderId === nozzle_id to prevent cross-nozzle assignment ("position of left hotend is abnormal" failures) — filtered out every loaded slot. Net effect: empty Filament Mapping dropdown on every dual-nozzle print with the FTS, even when the AMS was fully loaded with the right material. Detection comes from a new MQTT field — print.device.fila_switch — which is non-null only when the accessory is installed; it carries the routing topology as two arrays: in[track] = currently fed slot (-1 = empty) and out[track] = extruder this track terminates at. The fix surfaces this through a new FilaSwitchState dataclass on PrinterState (installed, in_slots, out_extruders, stat, info) and the equivalent FilaSwitchResponse Pydantic schema on the GET /printers/{id}/status route. Frontend (useFilamentMapping.ts + FilamentMapping.tsx) skips the per-extruder filter when printerStatus.fila_switch?.installed === true so any compatible AMS slot can satisfy any nozzle's filament requirement, since the FTS handles the routing. Slots currently fed into a track also get a routing badge in the dropdown — [L] or [R] — so the user can tell at a glance which slot the FTS is currently routing where (idle slots get no badge: they can be routed to either extruder on demand). The hard "no cross-nozzle assignment" filter on real dual-nozzle printers without the FTS stays untouched (still trips the same way it always has — fila_switch == null keeps the existing behaviour). 4 backend tests in test_bambu_mqtt.py::TestFilamentTrackSwitchDetection (default-not-installed, detect-from-MQTT-using-the-reporter's-bundle, no-fila_switch-field-stays-not-installed, missing-in-out-arrays-don't-crash) and 2 frontend tests in useFilamentMapping.test.ts (FTS-active drops the nozzle filter; explicit fila_switch: null keeps the filter applied). Upstream fila_switch payloads with anything other than the documented shape are tolerated — installed flips on the presence of the field, the routing arrays default to empty lists if missing, and the dropdown skips the badge for slots not currently in in_slots.
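A reduced sketch of the detection described above (the real FilaSwitchState also carries stat and info; field names follow the entry):

```python
from dataclasses import dataclass, field

@dataclass
class FilaSwitchState:
    installed: bool = False
    in_slots: list[int] = field(default_factory=list)       # in[track]: slot currently fed, -1 = empty
    out_extruders: list[int] = field(default_factory=list)  # out[track]: extruder the track terminates at

def parse_fila_switch(device: dict) -> FilaSwitchState:
    fs = device.get("fila_switch")
    if fs is None:                        # accessory absent: existing behaviour unchanged
        return FilaSwitchState()
    return FilaSwitchState(
        installed=True,                   # presence of the field is the signal
        in_slots=list(fs.get("in", [])),  # missing arrays tolerated, default to empty
        out_extruders=list(fs.get("out", [])),
    )

def extruder_nibble(ams_info_bitmask: int) -> int:
    """Bits 8-11 of the per-AMS info bitmask: 0x0/0x1 = fixed extruder, 0xE = FTS-routed."""
    return (ams_info_bitmask >> 8) & 0xF
```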

Fixed

  • MakerWorld P2S 3MFs failed to slice with "Param values in 3mf/config error: -1 not in range" (#1201, reported by inorichi) — Slicing any MakerWorld model sliced for the P2S (e.g. https://makerworld.com/en/models/1958872) bombed with Slicer process failed (exit code 238) and stderr listing raft_first_layer_expansion: -1 not in range [0.0, 3.4e+38] and tree_support_wall_count: -1 not in range [0.0, 2.0]. Root cause: BambuStudio writes "-1" into Metadata/project_settings.config for fields the user wants inherited from the parent process preset — the GUI handles this internally, but the headless CLI (orca-slicer-api / bambu-studio-api sidecar) runs StaticPrintConfig's range validator against the embedded settings before the --load-settings overrides apply, so the sentinel "-1" trips the field's lower-bound check and the CLI exits non-zero before our profile triplet is ever consulted. The slice_with_profiles path failed; the fallback to slice_without_profiles (which uses embedded settings only) also failed because it reads the same project_settings.config and the same validator runs there too. Earlier in the codebase there's a _strip_3mf_embedded_settings function that tried to dodge this by removing the entire project_settings.config (plus model_settings.config, slice_info.config, cut_information.xml); that experiment was reverted because the strip broke StaticPrintConfig initialisation — silent exit-0, no result.json, no stderr, masked by the fallback retry which then produced wrong-printer output without telling anyone (the cautionary comment in library.py:_run_slicer_with_fallback records the lesson). Fix is surgical: new _sanitize_project_settings_sentinels(zip_bytes) opens the embedded config, removes only allowlisted keys when their value is exactly "-1", and re-zips. Allowlist (_PROJECT_SETTINGS_SENTINEL_KEYS) starts with the two from this report (raft_first_layer_expansion, tree_support_wall_count) plus prime_tower_brim_width (a known sentinel cited in the strip-experiment comment block from earlier reports). Other fields — including non-allowlisted keys that happen to hold "-1" (e.g. z_offset set to -1 deliberately by a user) — are left untouched, so a blanket "-1 strip" can't silently corrupt legitimate negative values. The sanitiser runs before both the profile-driven path and the embedded-settings fallback, since both fail on the same input. Defensive fallbacks: returns the original bytes unchanged when the input isn't a valid zip, doesn't contain project_settings.config, has no allowlisted sentinels present, the JSON is malformed, or the config root isn't a dict — so the caller can pass the result on without further checks. Geometry, thumbnails, color, multi-part data, and every other zip entry round-trip byte-identical (the previous full-strip experiment's failure mode can't reoccur). 
13 new unit tests in test_project_settings_sentinel_sanitiser.py pin the contract: each allowlisted key removed when value is "-1" (parametrised across the allowlist); multiple sentinels removed at once; allowlisted key with legitimate non-sentinel value ("0") preserved; non-allowlisted key holding "-1" (z_offset) preserved; identity return when nothing needs sanitising; array-form values (per-filament/per-extruder lists) left alone (v1 handles scalar strings only, expand later if needed); other zip entries (model_settings.config, slice_info.config, _rels metadata, geometry) all preserved with byte-identical content; non-zip input passes through; missing project_settings.config passes through; malformed JSON passes through; non-dict JSON root passes through. Adding new sentinel keys: if a future report surfaces another field name in the slicer's <field>: -1 not in range [...] error, add the field to _PROJECT_SETTINGS_SENTINEL_KEYS — the rest of the code stays unchanged.
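A hedged sketch of the sanitiser's shape (the allowlist and the pass-through-on-failure behaviour follow the entry; the re-zip details and the config's indentation are assumptions):

```python
import io
import json
import zipfile

_PROJECT_SETTINGS_SENTINEL_KEYS = {
    "raft_first_layer_expansion",
    "tree_support_wall_count",
    "prime_tower_brim_width",
}
_CONFIG_PATH = "Metadata/project_settings.config"

def sanitize_project_settings_sentinels(zip_bytes: bytes) -> bytes:
    try:
        with zipfile.ZipFile(io.BytesIO(zip_bytes)) as src:
            if _CONFIG_PATH not in src.namelist():
                return zip_bytes
            settings = json.loads(src.read(_CONFIG_PATH))
            if not isinstance(settings, dict):
                return zip_bytes
            doomed = [k for k in _PROJECT_SETTINGS_SENTINEL_KEYS if settings.get(k) == "-1"]
            if not doomed:
                return zip_bytes                     # identity return: nothing to sanitise
            for key in doomed:
                del settings[key]                    # only allowlisted keys holding exactly "-1"
            out = io.BytesIO()
            with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as dst:
                for item in src.infolist():          # every other entry round-trips unchanged
                    data = src.read(item.filename)
                    if item.filename == _CONFIG_PATH:
                        data = json.dumps(settings, indent=4).encode()
                    dst.writestr(item, data)
            return out.getvalue()
    except (zipfile.BadZipFile, json.JSONDecodeError):
        return zip_bytes                             # non-zip / malformed input passes through untouched
```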
  • Archive created with wrong plate metadata when consecutive plates of the same model are printed back-to-back (#1204, reported by BurntOutHylian) — Print Plate 2 of any multi-plate project, let it complete, then immediately print Plate 1: the resulting archive was named "MyModel - Plate 2" with Plate 2's filament slots and slicer estimate, even though Plate 1 was the print actually running. Root cause was an MQTT lag in the print_start data: the trigger fires on a gcode_file change (bambu_mqtt.py:2781-2786 — the field carrying /data/Metadata/plate_N.gcode, which is plate-specific and always fresh), but subtask_name (model-level, e.g. "MyModel - Plate 2") can still echo the previous job in the same MQTT batch. The FTP candidate list in main.py:1974 is built from subtask_name first, so the previous Plate 2 upload — still resident on the printer's FTP from the just-completed print — got picked up and fed into archive creation. The 3MF parser then read _plate_index=2 from the wrong file's slice_info.config and locked Plate 2's name + estimate + per-slot filament data into the row at creation, with no follow-up to correct. Reporter BurntOutHylian's diagnosis nailed it: the parser already extracts _plate_index from inside the 3MF (archive.py:154), and parse_plate_id() (printer_manager.py:678) already extracts the plate from gcode_file — those two values just weren't being compared. Fix: new helpers peek_plate_index_in_3mf() (cheap zip read of Metadata/slice_info.config only, returning the plate index) and swap_plate_suffix() (rewrites trailing " - Plate N" or "_plate_N" — both forms appear in real subtask_names, see test_print_start_expected_promotion) in archive.py. After a successful FTP download in _handle_print_start, the new validation block in main.py peeks the downloaded 3MF's plate index, compares against parse_plate_id(filename), and on mismatch retries the FTP fetch with a corrected subtask_name. If the retry finds a 3MF whose plate matches, the wrong file is dropped and the corrected one is used — archive name + estimate + slots all reflect the actual plate. If the retry can't find a matching file (or no swap is possible because subtask_name had no plate suffix to swap), the wrong 3MF is dropped and the existing no-3MF fallback (main.py:2155) creates an archive without metadata; the stale subtask_name is overridden to the corrected one (or cleared so filename wins) so the fallback's print_name at least reflects the right plate rather than locking in a misleading name. The validation only fires when parse_plate_id(filename) returns a value, so single-plate / non-Bambu / cloud-named jobs are unaffected. Defence in depth: the cache eviction is implicit — temp_path.unlink() makes the wrong-file cache entry self-clean on next access via the existing get_cached_3mf evict-on-miss path (bambu_ftp.py:660-664); no separate cache invalidation needed. 
17 new unit tests in test_archive_plate_validation.py pin the helpers: peek_plate_index_in_3mf returns the index for a valid 3MF, None for missing slice_info, None for missing index metadata, None for non-zip files, None for missing files, None for non-integer index values; swap_plate_suffix handles the spaced "Plate N" form (capitalised + lowercase + tight-hyphen), the underscored "_plate_N" form (the Box3.0_(2)_plate_5 case from the existing fixture), case-insensitive matching, returns None for names without a recognised suffix, returns None for None input, and preserves separator casing so the corrected name matches what BambuStudio actually uploaded.
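A sketch of the two helpers, assuming slice_info.config carries the plate index in the usual <metadata key="index" value="N"/> shape and that the regex below covers the two cited suffix forms:

```python
import re
import zipfile
from typing import Optional

def peek_plate_index_in_3mf(path: str) -> Optional[int]:
    """Cheap zip read of Metadata/slice_info.config only; None on any failure."""
    try:
        with zipfile.ZipFile(path) as zf:
            info = zf.read("Metadata/slice_info.config").decode("utf-8", "replace")
    except (OSError, zipfile.BadZipFile, KeyError):
        return None
    match = re.search(r'key="index"\s+value="(\d+)"', info)
    return int(match.group(1)) if match else None

def swap_plate_suffix(subtask_name: Optional[str], plate: int) -> Optional[str]:
    """Rewrite a trailing ' - Plate N' or '_plate_N'; None when there is nothing to swap."""
    if not subtask_name:
        return None
    new_name, hits = re.subn(r"(?i)((?:\s-\s|_)plate[ _])\d+\s*$",
                             lambda m: f"{m.group(1)}{plate}", subtask_name)
    return new_name if hits else None

assert swap_plate_suffix("MyModel - Plate 2", 1) == "MyModel - Plate 1"
assert swap_plate_suffix("Box3.0_(2)_plate_5", 2) == "Box3.0_(2)_plate_2"
```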
  • SpoolBuddy kiosk screen never blanked while a load cell was producing noisy readings (reported during user testing) — A noisy HX711 / load-cell mount that bounced the reported weight by ≥50 g around its midpoint kept the kiosk display permanently lit. The wake gate in spoolbuddy/daemon/main.py:scale_poll_loop (WAKE_THRESHOLD = 50) checked the absolute change against last_wake_grams and, on every trip, advanced last_wake_grams to the new noisy reading — so the next bounce back also exceeded the threshold, fired display.wake() again, and the screen never stayed off long enough for swayidle's wlopm --off HDMI-A-1 to mean anything. Symptom in the field: ~3–30 s between Wake signal sent via FIFO log lines, exactly correlated with the bigger noise spikes, screen flicker-blanking and immediately turning back on. Diagnosis from a real device's journalctl -u spoolbuddy.service: scale/reading POSTs every ~1 s (REPORT_THRESHOLD=2 g, so the load cell was reporting ≥2 g changes constantly) interleaved with periodic wake signals. Fix: the wake gate now requires the scale's stable flag (True only when consecutive readings agree within 2 g over a 1 s window — already produced by ScaleReader.read() and previously only forwarded as telemetry to the backend). Unstable noise can no longer fire wake AND can no longer poison last_wake_grams, since the threshold check + the assignment are both gated on stable. Real spool placements / removals produce a settled post-event reading and continue to wake the screen as intended. 3 new regression tests in spoolbuddy/tests/test_main.py::TestScalePollLoopWakeGating: noisy ±60 g unstable readings never wake (the original bug); a settled >50 g jump wakes; a noise burst between two settled readings doesn't poison last_wake_grams (asserts the second stable wake still fires from the original baseline rather than the noisy peak).
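A minimal sketch of the stable-gated wake check (the constant name follows the entry; the surrounding poll loop is simplified away):

```python
WAKE_THRESHOLD = 50  # grams

def maybe_wake(grams: float, stable: bool, last_wake_grams: float, wake) -> float:
    """Return the (possibly advanced) last_wake_grams baseline."""
    if not stable:
        return last_wake_grams            # noise can neither wake nor poison the baseline
    if abs(grams - last_wake_grams) >= WAKE_THRESHOLD:
        wake()                            # settled spool placement / removal
        return grams                      # baseline advances only on a stable reading
    return last_wake_grams
```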
  • Print-complete notification reported the slicer's pre-print estimate instead of the actual elapsed time (#1198, reported by BurntOutHylian) — _background_notifications in main.py:3434 built archive_data for the completion notification with print_time_seconds (the slicer's estimate parsed from the 3MF at archive creation), and notification_service.py:909-910 then formatted that field straight into the {{duration}} template variable. Net effect: a print cancelled 2 minutes into a 3-hour estimate told the user "duration: 3h" — wrong by orders of magnitude for any cancellation, abort, slow first layer, or any print whose actual elapsed diverged from the slicer's guess. The companion field actual_filament_grams was already scaled by progress for partial prints (line 3445), so filament was right while time was wrong. The print_start notification uses a separate {{estimated_time}} variable (line 838), so {{duration}} semantically should always have meant "actual elapsed" — it was just being read from the wrong source. Two-part fix: (1) main.py:3434 now computes actual_time_seconds = int((archive.completed_at - archive.started_at).total_seconds()) from the persisted timestamps when both are present and the elapsed is positive, and adds it as a new key in archive_data; notification_service.py:909-916 prefers actual_time_seconds and falls back to print_time_seconds only when timestamps weren't recorded (so the notification still has something if the elapsed can't be derived). (2) main.py:3172 adds "cancelled" to the set of statuses that get completed_at set when update_archive_status runs — pre-fix only completed, failed, aborted got a timestamp, but cancelled (Bambuddy queue UI cancellation, distinct from touchscreen-aborts which already set completed_at) was deliberately excluded for reasons that no longer hold. Audited every completed_at consumer in backend (archives.py:80, 333-337, 768-770, 723-731, 1722-1813, main.py:3229, projects.py:1475, 1489) and frontend (PrintersPage.tsx:2854, QueuePage.tsx:1053, StatsPage.tsx:902); none rely on completed_at IS NULL to mean "this is a cancelled print" — the three explicit-status filters already restrict to status == "completed" and the rest are completed_at or created_at fallback expressions that gracefully accept either. Knock-on benefit: the statistics-totals aggregation at archives.py:723-731 (which currently adds the full slicer estimate to the total when completed_at IS NULL) now adds the actual elapsed for cancelled prints too — a 2-minute cancellation contributes 2 minutes instead of 3 hours. Existing cancelled rows in the DB stay with completed_at=NULL; only new cancellations going forward get the timestamp. 3 new regression tests in test_notification_service.py::TestNotificationVariableFallbacks pin the contract: {{duration}} reflects actual_time_seconds when present (2m elapsed wins over 3h estimate), falls back to print_time_seconds when actual is missing (1h estimate still surfaced rather than "Unknown"), and surfaces "Unknown" when both are absent.
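A sketch of the preference order for {{duration}} (field names follow the entry; the notification plumbing around it is omitted):

```python
from datetime import datetime
from typing import Optional

def compute_actual_time_seconds(started_at: Optional[datetime],
                                completed_at: Optional[datetime]) -> Optional[int]:
    if not started_at or not completed_at:
        return None
    elapsed = int((completed_at - started_at).total_seconds())
    return elapsed if elapsed > 0 else None          # guard against clock weirdness

def resolve_duration_seconds(archive_data: dict) -> Optional[int]:
    actual = archive_data.get("actual_time_seconds")
    if actual is not None:
        return actual                                # real elapsed time wins
    return archive_data.get("print_time_seconds")    # slicer estimate only as a fallback
```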
  • Frontend served behind a path-prefixed reverse proxy (e.g. /bambuddy/ on Traefik / nginx / Cloudflare Tunnel) loaded a blank page (#1195, reported by Spegeli, follow-up to #1167) — Vite's default base: '/' emits absolute asset URLs in the built index.html (/assets/index-*.js, /assets/index-*.css, /manifest.json, /img/..., /sw-register.js), which assumes the SPA is always served at the host root. Behind any path-prefixed reverse proxy — Traefik with a path prefix, nginx location /bambuddy/, Cloudflare Tunnel with path routing, Synology / Unraid reverse-proxy panels — the browser then requests those absolute paths from the host root, the proxy doesn't see them, and the upstream serves either a 404 or HTML for an unknown path with Content-Type: text/plain or text/html; the browser logs Refused to apply style from '.../assets/index-*.css' because its MIME type is 'text/plain' and renders a blank white page. Two-line fix: frontend/vite.config.ts sets base: '' so Vite's HTML transform rewrites every absolute asset reference to relative (./assets/..., ./manifest.json, ./img/..., ./sw-register.js) — these resolve correctly against whatever subpath the document was served from. frontend/public/sw-register.js is a public-dir file Vite copies as-is, so its navigator.serviceWorker.register('/sw.js') call is changed to register('sw.js') (relative); the SW scope is automatically pinned to whatever subpath the document loaded from, which is exactly what every reverse-proxy-at-subpath user wants. Net effect: an https://example.com/bambuddy/ deployment now loads correctly without any frontend rebuild on the user's side. Out of scope for this change: runtime API base detection — API_BASE = '/api/v1' in frontend/src/api/client.ts is still absolute, so API calls still go to the host root. This is intentional. The fix above closes the immediate "blank page" report; making the API base, React Router basename, PWA manifest scope, and service-worker scope all subpath-aware would mean rewriting how the SPA bootstraps and would touch PWA-install state, push-notification subscriptions, and deep-link reload semantics. The supported way to embed Bambuddy in Home Assistant remains the Webpage panel + TRUSTED_FRAME_ORIGINS path documented in the wiki — Bambuddy reachable on a stable URL (HTTP for HTTP-only HA, HTTPS via your own reverse proxy for HTTPS HA / Nabu Casa / custom-domain), iframe-embedded via the HA dashboard. HA Ingress / addon-based subpath embedding (which would require the runtime path detection above) is not supported by core. Documented explicitly in docker.md so users hit the right pattern first.
  • iframe embedding from trusted origins (e.g. Home Assistant Webpage panel) no longer blocked (#1191, reported by azurusnova) — Bambuddy ships strict anti-clickjacking headers (X-Frame-Options: SAMEORIGIN and CSP frame-ancestors 'none') by default, which protects internet-exposed deployments from being embedded by hostile sites. But it also broke a documented integration path: Home Assistant's Webpage dashboard panel embeds Bambuddy via <iframe> on a different origin (HA on :8123, Bambuddy on :8000), and the SAMEORIGIN value is port-strict, so even same-LAN trusted setups got "refused to connect". A new TRUSTED_FRAME_ORIGINS env var takes a comma-separated list of scheme://host[:port] origins; when set, the middleware drops X-Frame-Options (modern browsers honor frame-ancestors, and the legacy ALLOW-FROM <url> syntax is deprecated and inconsistent across vendors) and the CSP frame-ancestors directive becomes 'self' <origin> <origin>.... The default — empty env var — keeps the strict 'none' behavior, so Docker / bare-metal users without HA see no behavioural change. Origin validation happens at startup: only http:// and https:// are accepted, paths/query/fragments/wildcards are rejected with a warning (one bad entry doesn't take the deployment down — it's just dropped from the allowlist). The gcode-viewer route's frame-ancestors 'self' (same-origin embed for the in-app gcode preview iframe) also includes the allowlist when configured, so HA users embedding Bambuddy can still open the gcode viewer modal. 16 new tests in test_security_headers.py: 12 unit tests for the env-var parser (empty / unset / single / multiple / whitespace / empty-segment / non-http scheme dropped / missing host dropped / path dropped / query+fragment dropped / wildcard dropped / trailing-slash kept) and 4 integration tests for the middleware (default-strict emits SAMEORIGIN + 'none', allowlist relaxes CSP and drops X-Frame-Options, /docs branch also honors the allowlist, other security headers like X-Content-Type-Options and Referrer-Policy are unaffected in both modes). Documented in the Docker env-var reference page on the wiki and in .env.example.
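A hedged sketch of the allowlist parsing and the resulting CSP directive (the env-var name and the drop-don't-fail semantics follow the entry; the validation details are illustrative):

```python
import os
from urllib.parse import urlparse

def parse_trusted_frame_origins(raw: str | None = None) -> list[str]:
    raw = raw if raw is not None else os.environ.get("TRUSTED_FRAME_ORIGINS", "")
    origins: list[str] = []
    for entry in (part.strip() for part in raw.split(",")):
        if not entry:
            continue                                  # empty segments ignored
        parsed = urlparse(entry)
        bad = (
            parsed.scheme not in ("http", "https")    # only http/https origins
            or not parsed.hostname                    # must carry a host
            or parsed.path not in ("", "/")           # no paths...
            or bool(parsed.query or parsed.fragment)  # ...no query/fragment
            or "*" in entry                           # ...no wildcards
        )
        if bad:
            continue                                  # one bad entry is dropped (and warned about), never fatal
        origins.append(entry.rstrip("/"))
    return origins

def frame_ancestors_value(origins: list[str]) -> str:
    if not origins:
        return "frame-ancestors 'none'"               # default stays strict
    return "frame-ancestors 'self' " + " ".join(origins)
```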
  • Virtual Printer queue mode auto-dispatched onto the wrong colour when multiple compatible printers were available (#1188, reported by EdwardChamberlain) — Sending a sliced 3MF to a queue-mode VP via Orca / Studio with auto-dispatch on caused Bambuddy to schedule the job onto a printer of the right model but the wrong loaded filament: a print sliced for matte white PLA would land on a printer with no white loaded, and the printer would start the job using whatever was the closest available match. Edward's diagnosis was exact (virtual_printer/manager.py:325-326): the manual /api/v1/print-queue/ POST flow extracts the 3MF's per-slot filament requirements at queue-add time and writes required_filament_types, filament_overrides, and ams_mapping on the resulting PrintQueueItem, so the scheduler's color-match enforcement (print_scheduler.py:512 — keys on filament_overrides[].force_color_match === true) actually runs. The VP queue-write path (_add_to_print_queue) skipped all of that and built a bare PrintQueueItem with only printer_id, target_model, archive_id, plate_id, position, status, manual_start. Net effect: the scheduler reached the model-only-matching fallback and accepted the first available printer of the target model regardless of loaded colour, exactly as he described. Fix: the scheduler's existing _get_filament_requirements 3MF parser is extracted into a shared helper (backend/app/services/filament_requirements.py:extract_filament_requirements) so the VP path can reuse it at upload time. The VP's _add_to_print_queue now calls that helper after archiving and populates required_filament_types unconditionally (cheap; helps the scheduler reject obvious type mismatches even without force_color_match); and writes filament_overrides with force_color_match: true per consumed slot when a new per-VP setting queue_force_color_match is on. Default is off to preserve current behaviour for upgraders — a fresh-install user who wants the bug-free behaviour flips the toggle once on the VP card; an existing user gets exactly the model-only-matching they had before until they opt in. Auto-dispatch onto the wrong material happens loudly enough that anyone affected can find the toggle. Why default-off rather than default-on: existing automation that relies on "send to queue VP, get printed somewhere" without caring about colour shouldn't silently start blocking on colour matching after an upgrade. The toggle has clear UI copy (virtualPrinter.queueForceColorMatch) explaining the trade-off. Defence in depth: a malformed or unparseable 3MF (e.g. fake bytes from a misconfigured upload tool) leaves both fields None and the scheduler falls back to model-only matching, matching pre-fix behaviour for the unhappy path. The scheduler itself is unchanged — it already handled force_color_match correctly when the field was populated; the bug was purely the VP path not populating it. Schema: one nullable column virtual_printers.queue_force_color_match BOOLEAN DEFAULT 0/FALSE (Postgres-safe) added via the existing _safe_execute migration pattern. API: VirtualPrinterCreate and VirtualPrinterUpdate Pydantic schemas + _vp_to_dict response shape carry queue_force_color_match, the create + update routes wire it through to the model, and VirtualPrinterInstance constructor + multiVirtualPrinterApi TypeScript client mirror the field. UI: new toggle on VirtualPrinterCard rendered only when mode === 'print_queue' (parallels the existing auto_dispatch toggle's mode-gating), with pendingAction state for the in-flight indicator. 
i18n: new virtualPrinter.queueForceColorMatch.{title,description} keys in all 8 locales — English fully translated, German fully translated, the other 6 locales seeded with English copy pending native translation (matches the project's existing flow for newly-added user-facing features). 11 new tests: 8 in test_filament_requirements.py covering the extracted parser end-to-end (per-slot dicts, zero-use slots filtered, plate filtering, no-plate flat-walk fallback, unparseable / missing / config-less files, sorted output); 3 in test_virtual_printer.py::TestVirtualPrinterInstance covering the VP write path (setting-off → only required_filament_types populated; setting-on → filament_overrides populated with force_color_match: true per slot; unparseable 3MF → both fields None, no crash). Existing scheduler tests still pass against the refactored helper (verified end-to-end across the scheduler / virtual_printer / print_queue / filament test suites — 479 tests). Edward's "out of scope nice-to-have" suggestion of a "Requires Color Match" pill on queue cards is deferred to a follow-up so this PR stays scoped to his repro.
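A sketch of what the VP queue-write path now populates (the exact shape of the per-slot requirement dicts is an assumption; the scheduler side is unchanged):

```python
from typing import Optional

def build_queue_filament_fields(requirements: Optional[list[dict]],
                                queue_force_color_match: bool) -> tuple:
    """requirements: per-slot dicts from extract_filament_requirements(), or None
    when the 3MF could not be parsed (scheduler then falls back to model-only)."""
    if not requirements:
        return None, None
    required_types = sorted({slot["type"] for slot in requirements})
    overrides = None
    if queue_force_color_match:
        overrides = [
            {"slot": slot["slot"], "color": slot["color"], "force_color_match": True}
            for slot in requirements
        ]
    return required_types, overrides
```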
  • Slicing a library file via API key fails with "no Bambu Cloud session is stored" even when the key has cloud access (#1182 follow-up, reported by turulix) — Tim shipped the headless slicing pipeline #1182 was filed for, then hit a second wall: GET /api/v1/cloud/settings returned the cloud preset IDs correctly (the /cloud/* router-level gate from #1182 was doing its job), but POST /api/v1/library/files/{id}/slice with those IDs in the request body failed the slice job with error_status: 400, error_detail: "Cloud preset selected for printer, but no Bambu Cloud session is stored. Sign in to Bambu Cloud and retry." Cause: the /cloud/* fix routes the API key's owner User through cloud_caller (a router-level gate stashes the owner on request.state.api_key_owner, route-level deps pull it back out), but the slice route lives on /library/* — different router, no gate, so when the auth dep returned None for the API-keyed request the slice route passed current_user_id=None straight through to _run_slicer_with_fallback → _resolve_cloud(db, user=None) → get_stored_token(db, None), which falls back to the auth-disabled global Settings table. That table is empty in auth-enabled deployments, so cloud preset resolution failed even though the key's owner User had a perfectly valid cloud_token on their User row. Fix is a new route-level dep resolve_api_key_cloud_owner in cloud.py that's permissive (returns the owner User if the key has can_access_cloud=true, otherwise None — never raises) so it can be safely added to non-/cloud/* routes without breaking the local-presets path: a request with an API key that lacks the cloud scope still slices fine against local presets, and only fails with the existing "no Bambu Cloud session" error if it actually selects a cloud preset. Wired into POST /library/files/{id}/slice (Tim's blocker) and GET /slicer/presets (the SliceModal preset dropdown source — same root cause, would have hit anyone using the UI through an API-keyed reverse proxy). Both routes now resolve the cloud-token owner via current_user or api_key_cloud_owner instead of current_user.id if current_user else None. The auth gate's None-return for API keys is unchanged — keeping that fix scoped to the routes that actually need cloud-token resolution prevents accidental scope creep into other routes that fence on current_user is None. 4 new integration tests in test_api_key_cloud_access.py::TestSliceRouteCloudOwnerResolution pin the dep contract: returns the owner for a key with can_access_cloud=True and a valid owner; returns None for an owned key without the cloud scope (so cloud presets still 400 cleanly, local presets still slice); returns None for legacy ownerless keys; no-op for JWT and anonymous callers.
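A sketch of the permissive dependency (names follow the entry; the key-validation plumbing is omitted):

```python
def resolve_api_key_cloud_owner(api_key, owner):
    """Never raises: keys without the cloud scope keep slicing against local
    presets and only hit the existing 400 if they actually pick a cloud preset."""
    if api_key is None:
        return None                             # JWT / anonymous: current_user covers it
    if api_key.user_id is None or not api_key.can_access_cloud:
        return None
    return owner

def cloud_token_user(current_user, api_key_cloud_owner):
    # replaces the old: current_user.id if current_user else None
    return current_user or api_key_cloud_owner
```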
  • Project cover photo thumbnail too small to recognise the print (#1155 follow-up, reported by smandon) — The 40×40 thumbnail smandon's MakerWorld download workflow relied on for "is this the model I'm looking for?" wasn't readable at that size; he asked for either a larger thumbnail or a click-to-enlarge full preview. Enlarging the thumbnail itself would shift the card layout and cost the dense grid he chose to use for browsing many projects, so the fix keeps the 40×40 thumbnail and shows a portal-mounted 384×384 popover on hover. The popover renders the full image in object-contain so tall portrait MakerWorld photos aren't cropped to a square, has pointer-events-none so it can't intercept hover and create a flicker loop, and z-[100] so it stacks above every sibling card in the grid. Why a portal: ProjectCard carries overflow-hidden (for its rounded-corner clipping and the color accent bar), so an in-tree popover gets clipped by the card the moment it extends past the card's bounds — exactly the cut-off behaviour smandon reported on the second iteration. Rendering via createPortal(..., document.body) escapes every ancestor clipping context, and position: fixed with measurements from getBoundingClientRect() keeps the popover pinned next to the thumbnail regardless of where the card sits in the grid. Edge handling: if the thumbnail is near the viewport's right edge the popover flips to the LEFT side of the thumbnail; vertical position is clamped so the popover never overflows the window top or bottom. The thumbnail's own onClick is stopPropagation'd so hovering the popover area never accidentally triggers the parent card's "open project" navigation. 2 new tests in ProjectsPage.test.tsx pin the contract: hovering mounts the popover at document.body level (not nested in the card — a future refactor that drops the portal would re-introduce the clipping bug, and the test catches that); leaving unmounts it; the popover img points at the same cover-image URL as the small thumbnail with object-contain; cards without a cover_image_filename never mount the portal-rendering component (so a hover doesn't flash an empty preview).
  • Spool edit form lost the Extra Colours value on reopen, Dual Color rendered identically to Gradient, and the Sparkle / checkerboard visuals were too subtle (#1154 follow-up, reported by maugsburger) — Four issues against the multi-colour swatch work that landed for #1154. (1) Extra Colours input didn't hydrate on edit reopen: ColorSection's draft buffer was seeded once via useState(formData.extra_colors), but SpoolFormModal opens before its own useEffect populates formData from the spool record — so by the time the saved value landed, the input's local state had already been initialised to '' and never re-synced. The COLOR preview banner above the input rendered correctly (consumes formData directly), making it obvious the data WAS persisted; only the input was stuck blank, which the user then had to retype to save anything else. Fix: a ref-guarded useEffect resyncs extraColorsDraft when formData.extra_colors changes via an external update (e.g. modal opening with a spool); the ref is updated inside commitExtraColors so the user's own typing is round-tripped without the resync clobbering it. (2) Dual Color and Gradient produced the same diagonal blend: buildColorLayer in filamentSwatchHelpers.ts ran the same linear-gradient(135deg, ...) for both effect types, so a "Dual Color" spool was visually indistinguishable from a "Gradient" one. Real dual-colour spools have two distinct bars on the reel — that's the whole point of the variant. Fix: when effect_type is dual-color or tri-color, build the colour layer as linear-gradient(to right, c1 0% X%, c2 X% Y%, ...) with CSS double-position stops (so the colour change is a hard line rather than a blend region) and equal-width segments across the stops; gradient keeps the original 135° smooth blend. The existing multicolor conic-gradient path is untouched. (3) Sparkle effect was almost invisible on card-sized swatches: the original 4-dot pattern (each ~1px) read fine on the small inline swatch but disappeared on the 60-pixel-tall inventory card banners — exactly where the user actually identifies a spool. Bumped to 13 flecks in mixed sizes (1px / 1.5px / 2px) and varying opacity (0.65 → 1.0) to give a depth-of-field "metal flake" feeling, distinct from solid + multi-colour. (4) Checkerboard cell density scaled with the swatch: the previous helper put repeating-conic-gradient(...) in the background-image and the caller applied background-size: cover, so the same 4-cell pattern was either tiny squares on a small swatch or four huge squares on a card-sized banner. Made buildFilamentBackground() return { backgroundImage, backgroundSize } with per-layer sizes — painted layers stay cover, the checkerboard gets a fixed 12px tile so the cell density stays consistent regardless of element size and clearly reads as a transparency indicator rather than a multi-colour stripe. Updated the three existing call sites (InventoryPage group banner + spool card, ColorSection preview) to spread the returned style object directly. 8 new frontend tests cover the four fixes: hard-split contract for Dual/Tri Color (3 tests + 1 regression guard that Dual ≠ Gradient for the same stops); Sparkle prominence (≥ 10 distinct radial-gradient layers in the rendered background); checkerboard density (last backgroundSize layer is a fixed pixel value, not cover); 4 hydration tests pinning the input restore path (fills when formData arrives via parent update, resyncs when the spool changes mid-form, doesn't clobber live user typing, clears when the new spool has no extra_colors).
  • Pending review card and the resulting archive name disagreed; .gcode.3mf filename suffix wasn't fully stripped (#1152 follow-up, reported by smandon) — Two distinct holes in the original #1152 fix surfaced when smandon retested on the daily build. (1) Suffix stripping was incomplete: Bambu Studio's "Send to printer" dialog typically writes files like Plate_1.gcode.3mf (a sliced gcode payload wrapped in a 3MF container), but the archive's display stem was computed via Path(name).stem, which only drops the last suffix and left the user staring at Plate_1.gcode in the archive UI. (2) The review card and the archive disagreed on what the print was called: the pending-uploads panel always rendered the raw FTP filename, while the eventual PrintArchive.print_name resolved from the 3MF's embedded title (or, with the toggle on filename, the filename stem). Net effect: the user saw Plate_1.gcode in the review card and Some Creator's Title in the archive grid for the same item, with no toggle that flipped both views in lockstep. Fix has three pieces: a new resolve_display_stem() helper in archive.py that strips .gcode.3mf / .3mf / .gcode (case-insensitive) so both the archive and the review-side normalisation produce the same canonical stem; a new PendingUpload.metadata_print_name column populated at FTP-receive time by peeking at the 3MF's embedded title (so /pending-uploads/ list calls don't have to reopen every 3MF on every render); and a new PendingUploadResponse.display_name computed field that mirrors archive_print's exact precedence — filename toggle: stripped stem; metadata toggle (default): cached title or stripped stem. Frontend's PendingUploadsPanel reads upload.display_name (with upload.filename as a defensive fallback for any pre-migration row), and the raw filename is exposed as a tooltip so users can still inspect what actually arrived over FTP. Migration is one idempotent ALTER TABLE pending_uploads ADD COLUMN metadata_print_name VARCHAR(255) (Postgres/SQLite-safe); existing pending rows have NULL there and gracefully fall back to filename-stem behaviour. 14 unit tests pin the stripping rules (Plate_1.gcode.3mf → Plate_1, mixed case, dots in the middle, edge .3mf-only / .gcode-only, full-path inputs); 6 integration tests pin the response contract (default toggle uses metadata title when present, falls back to stripped stem when absent, filename toggle overrides metadata, filename toggle still strips the double suffix, GET /{id} exposes the same field, whitespace-only metadata behaves like absent); 3 frontend tests pin the review card's render path (resolved name shown, fallback to filename when display_name is empty, raw filename available via tooltip).
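A minimal sketch of the stripping rule (the helper name follows the entry):

```python
import re
from pathlib import Path

def resolve_display_stem(filename: str) -> str:
    """Strip .gcode.3mf / .3mf / .gcode, case-insensitively; nothing else is touched."""
    name = Path(filename).name                    # tolerate full-path inputs
    return re.sub(r"(?i)(\.gcode\.3mf|\.3mf|\.gcode)$", "", name)

assert resolve_display_stem("Plate_1.gcode.3mf") == "Plate_1"
assert resolve_display_stem("My.Model.3MF") == "My.Model"   # dots in the middle survive
```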
  • SpoolBuddy SSH update fails with "permission denied for user spoolbuddy" after Bambuddy keypair rotation (reported during user testing) — Bambuddy's data dir at <DATA_DIR>/spoolbuddy/ssh/ can get recreated outside the daemon's control (volume remount, container recreate, fresh deploy), at which point get_or_create_keypair() generates a new ed25519 keypair. The SpoolBuddy daemon previously only fetched and deployed Bambuddy's public key at registration time (/devices/register), so any rotation after a successful registration left the device's ~/.ssh/authorized_keys pointing at a defunct public half — every "Update" click from the Bambuddy UI then failed with Connection closed by authenticating user spoolbuddy [preauth] until the daemon was restarted manually. Worse, every prior successful registration appended a fresh entry to authorized_keys without ever pruning the old one, so a typical device accumulated 5+ stale Bambuddy-tagged keys (each one a permanent backdoor for whichever Bambuddy keypair held the matching private half at the time it was deployed). Two-pronged fix: (1) the heartbeat response (HeartbeatResponse, routes/spoolbuddy.py:282) now carries the current ssh_public_key alongside the existing pending_command / calibration fields, so the daemon's heartbeat picks up a key rotation within one cycle instead of needing a service restart; the same try/except Exception: pass pattern as the registration response keeps a missing/unreadable backend key from breaking telemetry. (2) _deploy_ssh_key() in daemon/main.py now syncs rather than appends — it strips every line tagged bambuddy-spoolbuddy, writes the current key once, and is a no-op when already in sync (so it doesn't churn the file every heartbeat). User-managed entries (any line not tagged bambuddy-spoolbuddy) are preserved untouched. 5 new unit tests in spoolbuddy/tests/test_deploy_ssh_key.py (creates-when-missing → mode-600 file with the current key; pile-up-of-stale-keys → only current key remains, no growth; preserves-unrelated-user-keys → user's own SSH access untouched; idempotent-when-in-sync → no mtime change so heartbeat doesn't churn the file; swallows-write-errors → readonly-fs PermissionError doesn't crash the heartbeat loop). 2 new backend integration tests in test_spoolbuddy.py::TestDeviceEndpoints::test_heartbeat_returns_ssh_public_key (response carries the key on every heartbeat) and test_heartbeat_ssh_key_failure_does_not_break_heartbeat (backend key-read failure leaves ssh_public_key: None but the heartbeat still 200s).
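A sketch of the sync-not-append behaviour, with the tag string and file handling shown as assumptions for illustration (the shipped logic is _deploy_ssh_key() in daemon/main.py):

```python
from pathlib import Path

TAG = "bambuddy-spoolbuddy"  # marker on Bambuddy-managed authorized_keys entries

def sync_authorized_keys(current_pubkey: str,
                         path: Path = Path.home() / ".ssh" / "authorized_keys") -> None:
    """Ensure exactly one Bambuddy-tagged key is present.

    User-managed lines (anything without the tag) are preserved; the write
    is skipped when the file is already in sync so the heartbeat does not
    churn the file's mtime.
    """
    existing = path.read_text().splitlines() if path.exists() else []
    kept = [line for line in existing if TAG not in line]
    desired = kept + [f"{current_pubkey.strip()} {TAG}"]
    if desired == existing:
        return  # already in sync: no-op
    try:
        path.parent.mkdir(mode=0o700, parents=True, exist_ok=True)
        path.write_text("\n".join(desired) + "\n")
        path.chmod(0o600)
    except OSError:
        pass  # e.g. read-only fs: a key-sync failure must never break the heartbeat
```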
  • External-camera frames returned as black on go2rtc and other MJPEG sources (#1177, reported by nkm8) — _capture_mjpeg_frame returned the very first JPEG it found in the stream's bytes (backend/app/services/external_camera.py:282), but many MJPEG sources — go2rtc most notably, and several IP cameras — emit a "warm-up" frame as soon as the connection is accepted: usually the last keyframe held in the encoder, which is often black or stale until the encoder catches up to live content. Subsequent frames on the same connection are fine. The reporter saw it across snapshot UX, finish photos in notifications, and timelapse — every code path that opens a fresh capture connection (snapshot endpoint, [PHOTO-BG] finish photo, plate-detection CV, Obico ML inference, layer timelapse, Settings → Test). His own observation that go2rtc's /api/frame.jpeg (single-frame, internally already warmed) is never black while the first frame off /api/stream.mjpeg is, matched the hypothesis exactly. Support-bundle evidence was clear-cut: every black notification frame in his log was 11095 bytes (a pure-black 1280×720 JPEG encodes to ~10–15 KB on standard libjpeg quality settings), while every captured-after-warm-up frame from the same source was 30–45 KB. Fix (a sketch follows this item): read past the first frame and return the second; if the connection closes / times out / hits the 5 MB buffer cap before a second frame ever arrives, fall back to the first so callers still get something (degrading slow / single-frame streams to None would regress every code path that relied on pre-fix behaviour). The inner loop now drains every complete frame already in the buffer before pulling the next chunk so high-FPS sources that pack multiple frames per chunk are handled correctly. The snapshot / rtsp / usb capture paths and the live-view streaming endpoint (generate_mjpeg_stream) are untouched. 7 new regression tests in test_external_camera.py::TestCaptureMjpegFrameWarmupSkip cover (a) two-frames-in-two-chunks → second returned, (b) two-frames-in-one-chunk → second returned, (c) frame split across chunk boundary → assembled correctly, (d) single-frame stream → first returned via fallback (no None regression), (e) timeout after first frame → first returned via fallback, (f) zero-frame stream → None, (g) non-200 status → None. Latency penalty: at most one frame interval (typically 50 ms – 1 s on a steady stream). Follow-up: optional snapshot URL override — nkm8 retested on the daily build and saw the warm-up skip help most of the time but the black-frame symptom still surfaced intermittently on his go2rtc setup, with the same workflow break (notification thumbnails black, snapshot UX black). His own bisect already pointed at the cleanest fix: go2rtc exposes /api/frame.jpeg as a dedicated single-frame endpoint that never returns the encoder's warm-up keyframe, while /api/stream.mjpeg always does on a fresh connection. New optional external_camera_snapshot_url column on printers (idempotent ALTER TABLE migration via _safe_execute, plumbed through PrinterBase / PrinterUpdate / PrinterResponse / from_orm_with_roi / TypeScript Printer + PrinterCreate); when set, every single-frame capture path (/api/v1/printers/{id}/camera/snapshot, [SNAPSHOT] notification thumbnails, [PHOTO-BG] finish photo, layer timelapse on every captured layer, Obico ML snapshot, plate-detect / calibrate-plate CV) routes through _capture_snapshot() on the override URL via plain HTTP GET, bypassing the warm-up-frame dance entirely. 
The override is camera-type-agnostic — set it once on the printer config and it applies regardless of whether the live stream is mjpeg / rtsp / usb. Live-view (the /camera/stream and /camera endpoints powering the in-app viewer) deliberately stays on the configured stream URL — the override only changes single-frame captures, since a 1 fps poll-the-snapshot-endpoint live view would be a regression for everyone who doesn't have this problem. Settings UI (Settings → General → External Cameras) renders a new "Snapshot URL (optional)" input with its own Test button below the live-stream URL row; the input is hidden when camera_type === 'snapshot' since the live URL is already a single-frame endpoint and the override would be redundant. SSRF guard on the override is the existing _sanitize_camera_url("http", "https") allowlist — link-local / metadata / blocked hosts return None instead of being fetched. Empty-string override is treated as unset (defence in depth — a stale config row that somehow has "" rather than NULL still routes through the live stream rather than firing GET against an empty URL). 5 new backend tests in test_external_camera.py::TestSnapshotUrlOverride (override routes to snapshot path; no override → camera-type handler; empty string → camera-type handler; SSRF guard on metadata-target override returns None; override is camera-type-agnostic across rtsp/usb). 3 new frontend tests in SettingsPage.test.tsx (input renders for mjpeg/rtsp/usb camera types; hidden for snapshot type; debounced PATCH carries external_camera_snapshot_url when the user types). i18n: settings.cameraSnapshotUrl{,Placeholder,Help} in en + de fully translated, the other 6 locales (fr/it/ja/pt-BR/zh-CN/zh-TW) seeded with English copies pending native translation. Documented under bambuddy-wiki/docs/features/camera.md with the go2rtc example URL as a tip block.
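A condensed sketch of the warm-up-skip loop, assuming an injected read_chunk() coroutine for the HTTP stream; the shipped implementation is _capture_mjpeg_frame in external_camera.py and differs in plumbing:

```python
SOI, EOI = b"\xff\xd8", b"\xff\xd9"   # JPEG start / end markers
MAX_BUFFER = 5 * 1024 * 1024          # give up past 5 MB without a second frame

async def capture_warm_frame(read_chunk) -> bytes | None:
    """Return the second complete JPEG from an MJPEG byte stream,
    falling back to the first when the stream yields only one frame."""
    buffer, frames = b"", []
    while len(buffer) < MAX_BUFFER and len(frames) < 2:
        chunk = await read_chunk()
        if not chunk:                     # stream closed / timed out
            break
        buffer += chunk
        # Drain every complete frame already in the buffer, so high-FPS
        # sources that pack several frames into one chunk are handled.
        while True:
            start = buffer.find(SOI)
            end = buffer.find(EOI, start + 2) if start != -1 else -1
            if start == -1 or end == -1:
                break
            frames.append(buffer[start:end + 2])
            buffer = buffer[end + 2:]
    if len(frames) >= 2:
        return frames[1]                  # skip the encoder's warm-up frame
    return frames[0] if frames else None  # single-frame fallback, or None
```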
  • MakerWorld sidebar entry visible to every user regardless of group permissions (#1175) — Backend already enforced makerworld:view on every /makerworld/* route (backend/app/api/routes/makerworld.py:145, 157, 242, 406), the permission was correctly granted to the admin and standard-user role defaults (permissions.py:298, 364, 454), and the frontend Permission type union already included 'makerworld:view' | 'makerworld:import' (client.ts:2498) — but the sidebar's hand-maintained navPermissions map in Layout.tsx:278 had no entry for makerworld, so isHidden('makerworld') always returned false and the entry rendered for every authenticated user. Users without the permission saw the entry, clicked, and the page rendered while every API call inside it 403'd. Two-line fix: (1) Layout.tsx:278 — add makerworld: 'makerworld:view' to the map, matching every other sidebar entry's gating shape; (2) App.tsx:200 — wrap the route in <PermissionRoute permission="makerworld:view"> for defence in depth, so a user who knows the URL can no longer reach the page directly (matches the existing pattern on settings, groups/new, groups/:id/edit two lines below). 2 new Layout tests pin the contract: with auth enabled and a user lacking makerworld:view, the sidebar <a href="/makerworld"> link is absent (other links like /files still render); with the permission granted, the link renders.
  • Printer Info modal: serial-number and IP-address copy buttons silently did nothing on plain-HTTP LAN deployments (#1174, reported by BurntOutHylian) — PrinterInfoModal's CopyButton only tried navigator.clipboard.writeText(), which is gated by the secure-context requirement (HTTPS or localhost). On the typical Bambuddy deployment shape — bare-IP HTTP on the LAN — navigator.clipboard is undefined; the existing try/catch swallowed the resulting TypeError, the icon never flipped to the tick, and nothing landed on the user's clipboard. Fixed by adding the same off-screen-textarea + document.execCommand('copy') fallback that CameraTokensPage's plaintext-token modal already uses for plain-HTTP LAN deployments: gate on navigator.clipboard && window.isSecureContext, fall back to the legacy path otherwise, and surface the success-tick only when the copy actually landed (return early without flipping copied if execCommand('copy') returns false). The try/finally around the textarea guarantees DOM cleanup even when the browser throws on a restricted context. 3 new component tests in PrinterInfoModal.test.tsx cover (a) secure-context happy path uses navigator.clipboard.writeText, (b) plain-HTTP fallback path actually invokes execCommand('copy') and leaves no leaked textarea in the DOM, (c) finally cleanup removes the textarea even when execCommand throws synthetically. Thanks to BurntOutHylian for the precise file/line pointer in the report.
  • Queue auto-dispatched the next print onto a fouled bed after an aborted or cancelled print (#1171, reported by tom5677) — When a print ended with status aborted (printer self-abort, or a user stopping the print on the printer's own touchscreen) or cancelled (user stopping the print via the Bambuddy queue UI), the plate-clear gate added in #961 was not raised — only completed and failed triggered it (backend/app/main.py:2660). Result: the queue scheduler dispatched the next pending item ~2 seconds after the abort, with the previous print's material still on the bed. The reporter saw two prints (P1P + P1S) auto-start onto fouled beds within seconds of each other after touchscreen-aborts, and explicitly flagged the risk of damage to the printer; a third printer (his second P1S) behaved correctly because its previous print had ended completed. The original code's comment ("user-cancelled prints don't require a plate-clear ack — nothing printed on the bed") only holds if you cancel right at layer 1; cancelling a 12-hour print at hour 11 leaves a fouled bed too. Fix: the gate is now raised for every terminal status — completed, failed, aborted, cancelled — matching the safety contract that the user must acknowledge the bed is clear before any next queued print starts. The gate is user-clearable on the Printers page, so the worst case for a layer-1 cancel is a single "Clear Plate" click. Touchscreen-aborts are particularly important to gate because Bambuddy's "user stopped via UI" override (_user_stopped_printers: aborted mapped to cancelled) only fires when the user stops via the Bambuddy queue; a touchscreen-stop reports aborted straight through. Regression coverage in test_print_lifecycle.py::TestPlateClearGate: parametrised across all four terminal statuses (asserts set_awaiting_plate_clear(printer_id, True) is called for each), plus a defence-in-depth test that an unrecognised future status string never silently raises the gate.
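A minimal sketch of the widened gate, with set_awaiting_plate_clear passed in only so the sketch is self-contained (the call shape matches what the tests assert against):

```python
# Every terminal status now raises the plate-clear gate, not just
# completed / failed as before the fix.
TERMINAL_STATUSES = {"completed", "failed", "aborted", "cancelled"}

def maybe_raise_plate_clear_gate(printer_id: int, status: str,
                                 set_awaiting_plate_clear) -> None:
    """Require a 'Clear Plate' acknowledgement before the queue dispatches
    the next item, no matter how the previous print ended."""
    if status in TERMINAL_STATUSES:
        set_awaiting_plate_clear(printer_id, True)
    # Unrecognised or future status strings never silently raise the gate.
```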
  • Printer card always shows the first plate's thumbnail when printing a multi-plate 3MF (#1166, reported by smandon) — On printers running firmware that drops the plate path from print.gcode_file (the reporter's case: P1S 01.10.00.00, but the same shape appears on other firmware revisions), the printer reports gcode_file: MyModel.3mf instead of gcode_file: /Metadata/plate_4.gcode. The /printers/{id}/cover route's regex (plate_(\d+)\.gcode) found nothing in the bare .3mf filename, defaulted to plate 1, and the printer card showed Metadata/plate_1.png from the 3MF — even though the user dispatched plate 4. Same problem hit current_plate_id on the status response (printer card detail row showed plate 1). Two-pronged fix on a precedence ladder: (1) Bambuddy now records the plate it dispatched: start_print() writes (dispatched_plate_id, dispatched_subtask) onto PrinterState at publish time, and a new resolve_plate_id(state) helper prefers that record over the gcode_file regex when dispatched_subtask == state.subtask_name (the subtask check rejects stale entries from a prior Bambuddy-dispatched print bleeding into a Studio-direct dispatch). (2) After the 3MF lands on disk, the cover route scans the zip for a unique Metadata/plate_*.gcode entry: per-plate archives (sliced separately in Bambu Studio) bundle thumbnails for every plate but only the active plate's gcode, so a single match unambiguously identifies the plate even when no Bambuddy dispatch exists (Studio-direct flow). Final fallback is plate 1, unchanged. The cover-byte cache key was also simplified — plate_num was removed from the key now that resolution is late-bound; clear_cover_cache() already runs on every print start, so different plates of the same project always re-fetch a fresh thumbnail. Coverage: 5 unit tests in test_printer_manager.py::TestResolvePlateId (dispatch precedence, stale-subtask guard, gcode regex fallback, default-1 path, missing-subtask guard), 4 unit tests in test_bambu_mqtt.py::TestStartPrintRecordsDispatchedPlate (dispatch record set/cleared/overwritten/skipped on disconnect), 2 integration tests in test_printers_api.py (dispatch wins over plate-1 default; 3MF-scan fallback for per-plate archive without dispatch). Studio-direct multi-plate prints (no dispatch record AND multiple plate gcodes in the 3MF) still default to plate 1 — matches the firmware's own ambiguity, not regressed by this change.
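A sketch of the precedence ladder, with the zip-scan and regex details shown as assumptions rather than a copy of the shipped resolve_plate_id():

```python
import re
import zipfile

PLATE_RE = re.compile(r"plate_(\d+)\.gcode")

def resolve_plate_id(state, threemf_path=None) -> int:
    """Resolve the active plate: dispatch record first, then a unique
    Metadata/plate_*.gcode entry in the 3MF, then the gcode_file regex,
    then plate 1."""
    # 1. Prefer the plate Bambuddy itself dispatched, but only while the
    #    recorded subtask still matches (rejects stale records bleeding
    #    into a later Studio-direct dispatch).
    if getattr(state, "dispatched_plate_id", None) is not None \
            and state.dispatched_subtask == state.subtask_name:
        return state.dispatched_plate_id
    # 2. Per-plate archives carry exactly one plate gcode: a unique match
    #    identifies the plate even without a Bambuddy dispatch.
    if threemf_path:
        with zipfile.ZipFile(threemf_path) as zf:
            hits = [m for n in zf.namelist()
                    if n.endswith(".gcode") and (m := PLATE_RE.search(n))]
        if len(hits) == 1:
            return int(hits[0].group(1))
    # 3. Firmware that keeps the plate path in gcode_file still works.
    if (m := PLATE_RE.search(state.gcode_file or "")):
        return int(m.group(1))
    return 1  # final fallback, unchanged
```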
  • AMS slot configuration intermittently fails to reach the printer after several configs in a row (#1164, reported by RosdasHH) — Configuring AMS slots a handful of times (the reporter saw it almost every 6th change) would silently stop reaching the printer; ~1 minute later the filament colours on the printer would briefly jump between slots, then settle. Root cause was the zombie-session watchdog at bambu_mqtt.py:861 introduced for #887. When an ams_filament_setting response took >10 s (normal under load — concurrent K-profile fetches, busy printer, network jitter) the watchdog incremented an _ams_cmd_unanswered counter and zeroed _last_ams_cmd_time so it wouldn't re-trigger on the next status push. The bug: the response handler that reset the counter was guarded by and self._last_ams_cmd_time > 0 — so when the late response did arrive (after the watchdog had already zeroed the timer), the counter stayed armed at 1. The next slow response on any ams_filament_setting command — possibly minutes or hours later, on an entirely unrelated config attempt — would take the counter to 2 and trigger force_reconnect_stale_session(). The user-visible symptoms match exactly: configs stop landing (because MQTT reconnects mid-publish, dropping the in-flight command and surfacing as Cannot set AMS filament setting: not connected if the user retries during the ~1 min reconnect window), then the queued state finally lands when the reconnect completes (the "filament colours jumping around" the reporter described). Fix is to drop the _last_ams_cmd_time > 0 guard: any ams_filament_setting response — late or not — proves the channel is alive, so the counter must reset. Watchdog still trips on a real zombie session (no responses at all for two consecutive >10 s windows). Regression test in test_bambu_mqtt.py::TestZombieSessionDetection::test_late_response_after_watchdog_clears_counter_issue_1164 simulates the exact sequence (watchdog fires → late response arrives → second slow response on a fresh command) and asserts the counter resets to 0 on the late response and the second command doesn't tip the threshold to 2. Other 10 zombie-detection tests still pass unchanged. Follow-up: cumulative session wedge after ~16-20 commands — the watchdog fix above heals real zombie sessions, but RosdasHH continued to see the wedge fire on healthy sessions after enough cumulative commands (configs + spool assignments share the same threshold: "8 + 3", "12 + 1", "16 + 0" all tripped it). His QoS=1 vs QoS=0 vs QoS=2 bisect was the breakthrough — the wedge only happens at QoS=1. paho-mqtt's default max_inflight_messages is 20, and Bambu's broker has racy PUBACK matching that leaves some inflight slots unreleased per session, so after ~16-20 cumulative commands the queue silently fills and publish() returns success while packets sit in paho's internal queue (force_reconnect heals it because the inflight queue is per-session — the printer had already processed every command, it just couldn't receive any new ones until the session reset). Lifted the ceiling to 1000 via client.max_inflight_messages_set(1000) immediately after mqtt.Client() construction (bambu_mqtt.py:3074-3079). Keeps QoS=1 untouched (the cross-model reliability we deliberately chose for AMS configuration — A1, P1S, X1C, H2D, P2S, X2D all need it) and removes the ceiling as the bottleneck without changing wire-protocol behaviour. The watchdog reconnect from the original fix above stays as defence-in-depth for sessions that go truly zombie. 
Diagnosis credit: RosdasHH's careful bisect.
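A condensed sketch of both changes, assuming the attribute names quoted above; not a copy of bambu_mqtt.py:

```python
import paho.mqtt.client as mqtt

def tune_inflight(client: mqtt.Client) -> None:
    """Called right after the client is constructed."""
    # Lift paho's default inflight ceiling (20): with racy PUBACK matching
    # on the broker side, QoS=1 publishes otherwise pile up in paho's
    # client-side queue after ~16-20 cumulative commands while publish()
    # still reports success.
    client.max_inflight_messages_set(1000)

class AmsWatchdogState:
    """Illustrative stand-in for the unanswered-command bookkeeping."""

    def __init__(self) -> None:
        self._ams_cmd_unanswered = 0
        self._last_ams_cmd_time = 0.0

    def on_ams_filament_setting_response(self) -> None:
        # Any response, even one arriving after the watchdog already zeroed
        # _last_ams_cmd_time, proves the session is alive, so the counter
        # resets unconditionally (the old `and self._last_ams_cmd_time > 0`
        # guard is gone).
        self._ams_cmd_unanswered = 0
        self._last_ams_cmd_time = 0.0
```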
