> **Note:** This is a daily beta build (2026-05-04). It contains the latest fixes and improvements but may have undiscovered issues.
Docker users: update by pulling the new image:

```shell
docker pull ghcr.io/maziggy/bambuddy:daily
```

or

```shell
docker pull maziggy/bambuddy:daily
```
**Tip:** Use [Watchtower](https://containrrr.dev/watchtower/) to automatically update when new daily builds are pushed.
### Changed
- **Virtual Printer Tailscale toggle no longer provisions Let's Encrypt certs; it is now informational.** The original promise of the `tailscale_disabled` toggle was that flipping it on would obtain an LE cert via `tailscale cert`, so users wouldn't need to import Bambuddy's CA into the slicer. End-to-end testing exposed that this was always going to fail: BambuStudio and OrcaSlicer both refuse hostname input in the Add Printer dialog (IP-only), and, more fundamentally, their printer-MQTT trust path validates only against the bundled BBL CA store (`printer.cer`), not the system trust store. Confirmed against ClusterM/open-bambu-networking's clean-room reimplementation: `mosquitto_tls_set(BBL_CA)` + `mosquitto_tls_opts_set(verify_peer=1)` + `mosquitto_tls_insecure_set(true)`, i.e. chain validation against the BBL CA only, with the hostname check intentionally skipped (Bambu's printer cert CN is the device serial, not an IP/hostname). LE-issued certs don't chain to the BBL CA, so the slicer rejects them with the well-known "-1" before any hostname/IP logic runs. The cert-import step is unavoidable; the LE provisioning was dead code for slicer connections.

  What stays: the toggle, the `/virtual-printers/tailscale-status` route, the docker socket mount, and the host-level Tailscale information surfaced on the VP card (IP + MagicDNS hostname + copy button), so users know what to paste into the slicer when they pick the Tailscale interface from the `bind_ip` dropdown. Tailscale's role is now strictly network reach: a private WireGuard tunnel to the VP from any tailnet device, no port forwarding, with exactly the same trust burden as LAN.

  What goes: `provision_cert` / `ensure_cert` / `cert_needs_renewal` and the daily renewal task / restart-on-renewal plumbing on the manager (`_cert_renewal_task`, `_cert_restart_task`, `_cert_renewal_loop`, `_restart_for_cert_renewal`, `_cancel_renewal_task`, `_cancel_restart_task`); the `tailscale_fqdn` field surfaced via VP status (a cert side-effect); the `tailscale_not_available` 409 guard on toggle-enable in both `routes/virtual_printers.py` and `routes/settings.py` (the toggle is informational, so daemon presence doesn't block flipping it); `CertificateService.{ts_cert_path, ts_key_path, use_tailscale_cert}`; and the LE cert files on disk (`virtual_printer_ts.{crt,key}` are left in place per-VP as harmless residue and can be deleted manually). The `tailscale_disabled` DB column is kept as the persisted toggle state. The Tailscale FQDN/IP on the VP card is now sourced from the existing `/tailscale/status` endpoint (host-level) rather than from a per-VP cert-provisioning side-effect; the data is the same regardless of which VP you're looking at, since each host has one Tailscale identity. Wiki, README, and i18n copy were updated across all 8 locales to drop the "no cert import needed" framing; the toggle's helper text now says it surfaces the Tailscale address and that CA import is unchanged.

  Tests: `test_tailscale.py` reduced to the surviving `get_status` cases (binary missing, command fails, success, empty DNSName, malformed JSON); `test_virtual_printer.py::test_sync_from_db_restarts_on_tailscale_disabled_change` rewritten as `test_sync_from_db_does_not_restart_on_tailscale_toggle` (the toggle is informational, so `remove_instance` must NOT be called when only `tailscale_disabled` changes); `test_virtual_printer_api.py::TestVirtualPrinterTailscaleGuardAPI` collapsed to a single `TestVirtualPrinterTailscaleToggleAPI::test_toggle_does_not_consult_tailscale_daemon` that asserts both directions succeed and `get_status` is never called. The frontend `VirtualPrinterCard.test.tsx` mock now stubs `getTailscaleStatus`, and the FQDN-copy block drives the FQDN through that query rather than VP status.
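The surviving `get_status` behaviour (parse `tailscale status --json`, tolerate a missing binary, a failed command, malformed JSON, and an empty DNSName) can be sketched roughly like this. This is an illustrative sketch, not the actual Bambuddy service; the function names and return shape are assumptions:

```python
import json
import subprocess

_UNAVAILABLE = {"available": False, "ip": None, "fqdn": None}


def parse_status(raw: str) -> dict:
    """Parse `tailscale status --json` output into {available, ip, fqdn}.

    Malformed JSON and an empty DNSName (MagicDNS disabled) degrade
    gracefully instead of raising.
    """
    try:
        status = json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return dict(_UNAVAILABLE)
    if not isinstance(status, dict):
        return dict(_UNAVAILABLE)
    self_node = status.get("Self") or {}
    ips = self_node.get("TailscaleIPs") or []
    # MagicDNS name; the CLI reports an empty string when MagicDNS is off.
    fqdn = (self_node.get("DNSName") or "").rstrip(".") or None
    return {"available": True, "ip": ips[0] if ips else None, "fqdn": fqdn}


def get_status() -> dict:
    """Run the tailscale CLI; a missing binary or non-zero exit means 'not available'."""
    try:
        out = subprocess.run(["tailscale", "status", "--json"],
                             capture_output=True, text=True, timeout=5)
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return dict(_UNAVAILABLE)
    if out.returncode != 0:
        return dict(_UNAVAILABLE)
    return parse_status(out.stdout)
```

The frontend then only needs the `ip` and `fqdn` strings to render the copy button on the VP card.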
### Added
- **Virtual Printer non-proxy modes now mirror the live target printer to the slicer (#1193 follow-up).** Until now, Immediate / Review / Print Queue VPs looked like a stub Bambu Lab printer to the slicer: AMS dropdowns were empty, no live state, no camera, no per-filament k-profile lookup. The user could send a sliced file and that was it. With this change, the VP fans out the target printer's live MQTT state to the slicer (AMS units, FTS / dual-extruder routing, nozzle, temps, k-profiles, AMS load / dry / calibration commands) and proxies the camera RTSPS stream on port 322, so the slicer treats the VP as a fully functional Bambu printer while Bambuddy's queue / archive / dispatch features stay in the loop.

  Architecture (cached-as-base, single source of truth): the bridge caches the latest real `push_status` and `info.get_version` response from Bambuddy's existing per-printer MQTT subscription (no second session on the printer, so the firmware in-flight budget is unaffected, see #1164). The VP's `_send_status_report` returns a near-byte-identical copy of the real push with only the upload-state-machine fields (`sequence_id`, `command`, `msg`, `gcode_state`, `gcode_file`, `prepare_percent`, `subtask_name`) overridden under our control, so BambuStudio's Send pre-flight sees exactly the same shape as a direct-to-printer connection. Command responses (`extrusion_cali_get`, AMS write acks, xcam responses) are fanned out raw, since they carry sequence_ids the slicer is waiting on. Slicer-issued commands forward to the real printer except `print.project_file` / `gcode_file`, which are still answered locally because the file lives on Bambuddy.

  Field-shape gotchas worth remembering: (1) Real Bambu printers wire-format push_status JSON with `indent=4` (32 254 bytes for an idle H2D push, vs 14 268 bytes compact); BambuStudio's Send pre-flight rejects compact JSON silently, so `_publish_to_report` was switched to `json.dumps(payload, indent=4)`. (2) `net.info[*].ip` (a little-endian uint32, e.g. 192.168.255.133 → 2248124608) is the FTP destination IP BambuStudio uses for "Send to Printer storage"; it overrides anything else, including the URL hosts the rest of MQTT advertises. The bridge rewrites this to the VP's bind IP on cache, otherwise the slicer FTPs straight to the real printer and bypasses Bambuddy entirely (symptom: "Failed to send" with zero inbound FTP connections on the VP; debug by tcpdump if anyone hits it again). (3) `upgrade_state.sn` and any other nested-dict `sn` matching the target serial are rewritten to the VP serial; AMS-hardware serials (`n3f/0.sn` etc.) are left alone, since those identify physical AMS units, not the device. (4) `ipcam.rtsp_url` is left unchanged: BambuStudio overrides the URL host with the device IP it bound on (the VP), so the slicer hits the VP's :322 RTSPS port rather than the printer's directly. (5) For the slicer's RTSPS to reach the printer, the VP gets a raw `TCPProxy` on `<bind_ip>:322 → <printer_ip>:322` (the same approach proxy mode uses; `cap_net_bind_service` was already in the systemd unit for FTP :990). (6) `extrusion_cali_get` is forwarded, since answering it locally hides the user's stored k-profiles.

  Setup nuance for the camera: because the slicer authenticates against the printer's RTSPS with whatever access code is in its profile, the VP's access code must match the target printer's access code for the camera path to authenticate. This is a one-time configuration step (Settings → Virtual Printer → set access code = target printer's LAN code, then re-add the VP in Bambu Studio / Orca Slicer). MQTT and FTP work either way; only the camera needs the match, because RTSPS auth happens between the slicer and the real printer's broker. Tested e2e with both BambuStudio and OrcaSlicer against an H2D (dual-nozzle, AMS 2 Pro + AMS HT) and an X1C (single-nozzle, AMS) across all three non-proxy modes (Immediate / Review / Print Queue): sync, send, k-profile lookup, AMS configuration from the slicer, and live camera all work.

  Files: new `backend/app/services/virtual_printer/mqtt_bridge.py` (caches push_status / get_version, forwards slicer commands, fans out command responses, rewrites identity fields including the `net.info[*].ip` LE uint32); `bambu_mqtt.py` gains `register_raw_message_handler` / `unregister_raw_message_handler` / `publish_raw` so the bridge can subscribe to Bambuddy's existing per-printer paho subscription without opening a second session; `mqtt_server.py` switches `_send_status_report` and `_send_version_response` to cached-as-base when the bridge has data, falling back to the original synthetic stubs otherwise; `manager.py` wires the bridge plus a raw `TCPProxy` for RTSPS into `start_server` for non-proxy modes whenever a target printer is configured.
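The `net.info[*].ip` rewrite reduces to a pair of conversions between a dotted-quad string and the little-endian uint32 Bambu puts on the wire. A minimal sketch using only the standard library (helper names are illustrative, not the bridge's actual API):

```python
import socket
import struct


def ip_to_le_uint32(ip: str) -> int:
    """Dotted-quad → the little-endian uint32 used in net.info[*].ip."""
    # inet_aton yields the 4 address bytes in network (big-endian) order;
    # re-reading them as a little-endian uint32 gives the wire value.
    return struct.unpack("<I", socket.inet_aton(ip))[0]


def le_uint32_to_ip(value: int) -> str:
    """Inverse conversion, handy for decoding captured pushes."""
    return socket.inet_ntoa(struct.pack("<I", value))


# The example from the changelog entry above:
assert ip_to_le_uint32("192.168.255.133") == 2248124608
assert le_uint32_to_ip(2248124608) == "192.168.255.133"
```

Rewriting the field is then just `ip_to_le_uint32(vp_bind_ip)` applied to every `net.info` entry before the cached push is replayed.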
  25 new tests in `test_vp_mqtt_bridge.py` pin the contract: lifecycle, push_status caching, serial / IP rewriting, the get_version-modules cache, selective fan-out (only command responses, never push_status itself), wire format must use `indent=4`, routing of slicer-issued commands (`project_file` / `gcode_file` local; everything else forwarded), and the IP-encoding helper against captures from real H2D pushes. Proxy mode is untouched: `SlicerProxyManager` still owns its own MQTT/FTP/RTSP/Bind/Aux proxies in proxy mode and never instantiates `SimpleMQTTServer` or `MQTTBridge`.
- **AMS slot Load / Unload from the printer card (#891, reported by NNeerr00, +1 from cadtoolbox).** The MQTT primitives for "load filament from a tray" and "unload the currently loaded tray" already existed in
`bambu_mqtt.py` (reverse-engineered from BambuStudio captures, including the H2D dual-extruder right-external case captured fresh during this work) but were unused: there was no HTTP route and no UI. Net effect: every Load / Unload had to happen on the printer touchscreen, and external-spool users on a dual-nozzle H2D had no way to drive Ext-R from the desktop at all.

  Backend: new `POST /printers/{id}/ams/load?tray_id={int}` and `POST /printers/{id}/ams/unload`, both gated on `Permission.PRINTERS_CONTROL`. The load route validates `tray_id ∈ {0..15, 254, 255}` (AMS slots, single-external/Ext-L, and Ext-R respectively) and returns a human-readable target in the success message ("AMS 0 slot 1", "external spool", "Ext-R") so the UI toast tells the user which spool the printer is now feeding from. MQTT primitive update: `ams_load_filament` gains a third encoding branch for `tray_id=255` matching the BambuStudio capture verbatim: `ams_id=255`, `slot_id=0` (the right-extruder index, not a slot index; Bambu's load command on dual-extruder externals encodes the destination extruder, not the source slot), `target=255`, and `curr_temp = tar_temp =` the right-nozzle temp (read from `state.temperatures["nozzle_2"]`, falling back to 215 °C if the right nozzle is cold or unknown; the printer rejects nonsensical temps, so a warm fallback is safer than `-1`). The existing `tray_id=254` branch is preserved verbatim (`slot_id=254`, curr/tar `-1`) since that came from a single-extruder capture and is known to work; no risk of regression on existing single-external setups.

  UI: the existing AMS slot popover (the one with "Re-read RFID") gains two new entries: "Load" (posts `tray_id = ams.id * 4 + slotIdx`) and "Unload" (no params, global on the currently loaded slot). The external spool slot, which had no popover at all before, gets one with the same Load + Unload entries, and on a dual-nozzle H2D each external slot (Ext-L `tray_id=254`, Ext-R `tray_id=255`) drives its own extruder. The menu is hidden while `state === 'RUNNING'` (paralleling the existing RFID re-read gating). i18n: `printers.ams.load` and `printers.ams.unload`, plus four new toast strings (`loadInitiated`, `unloadInitiated`, `failedToLoad`, `failedToUnload`) added to all 8 locales; English and German are fully translated, the other 6 locales are seeded with English copy pending native translation (matching the project's existing flow for newly added user-facing features).

  16 new backend tests and 4 frontend tests pin the contract: 5 unit tests in `test_bambu_mqtt.py::TestAmsLoadFilamentEncoding` (AMS slot encoding, Ext-L preserves the legacy capture, Ext-R uses the new captured shape with the actual right-nozzle temp, Ext-R falls back to 215 °C when cold, disconnected client doesn't publish); 11 integration tests in `test_printers_api.py::TestAMSLoadUnloadAPI` (load: invalid tray_id 400, not-found 404, not-connected 400, AMS slot success with the derived `ams_id*4+slot` math, Ext-L success, Ext-R success, MQTT failure 500; unload: not-found, not-connected, success, MQTT failure 500); 4 frontend tests in `PrintersPageAmsLoadUnload.test.tsx` (Load posts the right tray_id, Unload posts with no params, menu hidden while RUNNING, external spool's `tray_id=254` round-trips through the route).
- **API keys can read Bambu Cloud presets on the owner's behalf (#1182, reported by turulix).** Tim is building a fully automated headless slicing pipeline against Bambuddy's API and hit the wall flagged in the previous round of cloud-auth work (#665):
`/cloud/*` routes resolve `cloud_token` per-user from `User.cloud_token`, but the auth gate (`require_permission_if_auth_enabled`, `auth.py:856`) returned `None` for API-keyed requests, so the route fell back to the global `Settings`-table token, which only carries a value in auth-disabled deployments. Net effect on auth-enabled deployments: API keys reached the gate just fine, then `/cloud/filaments` always saw `user=None`, called `get_stored_token(db, None)` against an empty Settings table, and returned 401 / empty results; there was no path to read the slicer presets, filament catalogue, or device list that a CLI workflow needs. The data model treated API keys as standalone tokens with no owner (`APIKey` had `id`, `name`, `key_hash`, scope flags, and `printer_ids`, but no `user_id`), so even if the gate had wanted to delegate the cloud lookup, there was no User to delegate to.

  The fix: make API keys carry an owner, route `/cloud/*` lookups through that owner, and gate the new capability behind an explicit opt-in scope so existing automation doesn't gain cloud-read access on upgrade. Concretely: (1) `APIKey` gains `user_id` (FK to `users.id`, ON DELETE CASCADE; Postgres enforces it, while SQLite gets an explicit `DELETE FROM api_keys WHERE user_id = ?` in the user-delete route since SQLite ships with FK enforcement off, following the project's existing pattern at `users.py:397-406` for `created_by_id` cleanup) and `can_access_cloud` (BOOLEAN DEFAULT 0, opt-in, never set on legacy rows). (2) The auth gate now returns the owner User when it validates an API key with `user_id` set, so `/cloud/*` routes naturally resolve `user.cloud_token` the same way they do for JWT-authed sessions. Permission semantics are preserved: API keys still bypass the per-route permission check (their scopes live on the row itself); the User return exists only so cloud-aware routes can read per-user state. Legacy ownerless keys (`user_id IS NULL`) keep returning None, stay anonymous, and continue working against every non-cloud route exactly as before. (3) A router-level dependency on the `/cloud/*` APIRouter enforces three independent fences for API-keyed callers: `user_id IS NOT NULL` (legacy keys → 401 with "recreate it from Settings → API Keys", an explicit recreate path rather than silent degradation), `can_access_cloud=True` (otherwise 403 with "Enable 'Allow cloud access' on the key"), and `build_authenticated_cloud` returning a service (otherwise 401 with the existing token-not-set error, unchanged for the JWT flow). The router-level dep duplicates the API-key validation done by the regular auth gate (router-level deps run before route-level deps in FastAPI, so `request.state` isn't populated yet); the cost is one extra `SELECT FROM api_keys` per cloud request, bounded and cheap with the `key_prefix` index. (4) The create route stamps `user_id = current_user.id` from the creator and rejects `can_access_cloud=True` when auth is disabled (no per-user `cloud_token` storage exists in that mode; fail loudly at create time rather than silently producing a non-functional key). The PATCH route rejects flipping `can_access_cloud` to True on a legacy ownerless key for the same reason: force a recreate. (5) `APIKeyResponse` exposes `user_id` so the UI can show ownership at a glance: a "Cloud" badge for cloud-enabled keys and a "Legacy" badge with a hover tooltip ("Created before per-user ownership; recreate to use cloud access") for ownerless rows. The form gains an "Allow cloud access" checkbox, default off.

  Migration: two idempotent `ALTER TABLE api_keys ADD COLUMN` statements (`user_id INTEGER REFERENCES users(id) ON DELETE CASCADE` and `can_access_cloud BOOLEAN DEFAULT 0`) plus an index on `user_id` for the auth gate's owner→keys lookup that runs on every API-keyed request.
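Stripped of FastAPI plumbing, the three fences are just an ordered decision function. A simplified sketch of that contract (the `ApiKey` shape and `check_cloud_fences` name are illustrative, not the actual router dependency):

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class ApiKey:
    user_id: Optional[int]   # None on legacy rows created before ownership
    can_access_cloud: bool   # opt-in flag, default False


def check_cloud_fences(key: ApiKey,
                       owner_cloud_token: Optional[str]) -> Tuple[int, str]:
    """Return (HTTP status, message) for an API-keyed /cloud/* request.

    Fence order mirrors the entry above: ownership first, then the
    opt-in flag, then the owner's stored cloud token.
    """
    if key.user_id is None:
        return 401, "Legacy API key: recreate it from Settings → API Keys"
    if not key.can_access_cloud:
        return 403, "Enable 'Allow cloud access' on the key"
    if not owner_cloud_token:
        return 401, "Cloud token not set"
    return 200, "ok"


# A legacy key is rejected even if the flag is somehow set:
assert check_cloud_fences(ApiKey(None, True), "tok")[0] == 401
# An owned key without the opt-in flag:
assert check_cloud_fences(ApiKey(7, False), "tok")[0] == 403
# Owned + flagged + token present passes through:
assert check_cloud_fences(ApiKey(7, True), "tok")[0] == 200
```

Ordering matters: checking the flag before ownership would give legacy keys a misleading "enable the flag" message for a state that can only be fixed by recreating the key.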
  i18n: 5 new keys (`settings.cloudAccess`, `settings.cloudAccessDescription`, `settings.cloudBadge`, `settings.legacyKey`, `settings.legacyKeyTooltip`) added to all 8 locales; English and German are fully translated, the other 6 locales are seeded with English copies pending native translation (matching the project's existing flow for newly added user-facing features). 9 backend integration tests in `test_api_key_cloud_access.py`: create stamps owner + cloud flag, defaults off when not asked for, rejected when auth is disabled (no per-user storage), PATCH rejected on legacy keys; the cloud router rejects legacy keys with the recreate copy, rejects owned-but-unflagged keys with the enable-cloud-access copy, lets owned-and-flagged keys through with the owner's `cloud_token` in the response, and leaves JWT callers unaffected (the gate is a no-op for non-API-keyed requests); user-delete CASCADEs the API keys via the explicit DELETE in the route. 2 frontend SettingsPage tests pin the badge-rendering matrix (Cloud badge present on `can_access_cloud=true`, Legacy badge present on `user_id=null`, neither rendered on a normal owned non-cloud key) and the create-form contract (toggling "Allow cloud access" results in `can_access_cloud=true` in the POST body). The new fence is the only behavioural change for existing API keys: keys created before this release become "legacy" rows and are rejected at `/cloud/*` with the recreate message; every other endpoint they were used against (queue, status, control) is untouched.
- **Home Assistant addon detection: Settings → Updates and the in-app update banner now defer to the HA Supervisor (#1167, reported by Spegeli).** Bambuddy already shipped `HA_URL` / `HA_TOKEN` env-var support specifically labelled "for HA Add-on deployments" (#283), and a community-maintained HA addon (hobbypunk90/homeassistant-addon-bambuddy) exists upstream, so an HA-supervised installation is a real first-class deployment shape. Until now, though, the update UI didn't know about it: HA addon users got the same "Update available!" banner as everyone else and, if they clicked through to Settings, saw the docker-compose snippet (`docker compose pull && docker compose up -d`), which they cannot run from inside an HA addon container; that's the Supervisor's job.

  Detection uses the canonical signal: the HA Supervisor injects `SUPERVISOR_TOKEN` into every addon container, and that variable is not set in any other environment. A new `_is_ha_addon()` helper in `backend/app/api/routes/updates.py` flips a request-level boolean, which `/updates/check` surfaces as `is_ha_addon: bool` plus an extended `update_method: 'git' | 'docker' | 'ha_addon'` enum. The HA check runs before the Docker check on `/updates/apply` because HA addons are Docker containers; checking Docker first would mis-classify them and serve the wrong message. The response also keeps `is_docker: true` alongside `is_ha_addon: true` so older frontend bundles still hit a managed-deployment branch (degrading to the Docker UX) instead of rendering an in-app Install button that can't work. The frontend branches identically: `SettingsPage.tsx`'s update card checks `is_ha_addon` first and renders "Updates are managed by the Home Assistant Supervisor. Open Settings → Add-ons → Bambuddy in Home Assistant to install the new version." in place of the docker-compose hint; `Layout.tsx`'s update banner is suppressed entirely for HA addons, since the HA Supervisor's own update notification already surfaces the new version natively in the HA UI, and a duplicate Bambuddy banner would just be noise linking to a page that says "go to HA". Plain Docker deployments are unaffected: the existing docker-compose hint and the in-app banner still render the same way they did. Localised across all 8 UI languages (en/de/fr/it/ja/pt-BR/zh-CN/zh-TW) with full translations of the new `settings.updateViaHomeAssistant` string.

  6 new backend tests and 4 frontend tests pin the contract: 3 backend unit tests for `_is_ha_addon()` (env var present → true, absent → false, empty string treated as unset to guard against shells that export it empty); 1 backend integration test for the HA-precedes-Docker rejection on `/updates/apply` (asserts the message says "Home Assistant" and not "Docker Compose"); 2 backend integration tests for `/updates/check` covering the HA-addon branch (`update_method == "ha_addon"`, both flags true) and the plain-Docker branch (`is_ha_addon: false`, `update_method == "docker"`); 2 frontend SettingsPage tests pin the mutually exclusive UI rendering (the HA branch shows the HA copy and not the docker-compose snippet; the Docker branch shows the snippet and not the HA copy; neither shows the Install button); 2 frontend Layout tests pin the banner suppression for HA and its retention for plain Docker.
- **OIDC auto-created users now get readable usernames and land in a configurable group (#1173).** Two improvements to the OIDC auto-create flow: (1) Username derivation: Bambuddy now derives the username from
`preferred_username`, then `name`, before falling back to the opaque `provider_sub[:30]`. Each candidate is sanitized independently (alphanumeric plus `.`/`-`/`_`, whitespace collapsed, a deduplication suffix appended on collision), so a value that strips to empty (e.g. `"!!!"`) correctly falls through to the next option rather than silently producing `"oidcuser"`. (2) Default group: each OIDC provider gains a `default_group_id` field. When set, auto-created users are placed in that group; when unset, the existing "Viewers" fallback is preserved, so behaviour is unchanged for existing deployments. The column is nullable with `ON DELETE SET NULL`; SQLite does not enforce FK constraints here, so a deleted configured group falls through to Viewers at runtime. `default_group_id` is validated on create/update (422 on a non-existent group) and exposed in the OIDC settings form as a group dropdown. Limitation: to clear a configured default group, delete the group or select a different one; an explicit reset-to-null is not currently supported.
- **Filament Track Switch (FTS) support: the print modal filament dropdown is no longer empty when an X2D / H2D has the FTS accessory installed (#1162, reported by mkavalecz).** When the FTS accessory is installed, the printer's MQTT changes one nibble of the per-AMS `info` bitmask: bits 8-11 flip from a fixed extruder ID (0x0 / 0x1) to 0xE ("uninitialized"), because the AMS is no longer wired to a single nozzle; the FTS dynamically routes any slot to either extruder. Bambuddy's MQTT parser already skipped 0xE entries when building `ams_extruder_map` (matching BambuStudio's reading for boot-time transient state), so with the FTS installed the map ended up empty, and the print modal's filament dropdown, which filters by `extruderId === nozzle_id` to prevent cross-nozzle assignment ("position of left hotend is abnormal" failures), filtered out every loaded slot. Net effect: an empty Filament Mapping dropdown on every dual-nozzle print with the FTS, even when the AMS was fully loaded with the right material.

  Detection comes from a new MQTT field, `print.device.fila_switch`, which is non-null only when the accessory is installed; it carries the routing topology as two arrays: `in[track]` = currently fed slot (-1 = empty) and `out[track]` = the extruder this track terminates at. The fix surfaces this through a new `FilaSwitchState` dataclass on `PrinterState` (`installed`, `in_slots`, `out_extruders`, `stat`, `info`) and the equivalent `FilaSwitchResponse` Pydantic schema on the `GET /printers/{id}/status` route. The frontend (`useFilamentMapping.ts` + `FilamentMapping.tsx`) skips the per-extruder filter when `printerStatus.fila_switch?.installed === true`, so any compatible AMS slot can satisfy any nozzle's filament requirement, since the FTS handles the routing. Slots currently fed into a track also get a routing badge in the dropdown (`[L]` or `[R]`) so the user can tell at a glance which slot the FTS is currently routing where; idle slots get no badge, since they can be routed to either extruder on demand. The hard "no cross-nozzle assignment" filter on real dual-nozzle printers without the FTS stays untouched (it still trips the same way it always has; `fila_switch == null` keeps the existing behaviour).

  4 backend tests in `test_bambu_mqtt.py::TestFilamentTrackSwitchDetection` (default not installed; detect from MQTT using the reporter's bundle; no `fila_switch` field stays not-installed; missing in/out arrays don't crash) and 2 frontend tests in `useFilamentMapping.test.ts` (FTS active drops the nozzle filter; an explicit `fila_switch: null` keeps the filter applied). Upstream fila_switch payloads with anything other than the documented shape are tolerated: `installed` flips on the presence of the field, the routing arrays default to empty lists if missing, and the dropdown skips the badge for slots not currently in `in_slots`.
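The tolerant parse of `print.device.fila_switch` described above can be sketched like this. It is an illustrative sketch of the contract, not the actual parser (the field handling follows the entry: presence flips `installed`, missing arrays default to empty):

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class FilaSwitchState:
    installed: bool = False
    in_slots: List[int] = field(default_factory=list)       # in[track] = fed slot, -1 = empty
    out_extruders: List[int] = field(default_factory=list)  # out[track] = destination extruder
    stat: Optional[int] = None
    info: Optional[int] = None


def parse_fila_switch(device: dict) -> FilaSwitchState:
    """installed flips on the mere presence of the field; missing or
    malformed routing arrays default to empty lists instead of crashing."""
    fs = device.get("fila_switch")
    if fs is None:
        # No FTS (or an explicit null): the existing per-nozzle filter stays.
        return FilaSwitchState()
    fs = fs if isinstance(fs, dict) else {}
    return FilaSwitchState(
        installed=True,
        in_slots=list(fs.get("in") or []),
        out_extruders=list(fs.get("out") or []),
        stat=fs.get("stat"),
        info=fs.get("info"),
    )


assert parse_fila_switch({}).installed is False
assert parse_fila_switch({"fila_switch": {}}).installed is True
assert parse_fila_switch({"fila_switch": {"in": [3, -1], "out": [0, 1]}}).in_slots == [3, -1]
```

The frontend badge logic then only needs `in_slots`/`out_extruders` pairs to map "slot 3 is currently routed to extruder 0" onto an `[L]`/`[R]` badge.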
### Fixed
- **MakerWorld P2S 3MFs failed to slice with "Param values in 3mf/config error: -1 not in range" (#1201, reported by inorichi).** Slicing any MakerWorld model sliced for the P2S (e.g. https://makerworld.com/en/models/1958872) bombed with `Slicer process failed (exit code 238)` and stderr listing `raft_first_layer_expansion: -1 not in range [0.0, 3.4e+38]` and `tree_support_wall_count: -1 not in range [0.0, 2.0]`.

  Root cause: BambuStudio writes `"-1"` into `Metadata/project_settings.config` for fields the user wants inherited from the parent process preset. The GUI handles this internally, but the headless CLI (the orca-slicer-api / bambu-studio-api sidecar) runs `StaticPrintConfig`'s range validator against the embedded settings before the `--load-settings` overrides apply, so the sentinel `"-1"` trips the field's lower-bound check and the CLI exits non-zero before our profile triplet is ever consulted. The `slice_with_profiles` path failed; the fallback to `slice_without_profiles` (which uses embedded settings only) also failed, because it reads the same `project_settings.config` and the same validator runs there too. Earlier in the codebase there's a `_strip_3mf_embedded_settings` function that tried to dodge this by removing the entire `project_settings.config` (plus `model_settings.config`, `slice_info.config`, `cut_information.xml`); that experiment was reverted because the strip broke `StaticPrintConfig` initialisation: a silent exit 0, no `result.json`, no stderr, masked by the fallback retry, which then produced wrong-printer output without telling anyone (the cautionary comment in `library.py:_run_slicer_with_fallback` records the lesson).

  The fix is surgical: a new `_sanitize_project_settings_sentinels(zip_bytes)` opens the embedded config, removes only allowlisted keys when their value is exactly `"-1"`, and re-zips. The allowlist (`_PROJECT_SETTINGS_SENTINEL_KEYS`) starts with the two fields from this report (`raft_first_layer_expansion`, `tree_support_wall_count`) plus `prime_tower_brim_width` (a known sentinel cited in the strip-experiment comment block from earlier reports). Other fields, including non-allowlisted keys that happen to hold `"-1"` (e.g. `z_offset` set to `-1` deliberately by a user), are left untouched, so a blanket "-1 strip" can't silently corrupt legitimate negative values. The sanitiser runs before both the profile-driven path and the embedded-settings fallback, since both fail on the same input. Defensive fallbacks: it returns the original bytes unchanged when the input isn't a valid zip, doesn't contain `project_settings.config`, has no allowlisted sentinels present, the JSON is malformed, or the config root isn't a dict, so the caller can pass the result on without further checks. Geometry, thumbnails, color, multi-part data, and every other zip entry round-trip byte-identical (the previous full-strip experiment's failure mode can't reoccur).

  13 new unit tests in `test_project_settings_sentinel_sanitiser.py` pin the contract: each allowlisted key removed when its value is `"-1"` (parametrised across the allowlist); multiple sentinels removed at once; an allowlisted key with a legitimate non-sentinel value (`"0"`) preserved; a non-allowlisted key holding `"-1"` (`z_offset`) preserved; identity return when nothing needs sanitising; array-form values (per-filament/per-extruder lists) left alone (v1 handles scalar strings only, expand later if needed); other zip entries (`model_settings.config`, `slice_info.config`, `_rels` metadata, geometry) all preserved with byte-identical content; non-zip input passes through; missing `project_settings.config` passes through; malformed JSON passes through; non-dict JSON root passes through. Adding new sentinel keys: if a future report surfaces another field name in the slicer's `<field>: -1 not in range [...]` error, add the field to `_PROJECT_SETTINGS_SENTINEL_KEYS`; the rest of the code stays unchanged.
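A condensed sketch of the sanitiser's shape, illustrating the allowlist-plus-fallthrough contract described above (the real function and key list live in the codebase; names here are abbreviated):

```python
import io
import json
import zipfile

# Allowlisted keys, removed only when their value is exactly the "-1" sentinel.
SENTINEL_KEYS = {
    "raft_first_layer_expansion",
    "tree_support_wall_count",
    "prime_tower_brim_width",
}
CONFIG_PATH = "Metadata/project_settings.config"


def sanitize_sentinels(zip_bytes: bytes) -> bytes:
    """Strip allowlisted '-1' sentinels from the embedded project settings.

    Any structural surprise (not a zip, missing config, malformed or
    non-dict JSON, nothing to strip) returns the input unchanged, so the
    caller never needs to special-case the result.
    """
    try:
        with zipfile.ZipFile(io.BytesIO(zip_bytes)) as src:
            if CONFIG_PATH not in src.namelist():
                return zip_bytes
            cfg = json.loads(src.read(CONFIG_PATH))
            if not isinstance(cfg, dict):
                return zip_bytes
            doomed = [k for k in SENTINEL_KEYS if cfg.get(k) == "-1"]
            if not doomed:
                return zip_bytes
            for key in doomed:
                del cfg[key]
            out = io.BytesIO()
            with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as dst:
                # Every other entry round-trips untouched.
                for item in src.infolist():
                    data = src.read(item.filename)
                    if item.filename == CONFIG_PATH:
                        data = json.dumps(cfg, indent=4).encode()
                    dst.writestr(item, data)
            return out.getvalue()
    except (zipfile.BadZipFile, json.JSONDecodeError):
        return zip_bytes
```

Note the asymmetry the entry insists on: `z_offset: "-1"` survives because only allowlisted keys are ever candidates for removal.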
- **Archive created with wrong plate metadata when consecutive plates of the same model are printed back-to-back (#1204, reported by BurntOutHylian).** Print Plate 2 of any multi-plate project, let it complete, then immediately print Plate 1: the resulting archive was named "MyModel - Plate 2" with Plate 2's filament slots and slicer estimate, even though Plate 1 was the print actually running.

  Root cause was an MQTT lag in the `print_start` data: the trigger fires on a `gcode_file` change (`bambu_mqtt.py:2781-2786`, the field carrying `/data/Metadata/plate_N.gcode`, which is plate-specific and always fresh), but `subtask_name` (model-level, e.g. "MyModel - Plate 2") can still echo the previous job in the same MQTT batch. The FTP candidate list in `main.py:1974` is built from `subtask_name` first, so the previous Plate 2 upload, still resident on the printer's FTP from the just-completed print, got picked up and fed into archive creation. The 3MF parser then read `_plate_index=2` from the wrong file's `slice_info.config` and locked Plate 2's name + estimate + per-slot filament data into the row at creation, with no follow-up to correct it. Reporter BurntOutHylian's diagnosis nailed it: the parser already extracts `_plate_index` from inside the 3MF (`archive.py:154`), and `parse_plate_id()` (`printer_manager.py:678`) already extracts the plate from `gcode_file`; those two values just weren't being compared.

  Fix: new helpers `peek_plate_index_in_3mf()` (a cheap zip read of `Metadata/slice_info.config` only, returning the plate index) and `swap_plate_suffix()` (rewrites a trailing " - Plate N" or "_plate_N"; both forms appear in real subtask_names, see `test_print_start_expected_promotion`) in `archive.py`. After a successful FTP download in `_handle_print_start`, the new validation block in `main.py` peeks the downloaded 3MF's plate index, compares it against `parse_plate_id(filename)`, and on mismatch retries the FTP fetch with a corrected `subtask_name`. If the retry finds a 3MF whose plate matches, the wrong file is dropped and the corrected one is used, so the archive name + estimate + slots all reflect the actual plate.
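The suffix rewrite reduces to a single case-insensitive regex that preserves the original separator and casing. An illustrative sketch of that behaviour (the real helper lives in `archive.py`):

```python
import re
from typing import Optional

# Matches a trailing " - Plate N", "-Plate N", or "_plate_N" (any casing),
# capturing everything except the number so the separator round-trips as-is.
_PLATE_SUFFIX = re.compile(r"^(.*?[-_ ]plate[_ ]?)(\d+)\s*$", re.IGNORECASE)


def swap_plate_suffix(name: Optional[str], plate: int) -> Optional[str]:
    """Rewrite the plate number in a subtask name; None when no suffix is found."""
    if not name:
        return None
    m = _PLATE_SUFFIX.match(name)
    if not m:
        return None
    return f"{m.group(1)}{plate}"


assert swap_plate_suffix("MyModel - Plate 2", 1) == "MyModel - Plate 1"
assert swap_plate_suffix("Box3.0_(2)_plate_5", 3) == "Box3.0_(2)_plate_3"
assert swap_plate_suffix("NoSuffixHere", 1) is None
```

Keeping the matched separator and "Plate"/"plate" token verbatim matters: the corrected name must byte-match what BambuStudio actually uploaded, or the retried FTP lookup misses again.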
If the retry can't find a matching file (or no swap is possible becausesubtask_namehad no plate suffix to swap), the wrong 3MF is dropped and the existing no-3MF fallback (main.py:2155) creates an archive without metadata; the stalesubtask_nameis overridden to the corrected one (or cleared sofilenamewins) so the fallback'sprint_nameat least reflects the right plate rather than locking in a misleading name. The validation only fires whenparse_plate_id(filename)returns a value, so single-plate / non-Bambu / cloud-named jobs are unaffected. Defence in depth: the cache eviction is implicit —temp_path.unlink()makes the wrong-file cache entry self-clean on next access via the existingget_cached_3mfevict-on-miss path (bambu_ftp.py:660-664); no separate cache invalidation needed. 17 new unit tests intest_archive_plate_validation.pypin the helpers:peek_plate_index_in_3mfreturns the index for a valid 3MF, None for missing slice_info, None for missing index metadata, None for non-zip files, None for missing files, None for non-integer index values;swap_plate_suffixhandles the spaced "Plate N" form (capitalised + lowercase + tight-hyphen), the underscored "_plate_N" form (theBox3.0_(2)_plate_5case from the existing fixture), case-insensitive matching, returns None for names without a recognised suffix, returns None for None input, and preserves separator casing so the corrected name matches what BambuStudio actually uploaded. - SpoolBuddy kiosk screen never blanked while a load cell was producing noisy readings (reported during user testing) — A noisy HX711 / load-cell mount that bounced the reported weight by ≥50 g around its midpoint kept the kiosk display permanently lit. The wake gate in
`spoolbuddy/daemon/main.py:scale_poll_loop` (`WAKE_THRESHOLD = 50`) checked the absolute change against `last_wake_grams` and, on every trip, advanced `last_wake_grams` to the new noisy reading — so the next bounce back also exceeded the threshold, fired `display.wake()` again, and the screen never stayed off long enough for swayidle's `wlopm --off HDMI-A-1` to mean anything. Symptom in the field: ~3–30 s between `Wake signal sent via FIFO` log lines, exactly correlated with the bigger noise spikes — the screen flicker-blanked and immediately turned back on. Diagnosis from a real device's `journalctl -u spoolbuddy.service`: `scale/reading` POSTs every ~1 s (`REPORT_THRESHOLD=2` g, so the load cell was reporting ≥2 g changes constantly) interleaved with periodic wake signals. Fix: the wake gate now requires the scale's `stable` flag (True only when consecutive readings agree within 2 g over a 1 s window — already produced by `ScaleReader.read()` and previously only forwarded as telemetry to the backend). Unstable noise can no longer fire wake AND can no longer poison `last_wake_grams`, since the threshold check + the assignment are both gated on `stable`. Real spool placements / removals produce a settled post-event reading and continue to wake the screen as intended. 3 new regression tests in `spoolbuddy/tests/test_main.py::TestScalePollLoopWakeGating`: noisy ±60 g unstable readings never wake (the original bug); a settled >50 g jump wakes; a noise burst between two settled readings doesn't poison `last_wake_grams` (asserts the second stable wake still fires from the original baseline rather than the noisy peak).

- Print-complete notification reported the slicer's pre-print estimate instead of the actual elapsed time (#1198, reported by BurntOutHylian) —
`_background_notifications` in `main.py:3434` built `archive_data` for the completion notification with `print_time_seconds` (the slicer's estimate parsed from the 3MF at archive creation), and `notification_service.py:909-910` then formatted that field straight into the `{{duration}}` template variable. Net effect: a print cancelled 2 minutes into a 3-hour estimate told the user "duration: 3h" — wrong by orders of magnitude for any cancellation, abort, slow first layer, or any print whose actual elapsed time diverged from the slicer's guess. The companion field `actual_filament_grams` was already scaled by progress for partial prints (line 3445), so filament was right while time was wrong. The `print_start` notification uses a separate `{{estimated_time}}` variable (line 838), so `{{duration}}` semantically should always have meant "actual elapsed" — it was just being read from the wrong source. Two-part fix: (1) `main.py:3434` now computes `actual_time_seconds = int((archive.completed_at - archive.started_at).total_seconds())` from the persisted timestamps when both are present and the elapsed time is positive, and adds it as a new key in `archive_data`; `notification_service.py:909-916` prefers `actual_time_seconds` and falls back to `print_time_seconds` only when timestamps weren't recorded (so the notification still has something if the elapsed time can't be derived). (2) `main.py:3172` adds `"cancelled"` to the set of statuses that get `completed_at` set when `update_archive_status` runs — pre-fix only `completed`, `failed`, and `aborted` got a timestamp, but `cancelled` (Bambuddy queue UI cancellation, distinct from touchscreen aborts, which already set `completed_at`) was deliberately excluded for reasons that no longer hold.
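The resolution order for `{{duration}}` can be sketched as a small pure function; the field names mirror the ones above, but the function itself is illustrative, not the project's code:

```python
from datetime import datetime
from typing import Optional

def resolve_duration_seconds(
    started_at: Optional[datetime],
    completed_at: Optional[datetime],
    print_time_seconds: Optional[int],
) -> Optional[int]:
    """Prefer the actual elapsed time; fall back to the slicer estimate."""
    if started_at and completed_at:
        elapsed = int((completed_at - started_at).total_seconds())
        if elapsed > 0:
            return elapsed  # actual_time_seconds wins
    # estimate, or None -> template renders "Unknown"
    return print_time_seconds

# a 2-minute cancellation inside a 3-hour estimate:
start = datetime(2026, 5, 4, 12, 0, 0)
end = datetime(2026, 5, 4, 12, 2, 0)
```

With both timestamps present, the 2-minute elapsed wins over the 3-hour estimate; with no timestamps, the estimate still surfaces.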
Audited every `completed_at` consumer in the backend (`archives.py:80, 333-337, 768-770, 723-731, 1722-1813`, `main.py:3229`, `projects.py:1475, 1489`) and frontend (`PrintersPage.tsx:2854`, `QueuePage.tsx:1053`, `StatsPage.tsx:902`); none rely on `completed_at IS NULL` to mean "this is a cancelled print" — the three explicit-status filters already restrict to `status == "completed"` and the rest are `completed_at or created_at` fallback expressions that gracefully accept either. Knock-on benefit: the statistics-totals aggregation at `archives.py:723-731` (which previously added the full slicer estimate to the total when `completed_at IS NULL`) now adds the actual elapsed time for cancelled prints too — a 2-minute cancellation contributes 2 minutes instead of 3 hours. Existing cancelled rows in the DB stay with `completed_at=NULL`; only new cancellations going forward get the timestamp. 3 new regression tests in `test_notification_service.py::TestNotificationVariableFallbacks` pin the contract: `{{duration}}` reflects `actual_time_seconds` when present (a 2 m elapsed wins over the 3 h estimate), falls back to `print_time_seconds` when the actual is missing (the 1 h estimate is still surfaced rather than "Unknown"), and surfaces "Unknown" when both are absent.

- Frontend served behind a path-prefixed reverse proxy (e.g.
`/bambuddy/` on Traefik / nginx / Cloudflare Tunnel) loaded a blank page (#1195, reported by Spegeli, follow-up to #1167) — Vite's default `base: '/'` emits absolute asset URLs in the built `index.html` (`/assets/index-*.js`, `/assets/index-*.css`, `/manifest.json`, `/img/...`, `/sw-register.js`), which assumes the SPA is always served at the host root. Behind any path-prefixed reverse proxy — Traefik with a path prefix, nginx `location /bambuddy/`, Cloudflare Tunnel with path routing, Synology / Unraid reverse-proxy panels — the browser then requests those absolute paths from the host root, the proxy doesn't see them, and the upstream serves either a 404 or HTML for an unknown path with `Content-Type: text/plain` / `text/html`; the browser logs `Refused to apply style from '.../assets/index-*.css' because its MIME type is 'text/plain'` and renders a blank white page. Two-line fix: `frontend/vite.config.ts` sets `base: ''` so Vite's HTML transform rewrites every absolute asset reference to a relative one (`./assets/...`, `./manifest.json`, `./img/...`, `./sw-register.js`) — these resolve correctly against whatever subpath the document was served from. `frontend/public/sw-register.js` is a public-dir file Vite copies as-is, so its `navigator.serviceWorker.register('/sw.js')` call is changed to `register('sw.js')` (relative); the SW scope is automatically pinned to whatever subpath the document loaded from, which is exactly what every reverse-proxy-at-subpath user wants. Net effect: an `https://example.com/bambuddy/` deployment now loads correctly without any frontend rebuild on the user's side. Out of scope for this change: runtime API base detection — `API_BASE = '/api/v1'` in `frontend/src/api/client.ts` is still absolute, so API calls still go to the host root. This is intentional.
The fix above closes the immediate "blank page" report; making the API base, React Router basename, PWA manifest scope, and service-worker scope all subpath-aware would mean rewriting how the SPA bootstraps and would touch PWA-install state, push-notification subscriptions, and deep-link reload semantics. The supported way to embed Bambuddy in Home Assistant remains the Webpage panel + `TRUSTED_FRAME_ORIGINS` path documented in the wiki — Bambuddy reachable on a stable URL (HTTP for HTTP-only HA, HTTPS via your own reverse proxy for HTTPS HA / Nabu Casa / custom-domain), iframe-embedded via the HA dashboard. HA Ingress / addon-based subpath embedding (which would require the runtime path detection above) is not supported by core. Documented explicitly in `docker.md` so users hit the right pattern first.

- iframe embedding from trusted origins (e.g. Home Assistant Webpage panel) no longer blocked (#1191, reported by azurusnova) — Bambuddy ships strict anti-clickjacking headers (
`X-Frame-Options: SAMEORIGIN` and CSP `frame-ancestors 'none'`) by default, which protects internet-exposed deployments from being embedded by hostile sites. But it also broke a documented integration path: Home Assistant's Webpage dashboard panel embeds Bambuddy via `<iframe>` on a different origin (HA on `:8123`, Bambuddy on `:8000`), and the SAMEORIGIN value is port-strict, so even same-LAN trusted setups got "refused to connect". A new `TRUSTED_FRAME_ORIGINS` env var takes a comma-separated list of `scheme://host[:port]` origins; when set, the middleware drops `X-Frame-Options` (modern browsers honor `frame-ancestors`, and the legacy `ALLOW-FROM <url>` syntax is deprecated and inconsistent across vendors) and the CSP `frame-ancestors` directive becomes `'self' <origin> <origin>...`. The default — an empty env var — keeps the strict `'none'` behavior, so Docker / bare-metal users without HA see no behavioural change. Origin validation happens at startup: only `http://` and `https://` are accepted; paths / query / fragments / wildcards are rejected with a warning (one bad entry doesn't take the deployment down — it's just dropped from the allowlist). The `gcode-viewer` route's `frame-ancestors 'self'` (same-origin embed for the in-app gcode preview iframe) also includes the allowlist when configured, so HA users embedding Bambuddy can still open the gcode viewer modal. 16 new tests in `test_security_headers.py`: 12 unit tests for the env-var parser (empty / unset / single / multiple / whitespace / empty segment / non-http scheme dropped / missing host dropped / path dropped / query+fragment dropped / wildcard dropped / trailing slash kept) and 4 integration tests for the middleware (default-strict emits SAMEORIGIN + `'none'`, allowlist relaxes CSP and drops X-Frame-Options, the `/docs` branch also honors the allowlist, other security headers like X-Content-Type-Options and Referrer-Policy are unaffected in both modes). Documented in the Docker env-var reference page on the wiki and in `.env.example`.
- Virtual Printer queue mode auto-dispatched onto the wrong colour when multiple compatible printers were available (#1188, reported by EdwardChamberlain) — Sending a sliced 3MF to a queue-mode VP via Orca / Studio with auto-dispatch on caused Bambuddy to schedule the job onto a printer of the right model but the wrong loaded filament: a print sliced for matte white PLA would land on a printer with no white loaded, and the printer would start the job using whatever was the closest available match. Edward's diagnosis was exact (
`virtual_printer/manager.py:325-326`): the manual `/api/v1/print-queue/` POST flow extracts the 3MF's per-slot filament requirements at queue-add time and writes `required_filament_types`, `filament_overrides`, and `ams_mapping` on the resulting `PrintQueueItem`, so the scheduler's color-match enforcement (`print_scheduler.py:512` — keys on `filament_overrides[].force_color_match === true`) actually runs. The VP queue-write path (`_add_to_print_queue`) skipped all of that and built a bare `PrintQueueItem` with only `printer_id`, `target_model`, `archive_id`, `plate_id`, `position`, `status`, `manual_start`. Net effect: the scheduler reached the model-only-matching fallback and accepted the first available printer of the target model regardless of loaded colour, exactly as he described. Fix: the scheduler's existing `_get_filament_requirements` 3MF parser is extracted into a shared helper (`backend/app/services/filament_requirements.py:extract_filament_requirements`) so the VP path can reuse it at upload time. The VP's `_add_to_print_queue` now calls that helper after archiving and populates `required_filament_types` unconditionally (cheap; helps the scheduler reject obvious type mismatches even without `force_color_match`), and writes `filament_overrides` with `force_color_match: true` per consumed slot when a new per-VP setting `queue_force_color_match` is on. The default is off to preserve current behaviour for upgraders — a fresh-install user who wants the bug-free behaviour flips the toggle once on the VP card; an existing user gets exactly the model-only matching they had before until they opt in. Auto-dispatch onto the wrong material happens loudly enough that anyone affected can find the toggle. Why default-off rather than default-on: existing automation that relies on "send to queue VP, get printed somewhere" without caring about colour shouldn't silently start blocking on colour matching after an upgrade. The toggle has clear UI copy (`virtualPrinter.queueForceColorMatch`) explaining the trade-off.
Defence in depth: a malformed or unparseable 3MF (e.g. fake bytes from a misconfigured upload tool) leaves both fields None and the scheduler falls back to model-only matching, matching pre-fix behaviour for the unhappy path. The scheduler itself is unchanged — it already handled `force_color_match` correctly when the field was populated; the bug was purely the VP path not populating it. Schema: one nullable column `virtual_printers.queue_force_color_match BOOLEAN DEFAULT 0/FALSE` (Postgres-safe) added via the existing `_safe_execute` migration pattern. API: the `VirtualPrinterCreate` and `VirtualPrinterUpdate` Pydantic schemas + the `_vp_to_dict` response shape carry `queue_force_color_match`, the create + update routes wire it through to the model, and the `VirtualPrinterInstance` constructor + `VirtualPrinterApi` TypeScript client mirror the field. UI: a new toggle on `VirtualPrinterCard` rendered only when `mode === 'print_queue'` (parallels the existing `auto_dispatch` toggle's mode-gating), with `pendingAction` state for the in-flight indicator. i18n: new `virtualPrinter.queueForceColorMatch.{title,description}` keys in all 8 locales — English fully translated, German fully translated, the other 6 locales seeded with English copy pending native translation (matches the project's existing flow for newly added user-facing features). 11 new tests: 8 in `test_filament_requirements.py` covering the extracted parser end-to-end (per-slot dicts, zero-use slots filtered, plate filtering, no-plate flat-walk fallback, unparseable / missing / config-less files, sorted output); 3 in `test_virtual_printer.py::TestVirtualPrinterInstance` covering the VP write path (setting off → only `required_filament_types` populated; setting on → `filament_overrides` populated with `force_color_match: true` per slot; unparseable 3MF → both fields None, no crash). Existing scheduler tests still pass against the refactored helper (verified end-to-end across the scheduler / virtual_printer / print_queue / filament test suites — 479 tests).
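The VP write path's new population step reduces to a small decision over the parsed slots; a hedged sketch with hypothetical data shapes (the real `extract_filament_requirements` return type isn't shown in the source):

```python
from typing import Any, Optional

def build_queue_filament_fields(
    slots: Optional[list[dict[str, Any]]],  # hypothetical shape, e.g.
    #   [{"slot": 1, "type": "PLA", "color": "#FFFFFF"}]
    queue_force_color_match: bool,
) -> tuple[Optional[list[str]], Optional[list[dict[str, Any]]]]:
    """Sketch of the two PrintQueueItem fields the VP path now populates.

    - required_filament_types is always written when the 3MF parsed
      (cheap type-level rejection for the scheduler).
    - filament_overrides carries force_color_match only when the new
      per-VP toggle is on; default off preserves model-only matching.
    - An unparseable 3MF (slots is None) leaves both fields None.
    """
    if not slots:
        return None, None
    required_types = sorted({s["type"] for s in slots})
    overrides = None
    if queue_force_color_match:
        overrides = [
            {"slot": s["slot"], "color": s["color"], "force_color_match": True}
            for s in slots
        ]
    return required_types, overrides
```

With the toggle off, only the type list is written, so upgraders keep the pre-existing scheduler behaviour.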
Edward's "out of scope nice-to-have" suggestion of a "Requires Color Match" pill on queue cards is deferred to a follow-up so this PR stays scoped to his repro.

- Slicing a library file via API key fails with "no Bambu Cloud session is stored" even when the key has cloud access (#1182 follow-up, reported by turulix) — Tim shipped the headless slicing pipeline #1182 was filed for, then hit a second wall:
`GET /api/v1/cloud/settings` returned the cloud preset IDs correctly (the `/cloud/*` router-level gate from #1182 was doing its job), but `POST /api/v1/library/files/{id}/slice` with those IDs in the request body failed the slice job with `error_status: 400, error_detail: "Cloud preset selected for printer, but no Bambu Cloud session is stored. Sign in to Bambu Cloud and retry."` Cause: the `/cloud/*` fix routes the API key's owner User through `cloud_caller` (a router-level gate stashes the owner on `request.state.api_key_owner`; route-level deps pull it back out), but the slice route lives on `/library/*` — a different router with no gate — so when the auth dep returned `None` for the API-keyed request, the slice route passed `current_user_id=None` straight through to `_run_slicer_with_fallback` → `_resolve_cloud(db, user=None)` → `get_stored_token(db, None)`, which falls back to the auth-disabled global `Settings` table. That table is empty in auth-enabled deployments, so cloud preset resolution failed even though the key's owner User had a perfectly valid `cloud_token` on their User row. The fix is a new route-level dep `resolve_api_key_cloud_owner` in `cloud.py` that's permissive (returns the owner User if the key has `can_access_cloud=true`, otherwise None — it never raises), so it can be safely added to non-`/cloud/*` routes without breaking the local-presets path: a request with an API key that lacks the cloud scope still slices fine against local presets, and only fails with the existing "no Bambu Cloud session" error if it actually selects a cloud preset. Wired into `POST /library/files/{id}/slice` (Tim's blocker) and `GET /slicer/presets` (the SliceModal preset dropdown source — same root cause, would have hit anyone using the UI through an API-keyed reverse proxy). Both routes now resolve the cloud-token owner via `current_user or api_key_cloud_owner` instead of `current_user.id if current_user else None`.
The auth gate's None-return for API keys is unchanged — keeping that fix scoped to the routes that actually need cloud-token resolution prevents accidental scope creep into other routes that fence on `current_user is None`. 4 new integration tests in `test_api_key_cloud_access.py::TestSliceRouteCloudOwnerResolution` pin the dep contract: returns the owner for a key with `can_access_cloud=True` and a valid owner; returns None for an owned key without the cloud scope (so cloud presets still 400 cleanly, local presets still slice); returns None for legacy ownerless keys; no-op for JWT and anonymous callers.

- Project cover photo thumbnail too small to recognise the print (#1155 follow-up, reported by smandon) — The 40×40 thumbnail smandon's MakerWorld download workflow relied on for "is this the model I'm looking for?" wasn't readable at that size; he asked for either a larger thumbnail or a click-to-enlarge full preview. Enlarging the thumbnail itself would shift the card layout and cost the dense grid he chose to use for browsing many projects, so the fix keeps the 40×40 thumbnail and shows a portal-mounted 384×384 popover on hover. The popover renders the full image in
`object-contain` so tall portrait MakerWorld photos aren't cropped to a square, has `pointer-events-none` so it can't intercept hover and create a flicker loop, and `z-[100]` so it stacks above every sibling card in the grid. Why a portal: ProjectCard carries `overflow-hidden` (for its rounded-corner clipping and the color accent bar), so an in-tree popover gets clipped by the card the moment it extends past the card's bounds — exactly the cut-off behaviour smandon reported on the second iteration. Rendering via `createPortal(..., document.body)` escapes every ancestor clipping context, and `position: fixed` with measurements from `getBoundingClientRect()` keeps the popover pinned next to the thumbnail regardless of where the card sits in the grid. Edge handling: if the thumbnail is near the viewport's right edge the popover flips to the LEFT side of the thumbnail; the vertical position is clamped so the popover never overflows the window top or bottom. The thumbnail's own `onClick` is `stopPropagation`'d so hovering the popover area never accidentally triggers the parent card's "open project" navigation. 2 new tests in `ProjectsPage.test.tsx` pin the contract: hovering mounts the popover at document.body level (not nested in the card — a future refactor that drops the portal would re-introduce the clipping bug, and the test catches that); leaving unmounts it; the popover img points at the same cover-image URL as the small thumbnail with `object-contain`; cards without a `cover_image_filename` never mount the portal-rendering component (so a hover doesn't flash an empty preview).

- Spool edit form lost the Extra Colours value on reopen, Dual Color rendered identically to Gradient, and the Sparkle / checkerboard visuals were too subtle (#1154 follow-up, reported by maugsburger) — Four issues against the multi-colour swatch work that landed for #1154. (1) Extra Colours input didn't hydrate on edit reopen:
ColorSection's draft buffer was seeded once via `useState(formData.extra_colors)`, but `SpoolFormModal` opens before its own `useEffect` populates `formData` from the spool record — so by the time the saved value landed, the input's local state had already been initialised to `''` and never re-synced. The COLOR preview banner above the input rendered correctly (it consumes formData directly), making it obvious the data WAS persisted; only the input was stuck blank, which the user then had to retype to save anything else. Fix: a ref-guarded `useEffect` resyncs `extraColorsDraft` when `formData.extra_colors` changes via an external update (e.g. the modal opening with a spool); the ref is updated inside `commitExtraColors` so the user's own typing is round-tripped without the resync clobbering it. (2) `Dual Color` and `Gradient` produced the same diagonal blend: `buildColorLayer` in `filamentSwatchHelpers.ts` ran the same `linear-gradient(135deg, ...)` for both effect types, so a "Dual Color" spool was visually indistinguishable from a "Gradient" one. Real dual-colour spools have two distinct bars on the reel — that's the whole point of the variant. Fix: when `effect_type` is `dual-color` or `tri-color`, build the colour layer as `linear-gradient(to right, c1 0% X%, c2 X% Y%, ...)` with CSS double-position stops (so the colour change is a hard line rather than a blend region) and equal-width segments across the stops; `gradient` keeps the original 135° smooth blend. The existing `multicolor` conic-gradient path is untouched. (3) The Sparkle effect was almost invisible on card-sized swatches: the original 4-dot pattern (each ~1 px) read fine on the small inline swatch but disappeared on the 60-pixel-tall inventory card banners — exactly where the user actually identifies a spool. Bumped to 13 flecks in mixed sizes (1 px / 1.5 px / 2 px) and varying opacity (0.65 → 1.0) to give a depth-of-field "metal flake" feeling, distinct from solid + multi-colour.
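The double-position-stop trick for the hard-split dual/tri-colour layer can be sketched language-agnostically — here in Python, building the kind of CSS string the helper above describes (names and exact stop layout are illustrative, not the project's `buildColorLayer`):

```python
def hard_split_gradient(colors: list[str]) -> str:
    """Equal-width hard-edged segments via CSS double-position stops.

    Each colour occupies [start%, end%] with no blend region, because the
    next colour re-declares the same boundary percentage.
    """
    n = len(colors)
    stops = []
    for i, color in enumerate(colors):
        start = round(100 * i / n, 2)
        end = round(100 * (i + 1) / n, 2)
        stops.append(f"{color} {start:g}% {end:g}%")
    return f"linear-gradient(to right, {', '.join(stops)})"
```

A smooth `gradient` variant would instead emit single-position stops at `135deg`, which is exactly why the two effects previously looked identical.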
(4) Checkerboard cell density scaled with the swatch: the previous helper put `repeating-conic-gradient(...)` in the `background-image` and the caller applied `background-size: cover`, so the same 4-cell pattern was either tiny squares on a small swatch or four huge squares on a card-sized banner. Made `buildFilamentBackground()` return `{ backgroundImage, backgroundSize }` with per-layer sizes — painted layers stay `cover`, the checkerboard gets a fixed 12 px tile so the cell density stays consistent regardless of element size and clearly reads as a transparency indicator rather than a multi-colour stripe. Updated the three existing call sites (`InventoryPage` group banner + spool card, `ColorSection` preview) to spread the returned style object directly. 8 new frontend tests cover the four fixes: hard-split contract for Dual/Tri Color (3 tests + 1 regression guard that Dual ≠ Gradient for the same stops); Sparkle prominence (≥ 10 distinct radial-gradient layers in the rendered background); checkerboard density (the last `backgroundSize` layer is a fixed pixel value, not `cover`); 4 hydration tests pinning the input restore path (fills when formData arrives via a parent update, resyncs when the spool changes mid-form, doesn't clobber live user typing, clears when the new spool has no extra_colors).

- Pending review card and the resulting archive name disagreed;
`.gcode.3mf` filename suffix wasn't fully stripped (#1152 follow-up, reported by smandon) — Two distinct holes in the original #1152 fix surfaced when smandon retested on the daily build. (1) Suffix stripping was incomplete: Bambu Studio's "Send to printer" dialog typically writes files like `Plate_1.gcode.3mf` (a sliced gcode payload wrapped in a 3MF container), but the archive's display stem was computed via `Path(name).stem`, which only drops the last suffix and left the user staring at `Plate_1.gcode` in the archive UI. (2) The review card and the archive disagreed on what the print was called: the pending-uploads panel always rendered the raw FTP filename, while the eventual `PrintArchive.print_name` resolved from the 3MF's embedded title (or, with the toggle on `filename`, the filename stem). Net effect: the user saw `Plate_1.gcode` in the review card and `Some Creator's Title` in the archive grid for the same item, with no toggle that flipped both views in lockstep. The fix has three pieces: a new `resolve_display_stem()` helper in `archive.py` that strips `.gcode.3mf` / `.3mf` / `.gcode` (case-insensitive) so both the archive and the review-side normalisation produce the same canonical stem; a new `PendingUpload.metadata_print_name` column populated at FTP-receive time by peeking at the 3MF's embedded title (so `/pending-uploads/` list calls don't have to reopen every 3MF on every render); and a new `PendingUploadResponse.display_name` computed field that mirrors `archive_print`'s exact precedence — `filename` toggle: stripped stem; `metadata` toggle (default): cached title or stripped stem. The frontend's `PendingUploadsPanel` reads `upload.display_name` (with `upload.filename` as a defensive fallback for any pre-migration row), and the raw filename is exposed as a tooltip so users can still inspect what actually arrived over FTP. The migration is one idempotent `ALTER TABLE pending_uploads ADD COLUMN metadata_print_name VARCHAR(255)` (Postgres/SQLite-safe); existing pending rows have NULL there and gracefully fall back to filename-stem behaviour.
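The stripping contract is simple enough to render as a sketch; a hypothetical version of `resolve_display_stem()` matching the rules above (longest suffix first, case-insensitive, full-path inputs reduced to the stem) — the real helper may differ:

```python
import os

def resolve_display_stem(name: str) -> str:
    """Strip .gcode.3mf / .3mf / .gcode (case-insensitive), longest first.

    Path components are dropped so full-path inputs resolve to the stem;
    dots in the middle of the name are preserved.
    """
    stem = os.path.basename(name)
    lowered = stem.lower()
    for suffix in (".gcode.3mf", ".3mf", ".gcode"):  # longest match wins
        if lowered.endswith(suffix):
            return stem[: -len(suffix)]
    return stem
```

Checking `.gcode.3mf` before `.3mf` is what `Path(name).stem` couldn't do — `stem` only ever drops the last dot-suffix.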
14 unit tests pin the stripping rules (`Plate_1.gcode.3mf` → `Plate_1`, mixed case, dots in the middle, edge `.3mf`-only / `.gcode`-only, full-path inputs); 6 integration tests pin the response contract (the default toggle uses the metadata title when present, falls back to the stripped stem when absent, the `filename` toggle overrides metadata, the `filename` toggle still strips the double suffix, `GET /{id}` exposes the same field, whitespace-only metadata behaves like absent); 3 frontend tests pin the review card's render path (resolved name shown, fallback to filename when display_name is empty, raw filename available via tooltip).

- SpoolBuddy SSH update fails with "permission denied for user spoolbuddy" after Bambuddy keypair rotation (reported during user testing) — Bambuddy's data dir at
`<DATA_DIR>/spoolbuddy/ssh/` can get recreated outside the daemon's control (volume remount, container recreate, fresh deploy), at which point `get_or_create_keypair()` generates a new ed25519 keypair. The SpoolBuddy daemon previously only fetched and deployed Bambuddy's public key at registration time (`/devices/register`), so any rotation after a successful registration left the device's `~/.ssh/authorized_keys` pointing at a defunct public half — every "Update" click from the Bambuddy UI then failed with `Connection closed by authenticating user spoolbuddy [preauth]` until the daemon was restarted manually. Worse, every prior successful registration appended a fresh entry to `authorized_keys` without ever pruning the old one, so a typical device accumulated 5+ stale Bambuddy-tagged keys (each one a permanent backdoor for whichever Bambuddy keypair held the matching private half at the time it was deployed). Two-pronged fix: (1) the heartbeat response (`HeartbeatResponse`, `routes/spoolbuddy.py:282`) now carries the current `ssh_public_key` alongside the existing `pending_command` / calibration fields, so the daemon's heartbeat picks up a key rotation within one cycle instead of needing a service restart; the same `try/except Exception: pass` pattern as the registration response keeps a missing/unreadable backend key from breaking telemetry. (2) `_deploy_ssh_key()` in `daemon/main.py` now syncs rather than appends — it strips every line tagged `bambuddy-spoolbuddy`, writes the current key once, and is a no-op when already in sync (so it doesn't churn the file every heartbeat). User-managed entries (any line not tagged `bambuddy-spoolbuddy`) are preserved untouched.
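The sync-not-append behaviour can be sketched as a pure function over the file's lines (illustrative only — the daemon's `_deploy_ssh_key()` also handles file modes, mtime-preserving no-ops, and write errors):

```python
TAG = "bambuddy-spoolbuddy"  # comment tag on Bambuddy-managed entries

def sync_authorized_keys(lines: list[str], current_key: str) -> list[str]:
    """Drop every Bambuddy-tagged entry, then append the current key once.

    User-managed lines (no tag) pass through untouched; calling this twice
    with the same key yields the same result, so a heartbeat-driven resync
    never grows the file.
    """
    kept = [line for line in lines if TAG not in line]
    kept.append(f"{current_key} {TAG}")
    return kept
```

The tag is what makes pruning safe: only entries the daemon itself wrote carry it, so the user's own keys are never touched.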
5 new unit tests in `spoolbuddy/tests/test_deploy_ssh_key.py` (creates-when-missing → mode-600 file with the current key; pile-up of stale keys → only the current key remains, no growth; preserves unrelated user keys → the user's own SSH access untouched; idempotent when in sync → no mtime change, so the heartbeat doesn't churn the file; swallows write errors → a readonly-fs PermissionError doesn't crash the heartbeat loop). 2 new backend integration tests in `test_spoolbuddy.py::TestDeviceEndpoints` — `test_heartbeat_returns_ssh_public_key` (the response carries the key on every heartbeat) and `test_heartbeat_ssh_key_failure_does_not_break_heartbeat` (a backend key-read failure leaves `ssh_public_key: None` but the heartbeat still 200s).

- External-camera frames returned as black on go2rtc and other MJPEG sources (#1177, reported by nkm8) —
`_capture_mjpeg_frame` returned the very first JPEG it found in the stream's bytes (`backend/app/services/external_camera.py:282`), but many MJPEG sources — go2rtc most notably, and several IP cameras — emit a "warm-up" frame on the bytes that follow connection accept: usually the last keyframe held in the encoder, which is often black or stale until the encoder catches up to live content. Subsequent frames on the same connection are fine. The reporter saw it across snapshot UX, finish photos in notifications, and timelapse — every code path that opens a fresh capture connection (snapshot endpoint, `[PHOTO-BG]` finish photo, plate-detection CV, Obico ML inference, layer timelapse, Settings → Test). His own observation that go2rtc's `/api/frame.jpeg` (single-frame, internally already warmed) is never black while the first frame off `/api/stream.mjpeg` is, matched the hypothesis exactly. Support-bundle evidence was clean: every black notification frame in his log was 11095 bytes (a pure-black 1280×720 JPEG encodes to ~10–15 KB on standard libjpeg quality settings), while every captured-after-warm-up frame from the same source was 30–45 KB. Fix: read past the first frame and return the second; if the connection closes / times out / hits the 5 MB buffer cap before a second frame ever arrives, fall back to the first so callers still get something (degrading slow / single-frame streams to None would regress every code path that relied on pre-fix behaviour). The inner loop now drains every complete frame already in the buffer before pulling the next chunk, so high-FPS sources that pack multiple frames per chunk are handled correctly. The `snapshot` / `rtsp` / `usb` capture paths and the live-view streaming endpoint (`generate_mjpeg_stream`) are untouched.
7 new regression tests in `test_external_camera.py::TestCaptureMjpegFrameWarmupSkip` cover (a) two frames in two chunks → second returned, (b) two frames in one chunk → second returned, (c) frame split across a chunk boundary → assembled correctly, (d) single-frame stream → first returned via fallback (no None regression), (e) timeout after the first frame → first returned via fallback, (f) zero-frame stream → None, (g) non-200 status → None. Latency penalty: at most one frame interval (typically 50 ms – 1 s on a steady stream). Follow-up: optional snapshot URL override — nkm8 retested on the daily build and saw the warm-up skip help most of the time, but the black-frame symptom still surfaced intermittently on his go2rtc setup, with the same workflow break (notification thumbnails black, snapshot UX black). His own bisect already pointed at the cleanest fix: go2rtc exposes `/api/frame.jpeg` as a dedicated single-frame endpoint that never returns the encoder's warm-up keyframe, while `/api/stream.mjpeg` always does on a fresh connection. New optional `external_camera_snapshot_url` column on `printers` (idempotent `ALTER TABLE` migration via `_safe_execute`, plumbed through `PrinterBase` / `PrinterUpdate` / `PrinterResponse` / `from_orm_with_roi` / TypeScript `Printer` + `PrinterCreate`); when set, every single-frame capture path (`/api/v1/printers/{id}/camera/snapshot`, `[SNAPSHOT]` notification thumbnails, `[PHOTO-BG]` finish photo, layer timelapse on every captured layer, Obico ML snapshot, plate-detect / calibrate-plate CV) routes through `_capture_snapshot()` on the override URL via plain HTTP GET, bypassing the warm-up-frame dance entirely. The override is camera-type-agnostic — set it once on the printer config and it applies regardless of whether the live stream is mjpeg / rtsp / usb.
Live-view (the `/camera/stream` and `/camera` endpoints powering the in-app viewer) deliberately stays on the configured stream URL — the override only changes single-frame captures, since a 1 fps poll-the-snapshot-endpoint live view would be a regression for everyone who doesn't have this problem. Settings UI (Settings → General → External Cameras) renders a new "Snapshot URL (optional)" input with its own Test button below the live-stream URL row; the input is hidden when `camera_type === 'snapshot'` since the live URL is already a single-frame endpoint and the override would be redundant. The SSRF guard on the override is the existing `_sanitize_camera_url("http", "https")` allowlist — link-local / metadata / blocked hosts return None instead of being fetched. An empty-string override is treated as unset (defence in depth — a stale config row that somehow has `""` rather than `NULL` still routes through the live stream rather than firing a GET against an empty URL). 5 new backend tests in `test_external_camera.py::TestSnapshotUrlOverride` (override routes to the snapshot path; no override → camera-type handler; empty string → camera-type handler; SSRF guard on a metadata-target override returns None; override is camera-type-agnostic across rtsp/usb). 3 new frontend tests in `SettingsPage.test.tsx` (input renders for mjpeg/rtsp/usb camera types; hidden for snapshot type; debounced PATCH carries `external_camera_snapshot_url` when the user types). i18n: `settings.cameraSnapshotUrl{,Placeholder,Help}` in en + de fully translated, the other 6 locales (fr/it/ja/pt-BR/zh-CN/zh-TW) seeded with English copy pending native translation. Documented under `bambuddy-wiki/docs/features/camera.md` with the go2rtc example URL as a tip block.

- MakerWorld sidebar entry visible to every user regardless of group permissions (#1175) — Backend already enforced
`makerworld:view` on every `/makerworld/*` route (`backend/app/api/routes/makerworld.py:145, 157, 242, 406`), the permission was correctly granted to the admin and standard-user role defaults (`permissions.py:298, 364, 454`), and the frontend `Permission` type union already included `'makerworld:view' | 'makerworld:import'` (`client.ts:2498`) — but the sidebar's hand-maintained `navPermissions` map in `Layout.tsx:278` had no entry for `makerworld`, so `isHidden('makerworld')` always returned false and the entry rendered for every authenticated user. Users without the permission saw the entry, clicked, and the page rendered while every API call inside it 403'd. Two-line fix: (1) `Layout.tsx:278` — add `makerworld: 'makerworld:view'` to the map, matching every other sidebar entry's gating shape; (2) `App.tsx:200` — wrap the route in `<PermissionRoute permission="makerworld:view">` for defence in depth, so a user who knows the URL can no longer reach the page directly (matches the existing pattern on `settings`, `groups/new`, and `groups/:id/edit` two lines below). 2 new Layout tests pin the contract: with auth enabled and a user lacking `makerworld:view`, the sidebar `<a href="/makerworld">` link is absent (other links like `/files` still render); with the permission granted, the link renders. - Printer Info modal: serial-number and IP-address copy buttons silently did nothing on plain-HTTP LAN deployments (#1174, reported by BurntOutHylian) —
`PrinterInfoModal`'s `CopyButton` only tried `navigator.clipboard.writeText()`, which is gated by the secure-context requirement (HTTPS or localhost). On the typical Bambuddy deployment shape — bare-IP HTTP on the LAN — `navigator.clipboard` is undefined; the existing `try/catch` swallowed the resulting `TypeError`, the icon never flipped to the tick, and nothing landed on the user's clipboard. Fixed by adding the same off-screen-textarea + `document.execCommand('copy')` fallback that `CameraTokensPage`'s plaintext-token modal already uses for plain-HTTP LAN deployments: gate on `navigator.clipboard && window.isSecureContext`, fall back to the legacy path otherwise, and surface the success tick only when the copy actually landed (return early without flipping `copied` if `execCommand('copy')` returns false). The `try/finally` around the textarea guarantees DOM cleanup even when the browser throws in a restricted context. 3 new component tests in `PrinterInfoModal.test.tsx` cover (a) the secure-context happy path uses `navigator.clipboard.writeText`, (b) the plain-HTTP fallback path actually invokes `execCommand('copy')` and leaves no leaked textarea in the DOM, (c) the `finally` cleanup removes the textarea even when `execCommand` throws synthetically. Thanks to BurntOutHylian for the precise file/line pointer in the report. - Queue auto-dispatched the next print onto a fouled bed after an aborted or cancelled print (#1171, reported by tom5677) — When a print ended with status
`aborted` (printer self-abort, or a user stopping the print on the printer's own touchscreen) or `cancelled` (user stopping the print via the Bambuddy queue UI), the plate-clear gate added in #961 was not raised — only `completed` and `failed` triggered it (`backend/app/main.py:2660`). Result: the queue scheduler dispatched the next pending item ~2 seconds after the abort, with the previous print's material still on the bed. The reporter saw two prints (P1P + P1S) auto-start onto fouled beds within seconds of each other after touchscreen aborts, and explicitly flagged the risk of damage to the printer; a third printer (his second P1S) behaved correctly because its previous print had ended `completed`. The original code's comment ("user-cancelled prints don't require a plate-clear ack — nothing printed on the bed") only holds if you cancel right at layer 1; cancelling a 12-hour print at hour 11 leaves a fouled bed too. Fix: the gate is now raised for every terminal status — `completed`, `failed`, `aborted`, `cancelled` — matching the safety contract that the user must acknowledge the bed is clear before any next queued print starts. The gate is user-clearable on the Printers page, so the worst case for a layer-1 cancel is clicking "Clear Plate" once. Touchscreen aborts are particularly important to gate because Bambuddy's "user stopped via UI" override (`_user_stopped_printers` → `aborted` mapped to `cancelled`) only fires when the user stops via the Bambuddy queue; a touchscreen stop reports `aborted` straight through. Regression coverage in `test_print_lifecycle.py::TestPlateClearGate`: parametrised across all four terminal statuses (asserts `set_awaiting_plate_clear(printer_id, True)` is called for each), plus a defence-in-depth test that an unrecognised future status string never silently raises the gate. - Printer card always shows the first plate's thumbnail when printing a multi-plate 3MF (#1166, reported by smandon) — On printers running firmware that drops the plate path from
`print.gcode_file` (the reporter's case: P1S 01.10.00.00, but the same shape appears on other firmware revisions), the printer reports `gcode_file: MyModel.3mf` instead of `gcode_file: /Metadata/plate_4.gcode`. The `/printers/{id}/cover` route's regex (`plate_(\d+)\.gcode`) found nothing in the bare `.3mf` filename, defaulted to plate 1, and the printer card showed `Metadata/plate_1.png` from the 3MF — even though the user dispatched plate 4. The same problem hit `current_plate_id` on the status response (the printer card detail row showed plate 1). Two-pronged fix on a precedence ladder: (1) Bambuddy now records the plate it dispatched — `start_print()` writes `(dispatched_plate_id, dispatched_subtask)` onto `PrinterState` at publish time, and a new `resolve_plate_id(state)` helper prefers that record over the gcode_file regex when `dispatched_subtask == state.subtask_name` (the subtask check rejects stale entries from a prior Bambuddy-dispatched print bleeding into a Studio-direct dispatch). (2) After the 3MF lands on disk, the cover route scans the zip for a unique `Metadata/plate_*.gcode` entry: per-plate archives sliced separately in Bambu Studio bundle thumbnails for every plate but only the active plate's gcode, so a single match unambiguously identifies the plate even when no Bambuddy dispatch exists (the Studio-direct flow). The final fallback is plate 1, unchanged. The cover-byte cache key was also simplified — `plate_num` was removed from the key now that resolution is late-bound; `clear_cover_cache()` already runs on every print start, so different plates of the same project always re-fetch a fresh thumbnail.
Coverage: 5 unit tests in `test_printer_manager.py::TestResolvePlateId` (dispatch precedence, stale-subtask guard, gcode regex fallback, default-1 path, missing-subtask guard), 4 unit tests in `test_bambu_mqtt.py::TestStartPrintRecordsDispatchedPlate` (dispatch record set/cleared/overwritten/skipped on disconnect), 2 integration tests in `test_printers_api.py` (dispatch wins over the plate-1 default; 3MF-scan fallback for a per-plate archive without a dispatch). Studio-direct multi-plate prints (no dispatch record AND multiple plate gcodes in the 3MF) still default to plate 1 — this matches the firmware's own ambiguity and is not regressed by this change. - AMS slot configuration intermittently fails to reach the printer after several configs in a row (#1164, reported by RosdasHH) — Configuring AMS slots a handful of times (the reporter saw it on almost every 6th change) would silently stop reaching the printer; ~1 minute later the filament colours on the printer would briefly jump between slots, then settle. Root cause was the zombie-session watchdog at
`bambu_mqtt.py:861`, introduced for #887. When an `ams_filament_setting` response took >10 s (normal under load — concurrent K-profile fetches, busy printer, network jitter), the watchdog incremented an `_ams_cmd_unanswered` counter and zeroed `_last_ams_cmd_time` so it wouldn't re-trigger on the next status push. The bug: the response handler that reset the counter was guarded by `and self._last_ams_cmd_time > 0` — so when the late response did arrive (after the watchdog had already zeroed the timer), the counter stayed armed at 1. The next slow response to any `ams_filament_setting` command — possibly minutes or hours later, on an entirely unrelated config attempt — would take the counter to 2 and trigger `force_reconnect_stale_session()`. The user-visible symptoms match exactly: configs stop landing (because MQTT reconnects mid-publish, dropping the in-flight command and surfacing as `Cannot set AMS filament setting: not connected` if the user retries during the ~1 min reconnect window), then the queued state finally lands when the reconnect completes (the "filament colours jumping around" the reporter described). The fix is to drop the `_last_ams_cmd_time > 0` guard: any `ams_filament_setting` response — late or not — proves the channel is alive, so the counter must reset. The watchdog still trips on a real zombie session (no responses at all for two consecutive >10 s windows). Regression test in `test_bambu_mqtt.py::TestZombieSessionDetection::test_late_response_after_watchdog_clears_counter_issue_1164` simulates the exact sequence (watchdog fires → late response arrives → second slow response on a fresh command) and asserts the counter resets to 0 on the late response and the second command doesn't tip the threshold to 2. The other 10 zombie-detection tests still pass unchanged.
Follow-up: cumulative session wedge after ~16-20 commands — the watchdog fix above heals real zombie sessions, but RosdasHH continued to see the wedge fire on healthy sessions after enough cumulative commands (configs + spool assignments share the same threshold: "8 + 3", "12 + 1", "16 + 0" all tripped it). His QoS=1 vs QoS=0 vs QoS=2 bisect was the breakthrough — the wedge only happens at QoS=1. paho-mqtt's default `max_inflight_messages` is 20, and Bambu's broker has racy PUBACK matching that leaves some inflight slots unreleased per session, so after ~16-20 cumulative commands the queue silently fills and `publish()` returns success while packets sit in paho's internal queue (`force_reconnect` heals it because the inflight queue is per-session — the printer had already processed every command, it just couldn't receive any new ones until the session reset). Lifted the ceiling to 1000 via `client.max_inflight_messages_set(1000)` immediately after `mqtt.Client()` construction (`bambu_mqtt.py:3074-3079`). This keeps QoS=1 untouched (the cross-model reliability we deliberately chose for AMS configuration — A1, P1S, X1C, H2D, P2S, X2D all need it) and removes the ceiling as the bottleneck without changing wire-protocol behaviour. The watchdog reconnect from the original fix above stays as defence in depth for sessions that go truly zombie. Diagnosis credit: RosdasHH's careful bisect.
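For reference, the change amounts to one call at client construction — a configuration sketch using paho-mqtt's standard client API, not a copy of the actual `bambu_mqtt.py` setup:

```python
import paho.mqtt.client as mqtt

client = mqtt.Client()
# paho's default max_inflight_messages is 20; with QoS=1 and a broker whose
# racy PUBACK matching leaks inflight slots, ~16-20 cumulative commands fill
# the window and publish() starts queueing silently. Lift the ceiling so the
# inflight window is never the bottleneck:
client.max_inflight_messages_set(1000)
```

Note that `max_inflight_messages_set` must be called before any QoS>0 publishes for the lifted window to apply from the first command; calling it immediately after construction, as described above, guarantees that.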