**Note:** This is a daily beta build (2026-05-12). It contains the latest fixes and improvements but may have undiscovered issues.
Docker users: update by pulling the new image:

```shell
docker pull ghcr.io/maziggy/bambuddy:daily
# or
docker pull maziggy/bambuddy:daily
```
**Tip:** Use [Watchtower](https://containrrr.dev/watchtower/) to automatically update when new daily builds are pushed.
### Changed
- Page headers unified across the app: consistent icon size, placement, and subtitle styling (PR #1272 by EdwardChamberlain, continuation of #1060 / #1203) — Nine pages (Archives, FileManager, Inventory, Maintenance, MakerWorld, Profiles, Projects, Settings, Stats) now share one header pattern:
a `w-7 h-7` bambu-green icon next to a `text-2xl font-bold` title with a `text-bambu-gray mt-1` subtitle underneath, matching the look that landed earlier on Print Queue and Printers. FileManager and Projects dropped their rounded `bg-bambu-green/10 rounded-xl p-2.5` icon tile in favor of the plain icon to match the rest. The sidebar's "Queue" nav item is renamed to "Print Queue" (and its icon switched from `Calendar` to `ListOrdered`) to match the page header it leads to. The Stats page title is renamed `Dashboard` → `Statistics` to match the sidebar nav label that's been pointing at it (the page never was the printer dashboard — Printers is — and the mismatch confused new users; closes a small but recurring source of "where's the dashboard?" support questions). All renames flow through every locale: en/de/fr/it/ja/pt-BR/zh-CN/zh-TW updated for `nav.queue` and `stats.title`, plus a new `inventory.subtitle` key ("Manage your spools" + translations) used by the inventory header. Bonus on top of the stated scope: `inventory.toolbar.{filters, view, actions}` were untranslated English strings in fr/it/ja/pt-BR/zh-CN/zh-TW — Edward translated them properly in the same pass. `StatsPage.test.tsx` updated to assert the new "Statistics" title. Build clean, all 35 page tests still pass, i18n parity holds at 4753 leaves across all 8 locales. Maintenance page subtitle keeps its red / amber / green severity color on the "X items due · Y warnings · all up to date" line — the colors carry actual at-a-glance status information, not just visual weight.
- Bambuddy now identifies honestly as itself on every outbound request to Bambu Lab / MakerWorld / Bambu Wiki — proactive alignment with Bambu Lab's 2026-05-12 statement on cloud access, which draws a clear line between modifying AGPL code (allowed) and "impersonating official clients in communication with our cloud infrastructure" (not allowed). Bambuddy was already on the right side of that line on the main authenticated cloud path (
`User-Agent: Bambuddy/1.0` in `bambu_cloud.py:_get_headers`), but three secondary call sites were sending browser User-Agents — originally added under the assumption Cloudflare's WAF would block non-browser identification. Tested on 2026-05-12 with `curl -H "User-Agent: Bambuddy/1.0"` against all three: `https://bambulab.com/api/sign-in/tfa` returned HTTP 400 with the expected application-level `{"code":5,"error":"Login failed"}` JSON (no Cloudflare interstitial), `https://api.bambulab.com/v1/iot-service/api/slicer/setting` returned HTTP 200 with the full 576 KB settings response, `https://makerworld.com/api/v1/design-service/*` returned the same response shape as a Firefox UA, and `https://wiki.bambulab.com/*` served identical HTML to a Chrome UA. The browser impersonation was unnecessary. All four call sites now send `Bambuddy/1.0 (+https://github.com/maziggy/bambuddy)` consistently — the URL in parens makes the source unambiguous so Bambu can distinguish our traffic from impersonators if they ever audit it. Files: `bambu_cloud.py` (TOTP/TFA path no longer spoofs Chrome UA + Origin + Referer + Accept-Language headers — Origin/Referer were spoofing the `bambulab.com` origin, which the new comment block specifically calls out as removed), `makerworld.py` (Firefox UA replaced; the Referer header is kept because MakerWorld's CSRF / origin-check middleware uses it on some endpoints, which is functional, not identity-faking), `firmware_check.py` (Chrome UA on the public wiki scraper replaced — the wiki has no special handling for our UA). Separately: the `/v1/iot-service/api/slicer/setting` endpoint requires a `version` query parameter in Bambu Studio's XX.YY.ZZ.WW format (the API returns HTTP 400 "field 'version' is not set" without it, and HTTP 422 "Invalid input parameters" for non-matching formats like `bambuddy-1.0`), but Bambu's server accepts ANY value within that format — verified the same 576 KB response with `version=99.99.99.99`. The previous default `"02.04.00.70"` is an actual Bambu Studio release version (2.4.0.70).
The default is now `"1.0.0.0"` (held in a new `_SLICER_API_VERSION` module constant in `bambu_cloud.py` and re-exported into `routes/cloud.py` so the two route defaults stay in sync), which satisfies the format requirement without claiming to be a specific Bambu Studio build. Unchanged on purpose: the `version="2.0.0.0"` parameters in `create_setting`/`update_setting` payloads are the preset's format version (extracted from `current.get("version", "2.0.0.0")` for updates, line 443) — they describe the preset schema, not the client, and stay as-is. Two regression tests rewritten to lock in the new behavior: `test_verify_totp_uses_honest_bambuddy_user_agent` (was `test_verify_totp_includes_browser_headers` — asserts the UA starts with `Bambuddy/`, asserts `Mozilla`/`Chrome`/`Origin`/`Referer` are not present) and `test_sends_honest_bambuddy_user_agent` (was `test_sends_browser_like_headers` — same shape, plus continues to assert the deprecated `x-bbl-*` Bambu-app identification headers are still gone). All 4598 backend tests pass.
- Spoolman weight tracking now uses per-print grams for all spools, matching the internal Filament Inventory (#1119, reported by Moskito99) — Spoolman previously had two mutually exclusive weight paths: AMS remain% × `tray_weight` auto-sync (default; only worked for Bambu Lab spools with valid RFID `tray_weight`) and per-print 3MF-grams tracking (only enabled when "Disable AMS Weight Sync" was toggled on). Non-BL spools without RFID fell through both paths — AMS auto-sync had no `tray_weight` to multiply, and the `inventory_remaining` fallback was wiped because activating Spoolman deletes the internal
`spool_assignment` table — so Spoolman never saw a weight update for them. The internal Filament Inventory has no such gap: it always uses per-print 3MF grams as the primary path with the AMS-remain% delta as fallback, and it works for every spool type. Spoolman now does the same: per-print tracking runs whenever Spoolman is enabled and is the only writer of `remaining_weight`. AMS auto-sync continues to maintain spool metadata and slot assignments but no longer touches weight (eliminating the double-count that would otherwise occur for BL spools with both paths active). `store_print_data` (`spoolman_tracking.py:159`) had its `disable_weight_sync` early-return removed; the three `sync_ams_tray` call sites (`main.py:1450` auto-sync, `spoolman.py:318` per-printer manual, `spoolman.py:517` sync-all) now hard-code `disable_weight_sync=True`. The `spoolman_disable_weight_sync` setting is now deprecated and a no-op — kept in the DB/UI for backwards compat. Behavioral consequence for existing users on the default flag (False): live AMS-based `remaining_weight` updates between prints stop happening; weight updates now arrive once per print completion with 3MF gram precision. Regression test in `test_spoolman_tracking.py::test_stores_tracking_when_disable_weight_sync_is_false` proves the early-return is gone.
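The single-writer weight model described above can be sketched roughly as follows (a minimal illustration, not Bambuddy's actual code; function and field names are assumptions):

```python
# Sketch of the single-writer model: the per-print path is the only thing
# that mutates remaining_weight; AMS sync only touches metadata and slots.
# All names here are illustrative, not Bambuddy's real API.

def apply_print_usage(remaining_weight_g: float, grams_used: float) -> float:
    """Per-print path: subtract the sliced 3MF grams, clamped at zero."""
    return max(0.0, remaining_weight_g - grams_used)

def sync_ams_tray(spool: dict, tray: dict, disable_weight_sync: bool = True) -> dict:
    """AMS path: update metadata and slot assignment; weight is left alone."""
    updated = {
        **spool,
        "slot": tray.get("slot"),
        "material": tray.get("material", spool.get("material")),
    }
    if not disable_weight_sync:
        # Legacy remain% x tray_weight path: now never taken, since every
        # call site hard-codes disable_weight_sync=True.
        updated["remaining_weight"] = tray["remain_pct"] / 100 * tray["tray_weight"]
    return updated
```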
### Fixed
- External-spool prints no longer credit usage to AMS slot 0's Spoolman spool (#1276, reported and diagnosed by ojimpo — regression of #853) — On a single-filament external-spool print (TPU loaded in
`vir_slot id=254` on the reporter's H2S + AMS 2 Pro), `_resolve_global_tray_id` in `spoolman_tracking.py` was crediting the usage to whatever Spoolman spool happened to be linked to AMS slot 0 — a completely unrelated material in the reporter's case. ~48.94 g of TPU was credited to a PLA spool across 4 prints before they noticed. Root cause: BambuStudio encodes virtual tray IDs (254/255) as `-1` in the flat `ams_mapping` array it sends to the printer (a convention already documented in `bambu_mqtt.py:start_print()`), but the Spoolman tracking helper was treating `-1` as "unmapped → use position-based default", and the default mapped `slot_id=1` → `global_tray_id=0`. When `slot_to_tray[slot_id-1] == -1` and `ams_trays` contains an external slot (254 or 255), the helper now returns the external tray ID directly, matching the convention `start_print()` uses on the other side of the pipeline. Prefers 254 over 255 (consistent with single-nozzle `tray_now` reporting and the `vir_slot` id=255→254 remap in `bambu_mqtt.py:864`). Legacy behavior preserved when `ams_trays` is empty or contains no external slot (callers that don't pass `ams_trays` keep the position-based fallback). Two regression tests cover the reporter's exact scenario (`ams_trays={0,1,2,3,254}`, `slot_to_tray=[-1]` → 254) plus the H2D-deputy case and the fall-through-when-no-external case. Root cause investigation and patch by ojimpo.
- Virtual-printer queue mode now honors workflow default print options (#1235, reported by jc21, root cause and patch by jc21 in #1277) — Prints sent from Bambu Studio (or any slicer) to a VP in
`print_queue` mode arrived in the queue with `bed_levelling`, `flow_cali`, `vibration_cali`, `layer_inspect`, and `timelapse` set to the SQLAlchemy column-level defaults, never the user's workflow preferences. The reporter happened to have every workflow default set to the opposite of the column defaults, so prints appeared to have all five options inverted; every queue item required hand-editing before dispatch. The manual `POST /print-queue/` endpoint reads these fields off the request body (the frontend pulls them from settings before submitting), but the VP-FTP-receive path at `backend/app/services/virtual_printer/manager.py:_add_to_print_queue` constructed `PrintQueueItem` without touching them at all — SQLAlchemy then filled in `bed_levelling=True, flow_cali=False, vibration_cali=True, layer_inspect=False, timelapse=False` regardless of what was in the DB. Fix reads `default_bed_levelling`/`default_flow_cali`/`default_vibration_cali`/`default_layer_inspect`/`default_timelapse` via the existing `get_setting()` helper (same pattern already used in the function for `virtual_printer_archive_name_source`) and passes them explicitly to `PrintQueueItem`. A small `_bool_setting()` helper maps `None` → AppSettings schema default, so a fresh install with no workflow-page customization behaves identically to before. Regression tests: `test_add_to_print_queue_uses_workflow_defaults_from_settings` (verifies all five settings flow through with values opposite to the column defaults, matching the reporter's exact scenario) and `test_add_to_print_queue_falls_back_to_schema_defaults_when_unset` (verifies the no-DB-row path).
- Linking a Spoolman spool to an AMS-HT slot no longer fails with a CHECK constraint error (#1274, reported by guillaume.houba) — On H2C / H2D, AMS-HT units report
`ams_id` 128+ (one ams_id per unit, single tray). The `spoolman_slot_assignments` table's `ck_ams_id_range` constraint only allowed 0-7 (standard AMS) or 255 (external), so the upsert on `POST /spoolman/inventory/slot-assignments` blew up with `IntegrityError: CHECK constraint failed: ck_ams_id_range` and the user had no way to link any spool to an AMS-HT slot. Widened the constraint formula to `(ams_id >= 0 AND ams_id <= 7) OR (ams_id >= 128 AND ams_id <= 191) OR ams_id = 255` — matches the value range the internal `spool_assignment` table already accepts and leaves room for up to 64 AMS-HT units (the existing `bambu_mqtt`/usage-tracker code uses the same 128-based addressing). Updated in the ORM model (`models/spoolman_slot_assignment.py`) and both the SQLite/Postgres `CREATE TABLE` DDL in `core/database.py`. New idempotent migration `_migrate_widen_spoolman_slot_ams_id_range`: the Postgres path runs `DROP CONSTRAINT IF EXISTS` + `ADD CONSTRAINT` (no data risk — the new formula is strictly wider than the old); the SQLite path detects the stale formula in `sqlite_master`, table-rebuilds via the standard `_v2` rename pattern used elsewhere in this file (`_migrate_update_auto_link_constraint` at `database.py:418`), and leaves pre-constraint legacy tables untouched. Tests: `test_ams_id_check_admits_ams_ht_range` (ORM + DDL formula) and `test_assign_accepts_ams_ht_id` (end-to-end `POST /slot-assignments` with `ams_id=128`).
- X2D live camera stream no longer cut by Obico polling / snapshot capture (#1271, reported by clabeuhtegrite) — The MJPEG fan-out broadcaster from #1089 lets multiple browser viewers share one upstream RTSP socket per printer, but internal callers (Obico AI polling at the user's configured
`obico_poll_interval`, and the manual `/camera/snapshot` endpoint) still opened their own fresh RTSP connections. X1C / H2D / P2S firmware tolerates brief concurrent camera sockets, so the gap was invisible there. X2D firmware `01.01.00.00` (and likely future firmwares) enforces strict single-camera-connection more aggressively: every Obico poll (default every 5 s) kicked the live stream, the broadcaster paid the multi-second RTSP handshake to reconnect, and the user saw the stream cut "all the time." New helper `try_get_active_buffered_frame(printer_id)` at `api/routes/camera.py:74` returns the broadcaster's last buffered frame (always <1 s old while any viewer is connected) and `None` when no viewer is active. Obico's `_capture_frame` and the `/camera/snapshot` endpoint check it first and only fall through to a fresh socket when no stream is running — preserving today's behavior when nobody is watching. `plate_detection` and `layer_timelapse` are deliberately not converted: plate detection needs guaranteed-fresh frames post-print (false-positive risk if the user already grabbed the print in the same second), and layer timelapse is for external cameras only. Regression tests: `test_camera_snapshot_reuses_buffered_frame_when_stream_active` and two `TestCaptureFrameSharesBroadcasterUpstream` Obico tests.
- Usage tracker: spool swaps in UNUSED slots mid-print no longer charge the old spool (#1269, reported by maugsburger) — Path 2 of the usage tracker (AMS remain% delta fallback) iterated every AMS tray that had a remain% delta, even slots the print never touched. When a user swapped spools in an unrelated slot during a print, the new spool reports
`remain=0` (no RFID tag yet) while the snapshot from print-start was 100%, so the fallback charged the originally-assigned spool the full 1000 g. Reporter's case: single-filament print on AMS0-T3 (`ams_mapping=[3]`), swapped a spool in T1 and another in T2 to refill while the print continued — wound up with `Spool 27 consumed 1000.0g (100%) on printer 1 AMS0-T1` and `Spool 24 consumed 170.0g (17%) on printer 1 AMS0-T2`, neither of which was ever in the print. Fix: the fallback now builds `print_used_keys` from `session.ams_mapping`, `state.tray_change_log`, and `session.tray_now_at_start` (the three runtime signals telling us which trays were actually part of the print), converts each global tray ID to `(ams_id, tray_id)` using the standard convention (254/255 → external, ≥128 → AMS-HT, otherwise `id // 4, id % 4`), and skips the fallback for trays whose key is not in that set. When all three signals are empty (legacy edge case: no slicer push, no MQTT tray-change events, no `tray_now` at start) the legacy "scan every tray" behavior is preserved so we don't regress prints with no metadata. Regression test in `test_usage_tracker.py::test_skips_fallback_for_trays_outside_print_mapping` reproduces the reporter's exact scenario.
- Printer card: smart-plug live wattage now rounded to whole watts (#1266, reported by Carter3DP) — The printer card's smart-plug status badge rendered
`plugStatus.energy.power` raw, so plugs that report fractional watts (Kauf PLF12 via ESPHome / Home Assistant in the reporter's case, but any MQTT plug pushing a float can hit this) showed values like `14.123456789012` W and overflowed the card width. `SmartPlugCard` and `SwitchbarPopover` already wrapped the same field in `Math.round()`; only the printer-card badge was missing the round. Single-line fix at `frontend/src/pages/PrintersPage.tsx:4569`.
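The global-tray-ID to `(ams_id, tray_id)` convention the usage-tracker fix above relies on can be sketched as a small pure helper (names are illustrative, not Bambuddy's actual helper):

```python
# Sketch of the standard global-tray-ID convention from the usage-tracker
# fix: 254/255 address the external spool, IDs >= 128 address AMS-HT units
# (one tray per unit), and everything else packs four trays per standard AMS.
# The function name and the EXTERNAL_AMS_ID sentinel are assumptions.

EXTERNAL_AMS_ID = 255

def global_tray_to_key(global_tray_id: int) -> tuple[int, int]:
    if global_tray_id in (254, 255):
        return (EXTERNAL_AMS_ID, 0)            # external spool
    if global_tray_id >= 128:
        return (global_tray_id, 0)             # AMS-HT: single tray per unit
    return (global_tray_id // 4, global_tray_id % 4)  # standard AMS
```

With this key shape, the fallback can skip any tray whose `(ams_id, tray_id)` is absent from the set built from `ams_mapping`, the tray-change log, and `tray_now_at_start`.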
### Added
- Build-plate icon on archive cards + uniform printer/model line (#1253, reported by tonygauderman) — Archive cards now show an OrcaSlicer-style bed icon in the printer/model row indicating which build plate the print was sliced for (Cool / Cool SuperTack / Engineering / High Temp / Textured PEI / Smooth PEI), with the full plate name in the hover tooltip. Closes the gap where users had to remember which plate matched a re-print or open the source 3MF in a slicer just to read the bed setting. Card row also unified: archives with a real Bambuddy-printer association used to render as
`H2D-1 GCODE …` while slicer-only uploads rendered as `Sliced for X1C GCODE …` — same line, two different shapes. Dropped the `Sliced for` prefix so both render as a uniform `<name-or-model> [bed-icon] GCODE <hash>` row, scanning the same regardless of provenance. Backend: new `bed_type` column on `print_archives` (idempotent `ALTER TABLE` migration; SQLite + Postgres safe), populated from `curr_bed_type` in `Metadata/slice_info.config` (per-plate metadata, the authoritative source — that's the bed type that actually got sent to the printer for the exported plate) with a fallback to `Metadata/project_settings.config`'s top-level `curr_bed_type` for older 3MF shapes. Wired through both code paths that produce archive responses: `archive_to_response()` (the hand-rolled dict converter at `archives.py:97` — easy to miss, since a schema-only change is silently dropped by Pydantic because the route bypasses `from_attributes`) and the `/rescan` endpoint, so old archives can be re-parsed by the user via the existing per-archive Rescan button. Newly ingested archives get the value automatically. Backfill script: `scripts/backfill_archive_bed_type.py` (with `--dry-run`) re-opens every NULL archive's 3MF on disk and populates the column — opt-in for users who want their entire history covered without waiting for natural turnover. It auto-loads `.env` from the project root before importing backend modules (since `core/config.py:52` reads `DATABASE_URL` from `os.environ` at import time, not from `pydantic-settings` at `Settings()` time), prints the resolved DB URL with credentials redacted on every run so operators can confirm they're hitting the intended database (Postgres / SQLite — Bambuddy supports both per #1219's `DATABASE_URL` pathway), and calls `init_db()` itself before querying so the migration applies even if the script is run against a database the backend hasn't touched yet.
Frontend: 6 OrcaSlicer-style PNGs ship in `frontend/public/img/bed/` (under `/img/` because that path was already statically mounted at `main.py:5244`; the `/bed-icons/` top-level path attempted first hit the SPA catch-all and returned `index.html` as `text/html`, which the browser then rendered as nothing). New `utils/bedType.ts` maps slicer strings (case-insensitive) to icon + human-readable label; covers Bambu Studio's and OrcaSlicer's diverging spellings for the same physical plate (e.g. `Cool Plate` ↔ `PC Plate`, `Cool Plate (SuperTack)` ↔ `Supertack Plate` ↔ `Bambu Cool Plate SuperTack`). Renders on both card-grid view and list view in `ArchivesPage.tsx`. An unmapped or NULL `bed_type` simply omits the icon, so cards stay clean for archives created before this change. Note on icon mapping: `bed_pei.png` → Textured PEI, `bed_pei_cool.png` → Smooth PEI is a best guess from the OrcaSlicer asset names — swap the two paths in `bedType.ts` if a future user reports the icons reversed for their plate.
- Spool labels: new 40×30 mm template, hex colour code, bolder brand line (#809 follow-up, requested by oliboehm) — Three small enhancements to the spool-label printer rolled into one change. (1) New
`box_40x30` template — 40×30 mm single label, a common DK/Brother roll size. Added to `_SINGLE_LABEL_SIZES_MM` in `backend/app/services/label_renderer.py` and to the request body's `Literal[...]` enum in `backend/app/api/routes/labels.py`; the height is ≥ 20 mm so it routes through the existing roomy layout (swatch + QR + full text column). (2) Colour hex code on every label — a new `_hex_code_label()` helper formats `data.rgba` as `#RRGGBB` (alpha-stripped, uppercased to match the inventory UI's colour-picker convention) and returns `""` for missing/malformed input so the caller skips drawing instead of throwing. Rendered as a small line under the material/subtype line in the roomy layout, and as a third line above the spool ID in the tight (AMS) layout — useful when several near-identical material/colour spools sit next to each other in the AMS or on a shelf. (3) Brand line bigger + bold — the brand on every label now renders in `Helvetica-Bold` instead of regular `Helvetica`, with size bumped 5.5 pt → 6.5 pt on the tight layout and 7 pt → 8 pt on the roomy layout, so it's the most legible non-ID field at arm's length. Wiring: the `SpoolLabelTemplate` union in `frontend/src/api/client.ts` extended with `'box_40x30'`; `LabelTemplatePickerModal` gets a new `TEMPLATE_OPTIONS` entry for it; `inventory.labels.templates.box40x30.{label,hint}` keys added across all 8 locales (en + de fully translated, fr/it/ja/pt-BR/zh-CN/zh-TW translated to native, with the existing per-key fallback in the modal as a safety net). The 5-template grid still wraps to 2 columns on small viewports per #1230's fix; the modal regression test was widened from 4 to 5 template buttons.
Tests: the `ALL_TEMPLATES` parametrize tuple in `test_label_renderer.py` extended with `box_40x30` so all 7 generic invariants (PDF header, empty-input, multi-colour, missing-fields, malformed-rgba, long strings, sheet pagination) cover the new template; new `test_hex_color_code_rendered_when_rgba_set` (asserts `#F5E6D3` appears in the uncompressed PDF for both 40×30 and 62×29), `test_hex_color_code_skipped_when_rgba_invalid` (regex pin: no `#RRGGBB` shape on the label when rgba is malformed, except the spool ID's `#42`), and `test_brand_rendered_in_bold_per_809_followup` (asserts the `Helvetica-Bold` font reference is in the PDF — catches a regression if the brand line ever reverts to regular weight). All 33 backend tests + 15 frontend modal tests pass; ruff clean.
- Copy spool — duplicate any spool's settings into a fresh inventory row in two clicks (#1234, PR #1246 by MiguelAngelLV) — Adds a copy button (
`Copy` icon) next to the existing edit button on every spool in the inventory page across all three views (table row, card, grouped table inner row). Clicking it opens the existing `SpoolFormModal` pre-filled with every field from the source spool — material, brand, color, slicer preset, label/core/cost, K-profiles, all of it — except `weight_used`, which is reset to 0 (since the new spool starts full), and the RFID identity fields (`tag_uid`, `tray_uuid`, `tag_type`, `data_origin`), which aren't part of the form payload anyway, so the new spool is its own physical roll. Save calls `api.createSpool` (or `api.createSpoolmanInventorySpool` in Spoolman mode — both inherit the dispatch routing for free). Closes the long-running gap where users with many near-identical spools (e.g. five 1 kg PETG-CF rolls bought in a single order) had to re-enter every field from scratch on each one. Implementation shape: `SpoolFormModalProps.mode: 'create' | 'edit' | 'copy'` (exported as `SpoolFormMode`) replaces the previous `isEditing = !!spool` heuristic — every existing call site in `InventoryPage.tsx` was updated to pass the explicit mode, and the modal's title / submit-button label / weight-reset gate / submit-route branching all key on `mode` directly. The `onCopy` callback is optional on `SpoolCard`, `SpoolTableRow`, and `SpoolTableGroup` (matches the existing `onPrintLabel?` pattern), so the button is conditionally rendered and other consumers of those subcomponents don't get a copy affordance forced on them. Card-view and table-row buttons stop click propagation so clicking copy doesn't also fire the parent row's edit handler. Quick Add interaction: the Quick Add toggle is gated on `mode === 'create'` (was `!isEditing`), so it stays out of copy mode — otherwise a user could enable Quick Add and bump the quantity to N under the singular "Copy Spool" title and silently bulk-create N copies via `bulkCreateMutation`. i18n: new `inventory.copySpool` key across all 8 locales (en + de translated, fr/it/ja/pt-BR/zh-CN/zh-TW seeded with English fallback per project flow).
Tests: 3 new in `SpoolFormModal.test.tsx` (`SpoolFormModal copy mode` describe block — title shows "Copy Spool", save calls `createSpool` not `updateSpool`, `weight_used` reset to 0 in the create payload when copying a spool with non-zero usage), 2 new in `InventoryPageCopyButton.test.tsx` (table-row copy button click → "Copy Spool" heading, cards-view copy button click → same heading after switching view modes) — guards against the three call sites drifting apart. Existing `SpoolFormBulk.test.tsx` and `SpoolFormModal.test.tsx` renders that omitted the `mode` prop were updated with an explicit `mode="create"` so the tightened Quick Add gate doesn't hide the toggle from them. Both `InventoryPageCopyButton.test.tsx` and `InventoryPageDeepLink.test.tsx` gained MSW handlers for the modal's open-time fetches (`/api/v1/cloud/status`, `/api/v1/cloud/local-presets`, `/api/v1/cloud/builtin-filaments`, `/api/v1/inventory/color-catalog`, `/api/v1/inventory/spool-catalog`, `/api/v1/printers/`) — without them MSW passes through to the real network, gets ECONNREFUSED, and the rejected fetch resolves after the test environment is torn down, surfacing as a flaky "window is not defined" unhandled rejection in the modal's `setLoadingCloudPresets(false)` finally block (a pre-existing flake hit in ~1 in 3 full-suite runs at PR head).
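A rough sketch of the copy-mode payload shaping described above (the real logic lives in the TypeScript modal; the field names follow the changelog, and the helper itself is hypothetical):

```python
# Hypothetical sketch of copy mode: every field carries over from the source
# spool except usage (reset to 0, the new spool starts full) and the RFID
# identity fields, so the copy represents its own physical roll.

RFID_IDENTITY_FIELDS = {"tag_uid", "tray_uuid", "tag_type", "data_origin"}

def build_copy_payload(source_spool: dict) -> dict:
    payload = {
        k: v
        for k, v in source_spool.items()
        if k not in RFID_IDENTITY_FIELDS and k != "id"  # drop DB identity too
    }
    payload["weight_used"] = 0  # new spool starts full
    return payload
```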
### Fixed
- Archives page didn't auto-refresh when a slicer sent a print to a Virtual Printer — the new card only appeared after switching tabs (#1282, reported by kleinwareio) — Real-printer prints broadcast
`archive_created` over the WebSocket from `main.py`'s MQTT `print_start` handler, and the Archives page listens for that event in `frontend/src/hooks/useWebSocket.ts:241` to invalidate its react-query cache. The VP file-receive paths in `backend/app/services/virtual_printer/manager.py` (`_archive_file` for immediate mode and `_add_to_print_queue` for queue mode) created the archive and committed it to the DB but never broadcast the event — so the page stayed stale until the user clicked another tab and back, which triggered a refetch on focus. Fix: factored a small `_broadcast_archive_created(archive)` helper onto `VirtualPrinterInstance` that imports `ws_manager` lazily (matches the file's existing late-import convention for archive/queue imports) and emits the same `{id, printer_id, filename, print_name, status}` payload shape `main.py` uses. Called from both VP paths immediately after the archive is logged (`_archive_file`) and after the queue item is committed (`_add_to_print_queue`). Broadcast failures are swallowed at debug level so a transient WebSocket issue can't break the file-receive flow. The review mode path (`_queue_file`) is intentionally untouched — it creates a `PendingUpload`, not a `PrintArchive`, and renders on a different page. Tests: `test_archive_file_broadcasts_archive_created` and `test_add_to_print_queue_broadcasts_archive_created` patch `ws_manager.send_archive_created` and assert it's called once with the right payload shape. Affects: every Bambuddy install using a VP in `immediate` or `print_queue` mode; review mode and proxy mode are unaffected.
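The broadcast helper's shape, as described above, might look roughly like this (a sketch: the import path and exact signature are assumptions, and the real helper is an instance method on `VirtualPrinterInstance`):

```python
# Sketch of the lazy-import broadcast helper: emit archive_created after a
# VP receives a file, and swallow failures at debug level so a transient
# WebSocket issue can't break the file-receive flow. The import path below
# is invented for illustration and will simply fail (and be swallowed)
# outside the real app.
import logging

logger = logging.getLogger(__name__)

def broadcast_archive_created(archive) -> None:
    try:
        from app.api.websocket import ws_manager  # hypothetical path, resolved lazily
        ws_manager.send_archive_created({
            "id": archive.id,
            "printer_id": archive.printer_id,
            "filename": archive.filename,
            "print_name": archive.print_name,
            "status": archive.status,
        })
    except Exception as exc:
        logger.debug("archive_created broadcast failed: %s", exc)
```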
- Virtual Printer wedged the slicer at "Downloading...(0%)" when a user clicked Print (instead of Send) against a non-proxy-mode VP, and blocked the next dispatch with "The printer is busy with another print job" (#1280, reported by kleinwareio) — Bambuddy's VP supports two distinct dispatch flows from the slicer: Send (file upload only — the path queue / immediate / review modes are designed for) and Print (file upload + start-print, intended for proxy mode where there's a real printer behind the VP). The reporter's setup was queue mode but they clicked Print, which is unsupported there. The user-facing symptom was wedging instead of a clean error: the FTP upload completed, the file landed in Bambuddy's queue, but Orca's UI froze at
`Downloading...(0%)` and the next attempt was blocked. Cause: the VP's simulated state machine, in `backend/app/services/virtual_printer/manager.py::on_file_received`, jumped `PREPARE → IDLE` directly after the FTP upload completed. The Send flow doesn't watch the post-upload state, so Send users never noticed. The Print flow watches the gcode_state cycle expecting `PREPARE → RUNNING → FINISH` and only releases its in-flight-job lock when it sees `FINISH` (or `FAILED`). Going `PREPARE → IDLE` looks to the Print-flow slicer like "printer abandoned my job without confirming completion" → the UI keeps the prior job pinned → the next dispatch is blocked. `gcode_file_prepare_percent` also stayed at `"0"` for the whole upload window, which is why Orca's "Downloading X%" progress bar never advanced. Fix: `on_file_received` now transitions `PREPARE → FINISH` with `prepare_percent="100"` and the just-completed filename. The VP's 1 Hz periodic status push (`mqtt_server.py:363`) broadcasts the new state to every connected slicer within a second, so Orca clears its lock and the next dispatch goes through. The transition is gated to `.3mf` uploads only — auxiliary uploads (printer-side `.gcode` blobs etc.) leave the visible state alone. This treats Print and Send identically in non-proxy modes — Print is now silently handled as "file received, treat as completed" instead of wedging the slicer. For Send this is a behavioral no-op, because Send doesn't watch the post-upload state. Tests: 2 new tests in `backend/tests/unit/services/test_virtual_printer.py` pin (1) the FINISH transition with the correct filename + `prepare_percent="100"`, and (2) the non-3MF guard. Affects every VP mode that isn't proxy (`immediate`, `print_queue`, `review`) on every slicer using the Print flow (BambuStudio + OrcaSlicer in LAN mode).
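The corrected transition can be sketched as a pure function over a status dict (illustrative only; the real state machine is more involved, and the field names follow the changelog text):

```python
# Sketch of the fixed post-upload transition: a completed .3mf upload now
# reports FINISH at 100% instead of dropping to IDLE, which is what the
# Print-flow slicer needs to release its in-flight-job lock.

def on_file_received(state: dict, filename: str) -> dict:
    # Only a .3mf upload represents a dispatched job; auxiliary uploads
    # (printer-side .gcode blobs etc.) leave the visible state alone.
    if not filename.lower().endswith(".3mf"):
        return state
    return {
        **state,
        "gcode_state": "FINISH",              # was PREPARE -> IDLE, which wedged Print
        "gcode_file_prepare_percent": "100",  # was stuck at "0" for the whole upload
        "gcode_file": filename,               # the just-completed file
    }
```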
- External-spool filament selection silently rolled back: every "Generic PLA" / preset change for the external slot looked applied in the UI but failed on the printer, and the next print threw "no mapping" (#1279, reported by kleinwareio) — Repro: P1S, no AMS, vt_tray active. User picks any filament for the external slot via Bambuddy. The UI looked normal, but the printer's MQTT response was
`{"command":"ams_filament_setting", "result":"fail", "reason":"error string"}`. The companion `extrusion_cali_sel` command succeeded, so the K-profile stuck but the filament identity didn't — and the next print therefore had nothing to map to. Cause: `backend/app/services/bambu_mqtt.py::ams_set_filament_setting` encoded the single-external-spool case as `{ams_id: 255, tray_id: 0, slot_id: 0}`. The "LOCAL `tray_id = 0`" comment in the code was a misread of the printer's response shape (the printer echoes `tray_id: 0` as the slot-within-virtual-unit, not the slot index used in the request). Verification: captured a BambuStudio → X1C `ams_filament_setting` publish via a paho-mqtt subscriber on the same broker; when BambuStudio set the external slot to a PLA preset, the published REQ was `{ams_id: 255, tray_id: 254, slot_id: 0, tray_info_idx: "P4d64437", tray_color: "F72323FF", tray_type: "PLA", ...}` and the printer's REP returned `result: "success"`. The on-wire convention for `ams_filament_setting` on the external spool is therefore the global tray index (`tray_id: 254`), not a local slot number (`tray_id: 0`). Fix: `mqtt_tray_id = 254` for the single-external branch in both `ams_set_filament_setting` and `reset_ams_slot` (which shares the convention). The dual-external branch (H2D, `len(vt_tray) > 1`) was not in the captured exchange and is left at `mqtt_tray_id = 0` until a Studio → H2D capture confirms the correct value — a regression test pins the current dual-external encoding so any future change to that branch surfaces immediately. Affected printers: every printer whose MQTT push reports `vt_tray` as a single-element list — i.e. one external slot. That covers all single-nozzle Bambu printers (P1P, P1S, A1, A1 mini, X1C, X1E) plus dual-nozzle models that use a single external feed (X2D). Not affected by this change: H2D / H2C / H2S, which expose two external slots and go through a separate `len(vt_tray) > 1` branch.
That branch is preserved at its existing `mqtt_tray_id = 0` encoding because the captured exchange did not cover it; if the same misencoding turns out to affect dual-external too, a Studio → H2D capture will surface the right values and a follow-up patch will land. Known asymmetry not touched in this PR: the inline `ams_filament_setting` built by `_probe_developer_mode` (`bambu_mqtt.py:2971-2985`) still hardcodes `tray_id=0`. The probe is robust to this — its detection logic only matches `reason: "verify failed"`, so it correctly identifies dev-mode regardless of whether the command itself succeeds — but the two builders should be unified in a follow-up. Tests: 5 new tests in `backend/tests/unit/services/test_bambu_mqtt.py::TestAmsFilamentSettingExternalSpoolEncoding` pin the X1C/P1S/A1 single-external fix, `reset_ams_slot` symmetry, regular-AMS slot encoding unchanged, AMS-HT slot encoding unchanged, and the explicitly-unverified dual-external encoding (so any future change to the dual branch surfaces in diff review).
- Scan For Timelapse matched the wrong video when an older print's filename happened to land near a later archive's completion (#1278, reported by 1000Delta) — Repro: P2S in LAN-Only mode (no NTP, so the printer clock is drifted +8 h from UTC), two prints on the same day. Archive 1 correctly attached
`video_2026-05-08_09-41-29.mp4`. Archive 2 (started at 16:39:09 UTC, expected `video_2026-05-09_00-42-42.mp4`) reused Archive 1's video with a misleading `diff: 0:02:19`. Cause: `scan_timelapse`'s Strategy 2 matcher in `backend/app/api/routes/archives.py` had two compounding flaws. (1) It compared the filename timestamp against both `archive.started_at` and `archive.completed_at` with a 48 h tolerance — but the filename always represents the print's START time, never its end, so the end-time branch was a semantic mistake whose only effect was creating false positives. For Archive 2, the stale filename `09:41:29` shifted by hypothesis offset `-8h` → `17:41:29`, which happened to fall ~2 minutes before Archive 2's completion → the 2m19s "diff" won. (2) The matcher tried seven hypothesised offsets `[0, ±1, ±7, ±8]`, which densely covers a wide span of the day. Even with the end-time branch removed, the wrong video at offset `-7` lands at `16:41:29` → 2m20s from Archive 2's start, beating the correct video's 3m33s at offset `+8`. Fix: extracted Strategy 2 into a pure `_match_timelapse_by_timestamp(video_files, archive_start)` helper that (a) only compares against the print start time (end-time evidence is handled separately by Strategy 3 via file mtime, which actually does reflect when writing finished), and (b) requires the best (video, offset) pair to beat the next-best pair from a different video by at least 15 minutes. When the top two candidates from different videos are too close to call, the helper returns `None`, so the route surfaces the existing `available_files` list and the frontend's manual-selection dialog kicks in — which is the fallback the reporter explicitly asked for ("at a minimum, we should support that can fall back to letting the user manually select"). Wide offset support is preserved so EU / JST / AEST users (offsets +1, +7, +9, +10, etc.) still get auto-match when there's no ambiguity.
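The ambiguity rule can be sketched like this (a simplified illustration of the logic described above, not the actual helper — the signature is reduced to a name→timestamp map and the offset list is assumed from the entry):

```python
from datetime import datetime, timedelta

OFFSETS_H = [0, 1, -1, 7, -7, 8, -8]   # hypothesised printer-clock offsets
MARGIN = timedelta(minutes=15)          # best must beat next-best video by this

def match_timelapse_by_timestamp(video_times: dict, archive_start: datetime):
    """Return the best-matching filename, or None when the top two
    candidates from *different* videos are within the ambiguity margin."""
    candidates = []  # (absolute diff from print start, filename)
    for name, ts in video_times.items():
        for off in OFFSETS_H:
            candidates.append((abs(ts + timedelta(hours=off) - archive_start), name))
    candidates.sort(key=lambda c: c[0])
    best_diff, best_name = candidates[0]
    for diff, name in candidates[1:]:
        if name != best_name:               # first rival from another video
            if diff - best_diff < MARGIN:
                return None                 # too close to call -> manual pick
            break
    return best_name
```

With the #1278 numbers (start 16:39:09, stale video landing 2m20s away and the correct one 3m33s away), the two rivals are 1m13s apart — under the margin, so the helper refuses to auto-pick.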
Tests: 17 new tests in `backend/tests/unit/test_timelapse_match.py` pin the bug case (`test_issue_1278_archive2_refuses_to_auto_pick_ambiguous`, `test_issue_1278_archive1_still_matches_unambiguously`), the resolution path once the stale video is cleaned up (`test_archive2_resolves_when_stale_video_removed`), each of the 7 supported offsets via parametrize, and the supporting invariants (no `started_at` → `None`, non-timestamp filenames are skipped, same-video different-offset is not ambiguous, well-separated different videos still auto-pick). Known UX gap not in this PR: if the matcher auto-picks a wrong match, the user must delete the attached timelapse first before re-scanning — `scan_timelapse` short-circuits with `status: "exists"` when `timelapse_path` is already set. Adding a force-rescan or "wrong match, pick from candidates" affordance is a separate change. - Docker image: pip upgraded to >=26.1 to close CVE-2026-6357 (medium) — The
`python:3.13-slim-trixie` base image ships pip 26.0.1, which runs its self-update check after installing wheels. A hostile wheel that included a module named like a deferred stdlib import (`urllib`, `ssl`, …) could therefore hijack imports inside the just-finished install step. The exploit path is theoretical for Bambuddy itself — we don't install user-supplied wheels at runtime — but the vulnerable pip version still ships inside the image, GitHub code-scanning flagged it (alert #778), and any downstream user who `pip install`s into the running container inherits the issue. Fix: the Dockerfile now runs `pip install --upgrade 'pip>=26.1'` immediately before `pip install -r requirements.txt`, so the requirements install itself happens under the patched pip and the resulting `pip-*.dist-info/METADATA` that Trivy reads from the layer is the fixed version. No `requirements.txt` change — the floor is enforced at the image-build layer where the vulnerable copy lived. (The libexpat1 alert #795, also flagged by code-scanning, is a DoS-only XML attribute-collision CVE with no patched Debian trixie package yet — left open as a tracking signal; the next base-image rebuild after trixie ships libexpat 2.8.1 will close it automatically.) - Gitea backups silently failed after the first run; a Forgejo v15 token-scope quirk broke "Test Connection"; many failure paths surfaced cryptic one-word errors (#1224 reported by rtadams89, #1239 + PR #1255 by BurntOutHylian) — Two intertwined problem clusters on the Git-backup path, fixed as one PR. (1) Gitea backups quietly stopped after run #1. The Git backup service used GitHub's Git Data API (
`POST /git/blobs` → `/trees` → `/commits` → `PATCH /refs`) for every push. Gitea does not implement these write endpoints on modern versions, so every blob POST returned 404; the loop's continue-on-non-201 pattern left the change list empty and the route returned `{"status": "skipped"}` instead of committing — no toast, no log row, just "no changes" forever. The first run only worked because the empty-repo path already used the Contents API. Fix: `GiteaBackend.push_files` is overridden to use `POST /repos/{owner}/{repo}/contents` with a `files` array — every changed file is sent as `operation: "update"` (with its current blob SHA) or `operation: "create"`, the whole batch commits in a single round-trip, and no partial-commit failure mode is possible. `_create_branch_and_push` switched from the unimplemented `POST /git/refs` to `POST /branches` with `{new_branch_name, old_ref_name}`. (2) Forgejo v15+ returns 404 (not 403) for private repos when the token lacks repository scope, indistinguishable on the wire from "repo not found / token typo" — Test Connection's existing 404 branch said "Repository not found", which sent users chasing the wrong cause. Fix: a new `ForgejoBackend` (inherits `GiteaBackend`) overrides `test_connection` to GET `/user` first; 401 = bad token, 403 = zero-scope token ("read:user scope missing"), and a 404 on the subsequent `/repos/` call surfaces the v15-specific "private repo with scope mismatch" hint instead of the generic message. Hardening pass on the broader backup stack (B18–B26 review round): every `response.json()[...]` indexing in `github.py` (9 sites: ref/commit/blob/tree/commit/ref across `push_files` + `_create_branch_and_push` + `_create_initial_commit`) now routes through a new `base.py::_read_sha(response, *path)` helper that returns `(sha, error_reason)` — a malformed body no longer bubbles `KeyError('object')` through the catch-all to surface as the cryptic one-word string `"'object'"` in `last_backup_message`.
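The shape of such a defensive reader can be sketched as follows — a minimal illustration of the `(sha, error_reason)` idea described above, operating on a parsed JSON body rather than a response object, with the exact signature and messages assumed:

```python
def read_sha(payload: dict, *path):
    """Walk a parsed JSON body defensively instead of chained indexing.

    Returns (sha, None) on success, or (None, reason) when a key is
    missing or the leaf isn't a string -- so a malformed body produces
    a readable error instead of a bare KeyError('object').
    """
    node = payload
    for key in path:
        if not isinstance(node, dict) or key not in node:
            return None, f"missing {'.'.join(map(str, path))} in response body"
        node = node[key]
    if not isinstance(node, str):
        return None, f"unexpected type at {'.'.join(map(str, path))}"
    return node, None
```

The caller then checks the second element and routes it into the user-visible failure message instead of letting the exception text leak through a catch-all.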
Tree-fetch failures (GitHub side, mirroring the Gitea side) now return `failed` with status code + truncated body instead of letting `existing_files` silently stay empty (which forced every file to re-upload and produced a downstream 422 with no hint at the real cause). GitHub's `_create_branch_and_push` failure message includes the HTTP status code (an empty-body 422 now produces a diagnostic message instead of `"Failed to create branch: "`). Both backends detect `truncated: true` on the tree-listing response (GitHub's tree API truncates at >7 MB / >100k entries) and fail loudly, asking the operator to rotate the backup repo — previously a truncated listing made the SHA-equality dedup miss and silently re-uploaded every file each run. `test_connection` failure messages now include `str(e)[:200]` alongside the exception class name, so the UI surfaces `"Connection failed: ConnectError: certificate verify failed: hostname mismatch"` instead of just `"ConnectError"`. Gitea's 409-on-`/contents` message was softened from "stale blob SHAs" (one possible cause) to "the branch likely advanced concurrently (web-UI edit, another backup run, or path-vs-tree collision)". Every status-code branch in `github.py` and `gitea.py` mid-push now emits a `logger.warning` with owner/repo context (previously only the outer `except` logged, so a 403/404/422 left a DB row with no application-log entry). Recursive `push_files` re-entry after branch create now logs `"Re-entering push_files after branch create owner/repo -> branch"` at info level so replication-lag second-pass failures are debuggable. Tests: +17 new unit tests in `test_git_providers.py` covering the GitHub robustness paths (tree-fetch failure, truncated tree, malformed JSON for ref/commit/blob, 403/422 on `_create_branch_and_push`), the Gitea round-2 hardening (truncated tree, status codes in `get_current_commit` / `extract_tree_SHA` / `get_repo_info` failures, log marker emission), and the Forgejo connection-failure detail.
The existing suite grew from 86 → 103 tests, all passing; full backend suite + integration backup tests green; ruff clean. Tested by BurntOutHylian against Gitea 1.24.7 / 1.25.4 / 1.26.1 and Forgejo v11 / v15 LTS. Companion wiki update at maziggy/bambuddy-wiki#28. - Printer card's "Show on Printer Card" smart-plug button toggled power without confirmation (#1260, reported by thkl) — Smart plugs with the "Show on Printer Card" option enabled appear as a clickable chip in the printer card's HA-entities row (below the main Smart Plug controls). One click cut power to the printer instantly — including mid-print — even though the main Off button next to it already routes through a
`ConfirmModal` and shows an additional running-print warning. Fix: the HA-row click handler in `frontend/src/pages/PrintersPage.tsx` now branches on entity type — `script.*` entities keep firing instantly (a script is a fire-once trigger, not a power switch, and the existing "Run" semantic matches user expectation), but switch/light/anything-else entities now open a new `ConfirmModal` first. The modal reuses the same `variant="danger"` + running-print warning shape as the existing power-off confirmation: when `status?.state === 'RUNNING'` it shows the "WARNING: is currently printing! Toggling may cut power and interrupt the print" copy, and renders the default-variant "Toggle the Home Assistant entity ?" message otherwise. The entity name comes from `ha_entity_id` (with `name` fallback) so the modal disambiguates which of multiple plugs the click was on. i18n: new `printers.confirm.{haToggleTitle, haToggleMessage, haToggleWarning, haToggleButton}` keys added across all 8 locales (en + de + fr + it + ja + pt-BR + zh-CN + zh-TW translated to native, no English-fallback seeding). Full PrintersPage frontend suite (49 tests) still passes; build clean. - X2D / H2D dual-nozzle without AMS: filament mapping reported "Required filament type not found in printer" even when the spools were physically loaded (#1257) — Repro: X2D with 0 AMS units, two external spools (Ext-L feeding the left extruder, Ext-R feeding the right), print job specifies
`nozzle_id` per filament. The Schedule Print modal showed the orange "Filament Mapping (Type not found)" header and a forced manual slot picker, even though the matching PETG was sitting right there in the external spool holder. Cause: `frontend/src/hooks/useFilamentMapping.ts:18-19` derived dual-nozzle status solely from `printerStatus.ams_extruder_map` being non-empty. That map is populated from AMS units' info bits, so a dual-nozzle printer with zero AMS units gets an empty map → `hasDualNozzle = false` → external spools' `extruderId` falls through to `undefined` (line 64 ternary fallback). The downstream nozzle-aware filter at lines 117 / 377 (`available.filter((f) => f.extruderId === req.nozzle_id)`) then rejected every loaded filament because `undefined !== 0/1` for any non-null `nozzle_id`. The PETG was loaded, just incorrectly stripped from the candidate set during matching. Fix: widen the dual-nozzle inference to three independent signals OR'd together: (1) `nozzles[1].nozzle_diameter` populated — the most direct signal, set by `bambu_mqtt.py:2619-2621` only when the printer reports a `right_nozzle_diameter` MQTT field, so a populated value always implies real second-nozzle hardware; (2) `ams_extruder_map` non-empty — preserved as a fallback for the dual-nozzle-with-AMS case the original code already handled; (3) `vt_tray.length > 1` — single-nozzle printers (P1S / A1 / X1C) only have one external feed, so multiple external trays only exist on dual-nozzle hardware. The first signal alone is not sufficient because the backend `state.nozzles` defaults to a 2-entry list with empty `NozzleInfo()` stubs (`bambu_mqtt.py:160`) on every printer, single-nozzle included — `nozzles.length` would always be 2 on the wire and would have regressed every single-nozzle install. Affects all dual-nozzle printers running without AMS: X2D, H2D, X2 Pro.
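The three OR'd signals can be mirrored in a short predicate — the real code lives in TypeScript in `useFilamentMapping.ts`, so this Python sketch only illustrates the decision logic under the status-field shapes described above:

```python
def has_dual_nozzle(status: dict) -> bool:
    """Python mirror of the widened dual-nozzle inference (sketch only;
    field names follow the PrinterStatus shape described in the entry)."""
    nozzles = status.get("nozzles") or []
    # (1) second nozzle reports a real diameter; the backend's default
    #     2-entry stub list leaves this empty on single-nozzle printers
    if len(nozzles) > 1 and nozzles[1].get("nozzle_diameter"):
        return True
    # (2) AMS extruder map populated (dual-nozzle WITH AMS, the old signal)
    if status.get("ams_extruder_map"):
        return True
    # (3) more than one external tray only exists on dual-nozzle hardware
    if len(status.get("vt_tray") or []) > 1:
        return True
    return False
```

Note how a P1S-shaped status with the stub second nozzle entry stays `False` (the guard direction), while an AMS-less X2D trips signal (3).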
Tests: two new regressions in `src/__tests__/hooks/useFilamentMapping.test.ts`. `matches external spools per-extruder on dual-nozzle without AMS` pins the bug fix — it asserts each external spool gets the correct `extruderId` (1 for Ext-L id=254, 0 for Ext-R id=255) and that `computeAmsMapping` picks Ext-L for a left-nozzle requirement. `does not fabricate extruderId for single-nozzle with stub nozzles[1]` is the matching guard — it asserts that a P1S / A1 / X1C-shape PrinterStatus (with the default-stub second nozzle entry the backend always emits) does NOT trip the dual-nozzle inference, so single-nozzle external spools keep `extruderId=undefined` exactly as they did pre-fix. Together they pin both directions: a future change that re-breaks the X2D path fails CI, and one that mistakenly turns single-nozzle printers into dual-nozzle also fails CI. Full frontend suite (1891 tests across 138 files) green. - GCode Viewer had no in-app way to navigate back — the only exit was the browser's back button — Opening the GCode Viewer from a File Manager card or an Archive card calls
`navigate('/gcode-viewer?archive=…' | '?library_file=…')`, which mounts `GCodeViewerPage` as a full-height iframe inside the Layout shell. The page rendered nothing but the iframe, so once the third-party viewer's UI took over the content area there was no in-app affordance to return to the originating list — only the browser's back button. Reported by maziggy. Fix: added a thin back bar above the iframe in `frontend/src/pages/GCodeViewerPage.tsx` with an `ArrowLeft` icon button. The button label adapts to the entry point — "Back to Print Archives" when the URL carries `?archive=`, "Back to File Manager" when it carries `?library_file=`, generic "Back" otherwise (covers the rare deep-link / shared-URL case). Click prefers `navigate(-1)` so the user lands back in their original list with scroll position and filters preserved; it falls back to `/archives` or `/files` when the page was opened in a fresh tab and there's no SPA history to return to. Iframe height is now `flex: 1` inside a flex column under the bar instead of a hard-coded `calc(100vh - 3.5rem)` — the layout's existing fixed-header offset is unchanged, only the back bar (~36 px) is subtracted from the viewer's vertical real estate. i18n: new `gcodeViewer.{back,backToArchives,backToFiles}` namespace added to all 8 locales (en + de fully translated, fr/it/ja/pt-BR/zh-CN/zh-TW translated to native using each locale's existing page-title vocabulary — Druckarchiv/Dateimanager, Archives d'impression/Gestionnaire de fichiers, Archivi di stampa/Gestore file, 印刷アーカイブ/ファイル管理, Arquivos de impressão/Gerenciador de arquivos, 打印归档/文件管理器, 列印歸檔/檔案管理器). - Archives card's "Reprint" / "Schedule" / "Slice" button labels truncated to "Re..." / "Sc..." on narrow browser windows (#1249) — The action row on each archive card has six buttons: two labelled (Reprint + Schedule, or Slice when the file isn't sliced yet) plus four icon-only utilities (open in slicer, external link, globe, download, trash). The labelled buttons used
`flex-1` to share whatever space remained after the four fixed-width icon buttons, with the label rendered as `<span className="hidden sm:inline truncate">...</span>` — i.e. visible at any viewport ≥ 640 px, with `truncate` ellipsizing when there isn't room. The Tailwind viewport breakpoint can't see the card width. The page's grid grows column count alongside viewport (`md:grid-cols-2 lg:grid-cols-3 xl:grid-cols-4`), so cards stay roughly 320–380 px wide across breakpoints and the leftover ~30 px in each labelled button isn't enough for "Reprint", which lands on screen as "Re..." — repro'd from a small browser window in the reporter's case. Fix: breakpoint bumped from `hidden sm:inline` → `hidden xl:inline` on all three labelled buttons (Reprint at line 1106, Schedule at line 1117, Slice at line 1153 of `frontend/src/pages/ArchivesPage.tsx`). Labels now appear only at viewport ≥ 1280 px, where the cards (3-4 columns of ~320 px) actually have headroom for them; on narrow windows the buttons render icon-only with their existing `title=` tooltip kept intact for hover and assistive-tech disclosure. Trade-off accepted: a wide-viewport-with-wide-sidebar setup that compresses the card to under ~320 px will still see the truncation, but that's a corner case — the common "small browser window" path is fixed without restructuring the row. - Spool form's "Slicer Preset" dropdown silently dropped Local Profiles when Bambu Cloud was connected, and collapsed per-printer/per-nozzle variants of cloud and local presets into a single entry (#1248, reported by andretietz) — Two distinct defects in the same code path. Defect 1 (the reported bug):
`buildFilamentOptions` in `frontend/src/components/spool-form/utils.ts` was precedence-based — `if (cloudPresets.length > 0)` returned the cloud list and never reached the local-presets branch, so any Local Profile imported via Profiles → Local Profiles was silently invisible whenever the user was logged into Bambu Cloud (the same profile rendered fine with a green `Local` badge in the AMS Slot configuration modal). The wiki documents the dropdown as "merged and deduplicated" across cloud + local + built-in. Defect 2 (surfaced during fix verification): the spool form was collapsing all `Bambu Lab P1S 0.4 nozzle` / `Bambu Lab X1C 0.4 nozzle` / `Bambu Lab A1 0.4 nozzle` variants of "Bambu PLA Basic" into a single dropdown entry by stripping the printer suffix and dedup'ing by base name (one `Map.set` per family for cloud defaults, one per family for local presets). The AMS Slot modal lists each variant individually and filters by the active printer model, so the user observed strictly more entries in the AMS Slot than in the Add Spool modal even after the merge fix. The right semantic for the spool form — printer-agnostic by design, since a spool isn't bound to a printer — is to show every variant as its own row, exactly as if you'd summed the AMS Slot's per-printer-filtered output across all printers. Fix: rewrote `buildFilamentOptions` to (a) actually merge all three sources, dropping the precedence early-return, and (b) push each cloud `setting_id` and each `LocalPreset` row as its own `FilamentOption` instead of collapsing by `name.replace(/.*$/, '')`. `displayName` now keeps the full `printer 0.4 nozzle` suffix so users can pick the right variant. Built-in dedup against cloud setting_id is preserved (mirrors `ConfigureAmsSlotModal.tsx:498` exactly). Wired `api.getBuiltinFilaments()` into both callers — `SpoolFormModal` and `SpoolBuddyWriteTagPage`.
Persistence safety: the saved `slicer_filament` shape is unchanged — cloud picks still persist their `setting_id`, local picks still persist `preset.filament_type || String(preset.id)` (consumed by `backend/app/utils/filament_ids.py::normalize_slicer_filament`, which expects `GFL05`/`GFSL05` shapes; persisting the bare LocalPreset row id would break slicing). Local-preset `allCodes` now carries both the `filament_type` form and the `String(preset.id)` form so `findPresetOption` resolves both old (pre-fix) and new picks. React-key collision: with the collapse removed, two LocalPreset rows can share the same `code` if they share `filament_type`; the dropdown key in `FilamentSection.tsx` is now composed as `${option.code}::${option.name}` to stay unique. Tests: new `frontend/src/__tests__/components/spool-form/buildFilamentOptions.test.ts` with 9 cases — the #1248 regression case, "one entry per cloud setting_id, no printer collapse", "list each local preset individually", "printer suffix preserved in displayName", local `allCodes` carrying both shapes, the `GFA00` ↔ `GFSA00` built-in dedup, the all-empty fallback, and the alphabetical sort. The two existing `vi.mock('../../api/client')` blocks in `SpoolFormModal.test.tsx` and `SpoolFormBulk.test.tsx` were updated with the new `getBuiltinFilaments` stub. - SpoolBuddy install.sh re-run failed with
`Permission denied` on root-owned files in update mode — `download_spoolbuddy()` ran `git fetch + git checkout + git reset --hard` before the post-install chown at the end of the function. If a previous install left stray root-owned files in the tree (e.g. `static/assets/*` written by an earlier `sudo` run, or a frontend build that wrote as root), the `git reset --hard` step aborted with EACCES on the unlink/replace step before reaching the chown. The script then exited and the kiosk's underlying ownership problem persisted, so the next attempt would fail the same way. Fix: pre-emptively `chown -R spoolbuddy:spoolbuddy "$INSTALL_PATH"` in the update branch before any git operation runs. The script already runs as root (enforced by `check_root`), so the chown is always safe. The existing post-install chown at the end stays — it now mostly catches new files created during this run that need their ownership normalised. The same root cause showed up on the kiosk's runtime SSH update path (Bambuddy → kiosk: `git checkout dev && git reset --hard origin/dev` running as the `spoolbuddy` user), but that path can't `chown` without sudoers expansion — the install.sh fix is the immediate recovery, and re-running the install script restores a clean ownership baseline that the runtime updater can keep healthy thereafter. - SpoolBuddy SSH update aborted with
`TypeError: startswith first arg must be bytes or a tuple of bytes, not str` after the host-key store succeeded — `perform_ssh_update` calls `asyncssh.import_known_hosts(...)` to materialise an `SSHKnownHosts` object for `_run_ssh_command`'s `known_hosts=` keyword arg. Both call sites (the stored-key path at line 221 and the just-stored TOFU re-parse at line 272) passed `f"{ip} {key}\n".encode()` — i.e. `bytes`. asyncssh's parser does line-based string operations (`line.startswith('#')` with a `str` literal), so any `bytes` input crashes inside its loader with `TypeError`. The two `try/except` clauses caught only `(ValueError, asyncssh.Error)`, missing `TypeError`, so the crash bubbled up and aborted the whole update right after the schema fix successfully persisted the host key. Fix: drop the `.encode()` at both call sites — pass the str directly. Widened both except clauses to `(ValueError, TypeError, asyncssh.Error)` so any future asyncssh API surprise degrades to the existing fallback (TOFU mode without host-key verification, with a `logger.warning`) instead of crashing the update. Existing SSH tests all mocked `asyncssh.import_known_hosts` itself so they never reached the parser — added `test_perform_ssh_update_passes_str_not_bytes_to_import_known_hosts` to capture both call sites' arguments and assert `isinstance(arg, str)` so re-introducing `.encode()` fails CI immediately. - SpoolBuddy SSH update crashed on Postgres with
`value too long for type character varying(500)` when storing the device's RSA host key — `spoolbuddy_devices.ssh_host_key` was declared as `String(500)`, which is fine for SQLite (it ignores VARCHAR length) and for ed25519 host keys (~120 chars), but RSA host keys in OpenSSH format are typically 370 chars (2048-bit) → 544 chars (3072-bit) → ~720 chars (4096-bit). Postgres enforces the limit strictly, so any kiosk reporting an RSA-3072 or larger host key on the first SSH update aborted at the `UPDATE spoolbuddy_devices SET ssh_host_key=...` flush — the `git fetch + pip install + systemctl restart` may have run successfully, but the persistence of the TOFU host key failed and the device's update_status was never written. Fix: widened `ssh_host_key` from `String(500)` → `Text` on the model, plus an idempotent `ALTER TABLE spoolbuddy_devices ALTER COLUMN ssh_host_key TYPE TEXT` migration gated on `not is_sqlite()` (Postgres-only; SQLite is a no-op since it doesn't enforce VARCHAR length). Existing rows are preserved — `TYPE TEXT` is a metadata-only change on Postgres for `VARCHAR(N)` → `TEXT`, so it's a fast migration even on populated tables. Originally introduced in the H1 SSH-host-key TOFU security fix; the 500-char floor was a guess based on ed25519 sizes that the RSA case immediately blew past. - SpoolBuddy kiosk Settings → Update button returned "API keys cannot be used for administrative operations" — Same root cause as the four QuickMenu System buttons fixed in 0.2.4b3 (Restart Daemon / Restart Browser / Reboot / Shutdown), missed in that audit. The
`POST /spoolbuddy/devices/{id}/update` route (the kiosk's own Settings → Update Daemon button → SSH update on the kiosk device) was gated on `Permission.SETTINGS_UPDATE`, but `SETTINGS_UPDATE` is on the API-key deny-list (`_APIKEY_DENIED_PERMISSIONS` in `backend/app/core/auth.py`, introduced in PR #1241). Every kiosk-side request to update the daemon — regardless of the API key's scope set (Read / Print Queue / Control / Legacy) — tripped the deny-list and returned a hard 403 with that message. The 0.2.4b3 fix explicitly carved /update out with the reasoning "replaces the daemon binary, different threat surface" — but that reasoning was wrong: `restart_daemon` already replaces the running daemon process, so daemon-replacement is not a step up in blast radius. The SSH update is also strictly scoped to the single device the operator physically controls (`git fetch + pip install + systemctl restart` on that one host) — the same threat profile as the system commands already running on `INVENTORY_UPDATE`. Fix: lower `/spoolbuddy/devices/{id}/update` from `Permission.SETTINGS_UPDATE` → `Permission.INVENTORY_UPDATE`, matching the rest of the kiosk-scoped routes (calibration/tare, display, cancel-write, system/command, system/command-result, update-status). The main Bambuddy in-app updater at `POST /api/v1/updates/apply` keeps `SETTINGS_UPDATE` — that one operates on the Bambuddy host and is correctly fenced behind the deny-list. Tests: `test_trigger_update_requires_settings_update` (which pinned the broken behavior — 403 on an inventory-only key) is renamed to `test_trigger_update_accepts_inventory_update` and now asserts the inventory-only key reaches the device-state check (409 offline) instead of 403, so a future re-tightening of the gate surfaces immediately. The class-level docstring in `test_settings_api_key_scrubbing.py` was updated to reflect the corrected threat-model reasoning. - Printer file download 500'd on non-ASCII filenames; the same crash was latent in three sibling endpoints (#1245, reported by 1000Delta) —
`GET /api/v1/printers/{id}/files/download?path=...` raised `UnicodeEncodeError: 'latin-1' codec can't encode characters in position …` for any path whose filename carried non-ASCII characters (Chinese, Japanese, Arabic, accented Latin), reproducible against P2S firmware on macOS but not target-specific. Cause: the route shoved the raw filename straight into `Content-Disposition: attachment; filename="…"` — Starlette/uvicorn encodes response headers as latin-1, so anything outside U+0000..U+00FF crashed at write-time. The same pattern existed in three sibling endpoints reachable with user-controlled non-ASCII input: `GET /archives/{id}/qr` (uses `archive.print_name` from 3MF metadata, often non-ASCII), `GET /projects/{id}/export` (uses `project.name` — the existing sanitiser at `projects.py:1648` uses `c.isalnum()`, which passes non-ASCII Unicode through, so the crash propagated), and `_stream_pdf` in `labels.py` (latent — current callers pass ASCII-only template names, but the same shape would crash if a future caller passed user input). Fix: new helper `backend/app/utils/http.py::build_content_disposition(filename, disposition="attachment")` returns an RFC 6266-compliant header with both an ASCII-stripped legacy `filename="..."` fallback and an RFC 5987 `filename*=UTF-8''<percent-encoded>` parameter — every modern browser (Chrome / Firefox / Safari / Edge) prefers the `*=` form when present, so the original filename round-trips intact through Save-As; the ASCII fallback covers IE10-era clients. The helper is wired in at all four call sites in one PR (per project rule: no deferred follow-ups). Tests: 20 unit tests in `test_http_utils.py` pinning ASCII-fallback rules across plain ASCII / Chinese / Japanese / Arabic / French diacritics / `.gcode.3mf` double-extension / quote-injection / backslash-injection / empty-string and `___.zip` edge cases, asserting the helper's output round-trips through latin-1 (the crash condition) for every test input.
6 new integration tests in `test_printers_api.py::TestPrintersAPI::test_download_printer_file_non_ascii_filename` parametrized over the same character classes (the original `龙泡泡石墩子_p2s_ok.gcode.3mf` case from #1245 is included) — each asserts the route returns 200 with an unmangled body, the ASCII fallback in the header matches expectations, and `unquote(filename*=)` round-trips back to the original Unicode filename. Thanks to 1000Delta for the diagnosis and the proof-of-concept patch on `printers.py` — the broader audit (three sibling endpoints, helper extraction, latin-1 round-trip assertions) was done on top of that.
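The dual-parameter header idea can be sketched as follows — a minimal illustration of the RFC 6266 / RFC 5987 approach described above, not the actual `build_content_disposition` implementation (the fallback-sanitising details are assumptions):

```python
from urllib.parse import quote

def build_content_disposition_sketch(filename: str, disposition: str = "attachment") -> str:
    """Emit both a latin-1-safe legacy `filename=` and an RFC 5987
    `filename*=` parameter, so the header never crashes the latin-1
    encoder and modern browsers still recover the full Unicode name."""
    # Legacy parameter: force to ASCII, neutralise quote/backslash injection
    fallback = filename.encode("ascii", "replace").decode("ascii")
    fallback = fallback.replace("\\", "_").replace('"', "_")
    # Extended parameter: percent-encoded UTF-8 (RFC 5987)
    extended = quote(filename, safe="")
    return f'{disposition}; filename="{fallback}"; filename*=UTF-8\'\'{extended}'
```

The crash condition is exactly the latin-1 round-trip, so a test in the spirit of the entry asserts `header.encode("latin-1")` succeeds for Chinese, Japanese, and accented inputs while `unquote` of the `filename*=` part restores the original name.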