github maziggy/bambuddy v0.2.4b4-daily.20260510
Daily Beta Build v0.2.4b4-daily.20260510

Pre-release · 3 hours ago

Note

This is a daily beta build (2026-05-10). It contains the latest fixes and improvements but may have undiscovered issues.

Docker users: Update by pulling the new image:

docker pull ghcr.io/maziggy/bambuddy:daily

or

docker pull maziggy/bambuddy:daily


**Tip:** Use [Watchtower](https://containrrr.dev/watchtower/) to automatically update when new daily builds are pushed.

Added

  • Build-plate icon on archive cards + uniform printer/model line (#1253, reported by tonygauderman) — Archive cards now show an OrcaSlicer-style bed icon in the printer/model row indicating which build plate the print was sliced for (Cool / Cool SuperTack / Engineering / High Temp / Textured PEI / Smooth PEI), with the full plate name in the hover tooltip. Closes the gap where users had to remember which plate matched a re-print, or open the source 3MF in a slicer just to read the bed setting. The card row is also unified: archives with a real Bambuddy-printer association used to render as H2D-1 GCODE … while slicer-only uploads rendered as Sliced for X1C GCODE … — the same line in two different shapes. Dropped the Sliced for prefix so both render as a uniform <name-or-model> [bed-icon] GCODE <hash> row, scanning the same regardless of provenance. Backend: new bed_type column on print_archives (idempotent ALTER TABLE migration; SQLite + Postgres safe), populated from curr_bed_type in Metadata/slice_info.config (per-plate metadata, the authoritative source — that's the bed type actually sent to the printer for the exported plate), with a fallback to the top-level curr_bed_type in Metadata/project_settings.config for older 3MF shapes. Wired through both code paths that produce archive responses: archive_to_response() (the hand-rolled dict converter at archives.py:97 — easy to miss: a schema-only change is silently dropped there because the route bypasses Pydantic's from_attributes) and the /rescan endpoint, so old archives can be re-parsed via the existing per-archive Rescan button. Newly ingested archives get the value automatically. Backfill script: scripts/backfill_archive_bed_type.py (with --dry-run) re-opens every NULL archive's 3MF on disk and populates the column — opt-in for users who want their entire history covered without waiting for natural turnover.
It auto-loads .env from the project root before importing backend modules (since core/config.py:52 reads DATABASE_URL from os.environ at import time, not from pydantic-settings at Settings() time), prints the resolved DB URL with credentials redacted on every run so operators can confirm they're hitting the intended database (Postgres / SQLite — Bambuddy supports both per #1219's DATABASE_URL pathway), and calls init_db() itself before querying, so the migration applies even if the script is run against a database the backend hasn't touched yet. Frontend: 6 OrcaSlicer-style PNGs ship in frontend/public/img/bed/ (under /img/ because that path was already statically mounted at main.py:5244; a top-level /bed-icons/ path was attempted first but hit the SPA catch-all and returned index.html as text/html, which the browser rendered as nothing). New utils/bedType.ts maps slicer strings (case-insensitive) to icon + human-readable label and covers Bambu Studio's and OrcaSlicer's diverging spellings for the same physical plate (e.g. Cool Plate / PC Plate; Cool Plate (SuperTack) / Supertack Plate / Bambu Cool Plate SuperTack). Renders on both the card-grid view and the list view in ArchivesPage.tsx. An unmapped or NULL bed_type simply omits the icon, so cards stay clean for archives created before this change. Note on icon mapping: bed_pei.png → Textured PEI, bed_pei_cool.png → Smooth PEI is a best-guess from the OrcaSlicer asset names — swap the two paths in bedType.ts if a future user reports the icons reversed for their plate.
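The idempotent-migration shape described above can be sketched in miniature. This is an illustrative SQLite-only sketch (the real migration also handles Postgres); the helper name and its check-before-alter strategy are assumptions, not the project's actual code:

```python
import sqlite3

def add_column_if_missing(conn, table: str, column: str, col_type: str = "TEXT") -> bool:
    """Idempotent column add: inspect the live schema first, so re-running
    the migration against an already-migrated database is a no-op."""
    existing = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
    if column in existing:
        return False  # already migrated -- nothing to do
    conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {col_type}")
    return True

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE print_archives (id INTEGER PRIMARY KEY)")
print(add_column_if_missing(conn, "print_archives", "bed_type"))  # True: column added
print(add_column_if_missing(conn, "print_archives", "bed_type"))  # False: no-op on re-run
```

A Postgres variant would query information_schema.columns instead of PRAGMA, but the check-then-alter pattern is the same.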
  • Spool labels: new 40×30 mm template, hex colour code, bolder brand line (#809 follow-up, requested by oliboehm) — Three small enhancements to the spool-label printer rolled into one change. (1) New box_40x30 template — 40×30 mm single label, common DK/Brother roll size. Added to _SINGLE_LABEL_SIZES_MM in backend/app/services/label_renderer.py and to the request body's Literal[...] enum in backend/app/api/routes/labels.py; height is ≥ 20 mm so it routes through the existing roomy layout (swatch + QR + full text column). (2) Colour hex code on every label — new _hex_code_label() helper formats data.rgba as #RRGGBB (alpha-stripped, uppercased to match the inventory UI's colour-picker convention) and returns "" for missing/malformed input so the caller skips drawing instead of throwing. Rendered as a small line under the material/subtype line in the roomy layout, and as a third line above the spool ID in the tight (AMS) layout — useful when several near-identical material/colour spools sit next to each other in the AMS or on a shelf. (3) Brand line bigger + bold — the brand on every label now renders in Helvetica-Bold instead of Helvetica regular, with size bumped 5.5pt → 6.5pt on the tight layout and 7pt → 8pt on the roomy layout, so it's the most legible non-ID field at arm's length. Wiring: SpoolLabelTemplate union in frontend/src/api/client.ts extended with 'box_40x30'; LabelTemplatePickerModal gets a new TEMPLATE_OPTIONS entry for it; inventory.labels.templates.box40x30.{label,hint} keys added across all 8 locales (en + de fully translated, fr/it/ja/pt-BR/zh-CN/zh-TW translated to native, with the existing per-key fallback in the modal as a safety net). The 5-template grid still wraps to 2 columns on small viewports per #1230's fix; modal regression test was widened from 4 to 5 template buttons. 
Tests: ALL_TEMPLATES parametrize tuple in test_label_renderer.py extended with box_40x30 so all 7 generic invariants (PDF header, empty-input, multi-colour, missing-fields, malformed-rgba, long strings, sheet pagination) cover the new template; new test_hex_color_code_rendered_when_rgba_set (asserts #F5E6D3 appears in the uncompressed PDF for both 40×30 and 62×29), test_hex_color_code_skipped_when_rgba_invalid (regex pin: no #RRGGBB shape on the label when rgba is malformed, except the spool ID's #42), and test_brand_rendered_in_bold_per_809_followup (asserts Helvetica-Bold font reference is in the PDF — caught a regression if the brand line ever reverts to regular weight). All 33 backend tests + 15 frontend modal tests pass; ruff clean.
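A minimal sketch of the hex-code formatting described above, assuming data.rgba arrives as a hex string with an optional leading # and optional alpha byte — the real _hex_code_label's input shape may differ:

```python
import re

_HEX_RE = re.compile(r"^#?([0-9a-fA-F]{6})([0-9a-fA-F]{2})?$")

def hex_code_label(rgba: object) -> str:
    """Format an RGBA hex string as '#RRGGBB': alpha stripped, uppercased.
    Returns '' on missing/malformed input so the caller can skip drawing
    the line instead of raising."""
    if not isinstance(rgba, str):
        return ""
    m = _HEX_RE.match(rgba.strip())
    if not m:
        return ""
    return "#" + m.group(1).upper()

print(hex_code_label("f5e6d3ff"))  # -> #F5E6D3 (alpha byte dropped)
print(hex_code_label("oops"))      # returns '' -> caller skips the line
```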
  • Copy spool — duplicate any spool's settings into a fresh inventory row in two clicks (#1234, PR #1246 by MiguelAngelLV) — Adds a copy button (Copy icon) next to the existing edit button on every spool in the inventory page across all three views (table row, card, grouped table inner row). Clicking it opens the existing SpoolFormModal pre-filled with every field from the source spool — material, brand, color, slicer preset, label/core/cost, K-profiles, all of it — except weight_used which is reset to 0 (since the new spool starts full) and the RFID identity fields (tag_uid, tray_uuid, tag_type, data_origin) which aren't part of the form payload anyway, so the new spool is its own physical roll. Save calls api.createSpool (or api.createSpoolmanInventorySpool in Spoolman mode — both inherit the dispatch routing for free). Closes the long-running gap where users with many near-identical spools (e.g. five 1 kg PETG-CF rolls bought in a single order) had to re-enter every field from scratch on each one. Implementation shape: SpoolFormModalProps.mode: 'create' | 'edit' | 'copy' (exported as SpoolFormMode) replaces the previous isEditing = !!spool heuristic — every existing call site in InventoryPage.tsx was updated to pass the explicit mode, and the modal's title / submit-button label / weight-reset gate / submit-route branching all key on mode directly. The onCopy callback is optional on SpoolCard, SpoolTableRow, and SpoolTableGroup (matches the existing onPrintLabel? pattern), so the button is conditionally rendered and other consumers of those subcomponents don't get a copy affordance forced on them. Card-view and table-row buttons stop click propagation so clicking copy doesn't also fire the parent row's edit handler. 
Quick Add interaction: the Quick Add toggle is now gated on mode === 'create' (previously !isEditing), so it stays out of copy mode — otherwise a user could enable Quick Add, bump quantity to N under the singular "Copy Spool" title, and silently bulk-create N copies via bulkCreateMutation. i18n: new inventory.copySpool key across all 8 locales (en + de translated, fr/it/ja/pt-BR/zh-CN/zh-TW seeded with English fallback per project flow). Tests: 3 new in SpoolFormModal.test.tsx (a SpoolFormModal copy mode describe block — title shows "Copy Spool", save calls createSpool not updateSpool, weight_used is reset to 0 in the create payload when copying a spool with non-zero usage) and 2 new in InventoryPageCopyButton.test.tsx (table-row copy button click → "Copy Spool" heading; cards-view copy button click → same heading after switching view modes) — guarding against the three call sites drifting apart. Existing SpoolFormBulk.test.tsx and SpoolFormModal.test.tsx renders that omitted the mode prop were updated with an explicit mode="create" so the tightened Quick Add gate doesn't hide the toggle from them. Both InventoryPageCopyButton.test.tsx and InventoryPageDeepLink.test.tsx gained MSW handlers for the modal's open-time fetches (/api/v1/cloud/status, /api/v1/cloud/local-presets, /api/v1/cloud/builtin-filaments, /api/v1/inventory/color-catalog, /api/v1/inventory/spool-catalog, /api/v1/printers/) — without them MSW passes through to the real network, gets ECONNREFUSED, and the rejected fetch resolves after the test environment is torn down, surfacing as a flaky "window is not defined" unhandled rejection in the modal's setLoadingCloudPresets(false) finally block (a pre-existing flake hit ~1 in 3 full-suite runs at PR head).
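The copy semantics above — duplicate every setting, reset usage, drop RFID identity — can be sketched as a pure function over a plain dict; field names beyond those quoted in the text are illustrative:

```python
# RFID identity fields named in the release note: the copy must be its own
# physical roll, so these never carry over.
RFID_IDENTITY_FIELDS = {"tag_uid", "tray_uuid", "tag_type", "data_origin"}

def copy_spool_payload(source: dict) -> dict:
    """Build a create-payload from an existing spool: keep every setting,
    reset weight_used to 0 (the new spool starts full), and drop the row
    id plus RFID identity fields."""
    payload = {
        k: v for k, v in source.items()
        if k != "id" and k not in RFID_IDENTITY_FIELDS
    }
    payload["weight_used"] = 0
    return payload

src = {"id": 7, "material": "PETG-CF", "brand": "Bambu",
       "weight_used": 412, "tag_uid": "ABCD1234", "data_origin": "rfid"}
print(copy_spool_payload(src))  # material/brand kept, usage reset, identity gone
```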

Fixed

  • X2D / H2D dual-nozzle without AMS: filament mapping reported "Required filament type not found in printer" even when the spools were physically loaded (#1257) — Repro: X2D with 0 AMS units, two external spools (Ext-L feeding left extruder, Ext-R feeding right), print job specifies nozzle_id per filament. The Schedule Print modal showed the orange "Filament Mapping (Type not found)" header and a forced manual slot picker, even though the matching PETG was sitting right there in the external spool holder. Cause: frontend/src/hooks/useFilamentMapping.ts:18-19 derived dual-nozzle status solely from printerStatus.ams_extruder_map being non-empty. That map is populated from AMS units' info bits, so a dual-nozzle printer with zero AMS units gets an empty map → hasDualNozzle = false → external spools' extruderId falls through to undefined (line 64 ternary fallback). The downstream nozzle-aware filter at lines 117 / 377 (available.filter((f) => f.extruderId === req.nozzle_id)) then rejected every loaded filament because undefined !== 0/1 for any non-null nozzle_id. The PETG was loaded, just incorrectly stripped from the candidate set during matching. Fix: widen the dual-nozzle inference to three independent signals OR'd together: (1) nozzles[1].nozzle_diameter populated — the most direct signal, set by bambu_mqtt.py:2619-2621 only when the printer reports a right_nozzle_diameter MQTT field, so a populated value always implies real second-nozzle hardware; (2) ams_extruder_map non-empty — preserved as fallback for the dual-nozzle-with-AMS case the original code already handled; (3) vt_tray.length > 1 — single-nozzle printers (P1S / A1 / X1C) only have one external feed, so multiple external trays only exist on dual-nozzle hardware. 
The obvious simpler check — nozzles.length > 1 — would not work, which is why signal (1) requires a populated diameter: the backend's state.nozzles defaults to a 2-entry list of empty NozzleInfo() stubs (bambu_mqtt.py:160) on every printer, single-nozzle included, so nozzles.length is always 2 on the wire and a length check would have regressed every single-nozzle install. Affects all dual-nozzle printers running without AMS: X2D, H2D, X2 Pro. Tests: two new regressions in src/__tests__/hooks/useFilamentMapping.test.ts. "matches external spools per-extruder on dual-nozzle without AMS" pins the bug fix — it asserts each external spool gets the correct extruderId (1 for Ext-L id=254, 0 for Ext-R id=255) and that computeAmsMapping picks Ext-L for a left-nozzle requirement. "does not fabricate extruderId for single-nozzle with stub nozzles[1]" is the matching guard — it asserts that a P1S / A1 / X1C-shape PrinterStatus (with the default-stub second-nozzle entry the backend always emits) does NOT trip the dual-nozzle inference, so single-nozzle external spools keep extruderId=undefined exactly as they did pre-fix. Together they pin both directions: a future change that re-breaks the X2D path fails CI, and one that mistakenly turns single-nozzle printers into dual-nozzle also fails CI. Full frontend suite (1891 tests across 138 files) green.
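The three OR'd signals can be sketched as a standalone predicate. This is a simplified model of the hook's logic using plain dicts in place of the real PrinterStatus shape — key names follow the fields quoted above, not the actual TypeScript types:

```python
def has_dual_nozzle(status: dict) -> bool:
    """OR of three independent signals, any one of which implies real
    second-nozzle hardware (simplified model of the fixed inference)."""
    nozzles = status.get("nozzles") or []
    # (1) populated second-nozzle diameter -- the stub entry the backend
    #     always emits has no diameter, so this never misfires
    if len(nozzles) > 1 and nozzles[1].get("nozzle_diameter"):
        return True
    # (2) non-empty AMS extruder map -- the dual-nozzle-with-AMS case
    if status.get("ams_extruder_map"):
        return True
    # (3) more than one external (vt) tray only exists on dual-nozzle hardware
    return len(status.get("vt_tray") or []) > 1

# X2D, no AMS, two external spools -> dual-nozzle via signal (3)
x2d = {"nozzles": [{"nozzle_diameter": 0.4}, {}], "ams_extruder_map": {},
       "vt_tray": [{"id": 254}, {"id": 255}]}
# P1S with the backend's default second-nozzle stub -> stays single-nozzle
p1s = {"nozzles": [{"nozzle_diameter": 0.4}, {}], "ams_extruder_map": {},
       "vt_tray": [{"id": 254}]}
print(has_dual_nozzle(x2d), has_dual_nozzle(p1s))  # True False
```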
  • GCode Viewer had no in-app way to navigate back — the only exit was the browser's back button — Opening the GCode Viewer from a File Manager card or an Archive card calls navigate('/gcode-viewer?archive=…' | '?library_file=…'), which mounts GCodeViewerPage as a full-height iframe inside the Layout shell. The page rendered nothing but the iframe, so once the third-party viewer's UI took over the content area there was no in-app affordance to return to the originating list — only the browser's back button. Reported by maziggy. Fix: added a thin back bar above the iframe in frontend/src/pages/GCodeViewerPage.tsx with an ArrowLeft icon button. The button label adapts to the entry point — Back to Print Archives when the URL carries ?archive=, Back to File Manager when it carries ?library_file=, generic Back otherwise (covers the rare deep-link / shared-URL case). Click prefers navigate(-1) so the user lands back in their original list with scroll position and filters preserved; falls back to /archives or /files when the page was opened in a fresh tab and there's no SPA history to return to. Iframe height is now flex: 1 inside a flex column under the bar instead of a hard-coded calc(100vh - 3.5rem) — the layout's existing fixed-header offset is unchanged, only the back bar (~36 px) is subtracted from the viewer's vertical real estate. i18n: new gcodeViewer.{back,backToArchives,backToFiles} namespace added to all 8 locales (en + de fully translated, fr/it/ja/pt-BR/zh-CN/zh-TW translated to native using each locale's existing page-title vocabulary — Druckarchiv/Dateimanager, Archives d'impression/Gestionnaire de fichiers, Archivi di stampa/Gestore file, 印刷アーカイブ/ファイル管理, Arquivos de impressão/Gerenciador de arquivos, 打印归档/文件管理器, 列印歸檔/檔案管理器).
  • Archive cards' "Reprint" / "Schedule" / "Slice" button labels truncated to "Re..." / "Sc..." on narrow browser windows (#1249) — The action row on each archive card has six buttons: two labelled (Reprint + Schedule, or Slice when the file isn't sliced yet) plus four icon-only utilities (open in slicer, external link, globe, download, trash). The labelled buttons used flex-1 to share whatever space remained after the four fixed-width icon buttons, with the label rendered as <span className="hidden sm:inline truncate">...</span> — i.e. visible at any viewport ≥ 640 px, with truncate ellipsizing when there isn't room. But a Tailwind viewport breakpoint can't see the card width: the page's grid grows its column count alongside the viewport (md:grid-cols-2 lg:grid-cols-3 xl:grid-cols-4), so cards stay roughly 320–380 px wide across breakpoints, and the leftover ~30 px in each labelled button isn't enough for "Reprint", which lands on screen as "Re..." — repro'd from a small browser window in the reporter's case. Fix: the breakpoint was bumped from hidden sm:inline to hidden xl:inline on all three labelled buttons (Reprint at line 1106, Schedule at line 1117, Slice at line 1153 of frontend/src/pages/ArchivesPage.tsx). Labels now appear only at viewport ≥ 1280 px, where the cards (3–4 columns of ~320 px) actually have headroom for them; on narrow windows the buttons render icon-only with their existing title= tooltip kept intact for hover and assistive-tech disclosure. Trade-off accepted: a wide-viewport-with-wide-sidebar setup that compresses the card to under ~320 px will still see the truncation, but that's a corner case — the common "small browser window" path is fixed without restructuring the row.
  • Spool form's "Slicer Preset" dropdown silently dropped Local Profiles when Bambu Cloud was connected, and collapsed per-printer/per-nozzle variants of cloud and local presets into a single entry (#1248, reported by andretietz) — Two distinct defects in the same code path. Defect 1 (the reported bug): buildFilamentOptions in frontend/src/components/spool-form/utils.ts was precedence-based — if (cloudPresets.length > 0) returned the cloud list and never reached the local-presets branch, so any Local Profile imported via Profiles → Local Profiles was silently invisible whenever the user was logged into Bambu Cloud (the same profile rendered fine with a green Local badge in the AMS Slot configuration modal). The wiki documents the dropdown as "merged and deduplicated" across cloud + local + built-in. Defect 2 (surfaced during fix verification): the spool form collapsed the Bambu Lab P1S 0.4 nozzle / Bambu Lab X1C 0.4 nozzle / Bambu Lab A1 0.4 nozzle variants of "Bambu PLA Basic" into a single dropdown entry by stripping the printer suffix and dedup'ing by base name (one Map.set per family for cloud defaults, one per family for local presets). The AMS Slot modal lists each variant individually and filters by the active printer model, so the user observed strictly more entries in the AMS Slot than in the Add Spool modal even after the merge fix. The right semantic for the spool form — printer-agnostic by design, since a spool isn't bound to a printer — is to show every variant as its own row, exactly as if you'd summed the AMS Slot's per-printer-filtered output across all printers. Fix: rewrote buildFilamentOptions to (a) actually merge all three sources, dropping the precedence early-return, and (b) push each cloud setting_id and each LocalPreset row as its own FilamentOption instead of collapsing by suffix-stripped base name. displayName now keeps the full printer + 0.4 nozzle suffix so users can pick the right variant.
Built-in dedup against cloud setting_id is preserved (mirrors ConfigureAmsSlotModal.tsx:498 exactly). api.getBuiltinFilaments() is wired into both callers — SpoolFormModal and SpoolBuddyWriteTagPage. Persistence safety: the saved slicer_filament shape is unchanged — cloud picks still persist their setting_id, local picks still persist preset.filament_type || String(preset.id) (consumed by backend/app/utils/filament_ids.py::normalize_slicer_filament, which expects GFL05/GFSL05 shapes; persisting the bare LocalPreset row id would break slicing). Local-preset allCodes now carries both the filament_type form and the String(preset.id) form so findPresetOption resolves both old (pre-fix) and new picks. React-key collision: with the collapse removed, two LocalPreset rows can share the same code if they share filament_type; the dropdown key in FilamentSection.tsx is now composed as ${option.code}::${option.name} to stay unique. Tests: new frontend/src/__tests__/components/spool-form/buildFilamentOptions.test.ts with 9 cases — the #1248 regression case, "one entry per cloud setting_id, no printer collapse", "list each local preset individually", "printer suffix preserved in displayName", local allCodes carrying both shapes, the GFA00 / GFSA00 built-in dedup, the all-empty fallback, and the alphabetical sort. The two existing vi.mock('../../api/client') blocks in SpoolFormModal.test.tsx and SpoolFormBulk.test.tsx were updated with the new getBuiltinFilaments stub.
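The merge-without-precedence shape can be sketched with simplified option shapes. The real FilamentOption carries more fields, and the setting_id strings in the usage example are illustrative:

```python
def build_filament_options(cloud: list, local: list, builtin: list) -> list:
    """Merge all three sources (no precedence early-return), keep one row
    per variant instead of collapsing by base name, dedup built-ins against
    cloud setting_ids, and sort alphabetically. Options are simplified to
    (code, display_name) tuples here."""
    options, cloud_ids = [], set()
    for p in cloud:                       # one row per cloud setting_id
        cloud_ids.add(p["setting_id"])
        options.append((p["setting_id"], p["name"]))
    for p in local:                       # one row per LocalPreset row
        options.append((str(p["id"]), p["name"]))
    for p in builtin:                     # built-ins deduped against cloud
        if p["setting_id"] not in cloud_ids:
            options.append((p["setting_id"], p["name"]))
    return sorted(options, key=lambda o: o[1].lower())

cloud = [{"setting_id": "GFSL99_X1C", "name": "PLA @X1C 0.4 nozzle"},
         {"setting_id": "GFSL99_P1S", "name": "PLA @P1S 0.4 nozzle"}]
local = [{"id": 3, "name": "My PETG @X1C 0.4 nozzle"}]
builtin = [{"setting_id": "GFSL99_X1C", "name": "dup"},  # already in cloud -> dropped
           {"setting_id": "GFB99", "name": "ABS"}]
print(build_filament_options(cloud, local, builtin))  # 4 rows, per-variant
```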
  • SpoolBuddy install.sh re-run failed with Permission denied on root-owned files in update mode — download_spoolbuddy() ran git fetch + git checkout + git reset --hard before the post-install chown at the end of the function. If a previous install left stray root-owned files in the tree (e.g. static/assets/* written by an earlier sudo run, or a frontend build that wrote as root), the git reset --hard step aborted with EACCES on the unlink/replace step before reaching the chown. The script then exited, the kiosk's underlying ownership problem persisted, and the next attempt failed the same way. Fix: pre-emptively chown -R spoolbuddy:spoolbuddy "$INSTALL_PATH" in the update branch before any git operation runs. The script already runs as root (enforced by check_root), so the chown is always safe. The existing post-install chown at the end stays — it now mostly catches new files created during this run that need their ownership normalised. The same root cause showed up on the kiosk's runtime SSH update path (Bambuddy → kiosk: git checkout dev && git reset --hard origin/dev running as the spoolbuddy user), but that path can't chown without a sudoers expansion — the install.sh fix is the immediate recovery, and re-running the install script restores a clean ownership baseline that the runtime updater can keep healthy thereafter.
  • SpoolBuddy SSH update aborted with TypeError: startswith first arg must be bytes or a tuple of bytes, not str after the host-key store succeeded — perform_ssh_update calls asyncssh.import_known_hosts(...) to materialise an SSHKnownHosts object for _run_ssh_command's known_hosts= keyword arg. Both call sites (the stored-key path at line 221 and the just-stored TOFU re-parse at line 272) passed f"{ip} {key}\n".encode() — i.e. bytes. asyncssh's parser does line-based string operations (line.startswith('#') with a str literal), so any bytes input crashes inside its loader with TypeError. The two try/except clauses caught only (ValueError, asyncssh.Error), missing TypeError, so the crash bubbled up and aborted the whole update right after the schema fix had successfully persisted the host key. Fix: drop the .encode() at both call sites — pass the str directly. Both except clauses were widened to (ValueError, TypeError, asyncssh.Error) so any future asyncssh API surprise degrades to the existing fallback (TOFU mode without host-key verification, with a logger.warning) instead of crashing the update. Existing SSH tests all mocked asyncssh.import_known_hosts itself, so they never reached the parser — added test_perform_ssh_update_passes_str_not_bytes_to_import_known_hosts to capture both call sites' arguments and assert isinstance(arg, str), so re-introducing .encode() fails CI immediately.
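The failure shape is easy to reproduce outside asyncssh — any line-based parser that calls startswith with a str literal works on str input but raises TypeError on bytes (the helper below is just a demo, not project code):

```python
def crashes_like_asyncssh_parser(data) -> bool:
    """Feed data through the same pattern asyncssh's known-hosts loader
    uses (line.startswith('#') with a str prefix) and report whether it
    blows up -- str input is fine, bytes input raises TypeError."""
    try:
        for line in data.splitlines():
            line.startswith('#')  # str prefix against a bytes line -> TypeError
        return False
    except TypeError:
        return True

entry = "192.0.2.1 ssh-ed25519 AAAA...\n"
print(crashes_like_asyncssh_parser(entry))           # False: str parses fine
print(crashes_like_asyncssh_parser(entry.encode()))  # True: bytes crash, hence the fix
```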
  • SpoolBuddy SSH update crashed on Postgres with value too long for type character varying(500) when storing the device's RSA host key — spoolbuddy_devices.ssh_host_key was declared as String(500), which is fine for SQLite (it ignores VARCHAR length) and for ed25519 host keys (~120 chars), but RSA host keys in OpenSSH format are typically 370 chars (2048-bit), 544 chars (3072-bit), or ~720 chars (4096-bit). Postgres enforces the limit strictly, so any kiosk reporting an RSA-3072 or larger host key on the first SSH update aborted at the UPDATE spoolbuddy_devices SET ssh_host_key=... flush — the git fetch + pip install + systemctl restart may have run successfully, but the persistence of the TOFU host key failed and the device's update_status was never written. Fix: widened ssh_host_key from String(500) to Text on the model, plus an idempotent ALTER TABLE spoolbuddy_devices ALTER COLUMN ssh_host_key TYPE TEXT migration gated on not is_sqlite() (Postgres-only; SQLite is a no-op since it doesn't enforce VARCHAR length). Existing rows are preserved — TYPE TEXT is a metadata-only change on Postgres for VARCHAR(N) → TEXT, so it's a fast migration even on populated tables. The column was originally introduced in the H1 SSH-host-key TOFU security fix; the 500-char floor was a guess based on ed25519 sizes that the RSA case immediately blew past.
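Both halves of the reasoning — SQLite ignoring the declared VARCHAR length, and the dialect-gated widening — can be demonstrated in a few lines. The gate function is an illustrative simplification, not the project's migration code:

```python
import sqlite3

# Half 1: SQLite happily stores a ~720-char (RSA-4096-sized) value in a
# column declared VARCHAR(500) -- only Postgres enforces the limit.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE spoolbuddy_devices (id INTEGER, ssh_host_key VARCHAR(500))")
rsa4096_sized = "A" * 720
conn.execute("INSERT INTO spoolbuddy_devices VALUES (1, ?)", (rsa4096_sized,))
stored, = conn.execute("SELECT ssh_host_key FROM spoolbuddy_devices").fetchone()
print(len(stored))  # 720 -- declared length ignored

# Half 2: the migration is therefore gated by dialect -- no-op on SQLite,
# a metadata-only TYPE change on Postgres.
def ssh_host_key_widening_sql(is_sqlite: bool):
    return None if is_sqlite else (
        "ALTER TABLE spoolbuddy_devices ALTER COLUMN ssh_host_key TYPE TEXT"
    )
```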
  • SpoolBuddy kiosk Settings → Update button returned "API keys cannot be used for administrative operations" — Same root cause as the four QuickMenu System buttons fixed in 0.2.4b3 (Restart Daemon / Restart Browser / Reboot / Shutdown), missed in that audit. The POST /spoolbuddy/devices/{id}/update route (kiosk's own Settings → Update Daemon button → SSH update on the kiosk device) was gated on Permission.SETTINGS_UPDATE, but SETTINGS_UPDATE is on the API-key deny-list (_APIKEY_DENIED_PERMISSIONS in backend/app/core/auth.py, introduced in PR #1241). Every kiosk-side request to update the daemon — regardless of the API key's scope set (Read / Print Queue / Control / Legacy) — tripped the deny-list and returned a hard 403 with that message. The 0.2.4b3 fix explicitly carved /update out with the reasoning "replaces the daemon binary, different threat surface" — but that reasoning was wrong: restart_daemon already replaces the running daemon process, so daemon replacement is not a step up in blast radius. The SSH update is also strictly scoped to the single device the operator physically controls (git fetch + pip install + systemctl restart on that one host) — the same threat profile as the system commands already running on INVENTORY_UPDATE. Fix: lower /spoolbuddy/devices/{id}/update from Permission.SETTINGS_UPDATE to Permission.INVENTORY_UPDATE, matching the rest of the kiosk-scoped routes (calibration/tare, display, cancel-write, system/command, system/command-result, update-status). The main Bambuddy in-app updater at POST /api/v1/updates/apply keeps SETTINGS_UPDATE — that one operates on the Bambuddy host and is correctly fenced behind the deny-list.
Tests: test_trigger_update_requires_settings_update (which pinned the broken behavior — 403 on inventory-only key) is renamed to test_trigger_update_accepts_inventory_update and now asserts the inventory-only key reaches the device-state check (409 offline) instead of 403, so a future re-tightening of the gate surfaces immediately. Class-level docstring in test_settings_api_key_scrubbing.py updated to reflect the corrected threat-model reasoning.
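The deny-list gate can be sketched in miniature — an illustrative simplification of the check in auth.py, not its actual code; the enum members are the two permissions named above:

```python
from enum import Enum, auto

class Permission(Enum):
    INVENTORY_UPDATE = auto()  # kiosk-scoped operations
    SETTINGS_UPDATE = auto()   # host-level administrative operations

# Permissions on this set are never grantable to API keys, whatever the
# key's scope set -- routes requiring them hard-403 for any API key.
APIKEY_DENIED_PERMISSIONS = {Permission.SETTINGS_UPDATE}

def api_key_allowed(required: Permission) -> bool:
    """An API key passes this gate only when the route's required
    permission is not on the deny-list; scope and device-state checks
    would come after."""
    return required not in APIKEY_DENIED_PERMISSIONS

print(api_key_allowed(Permission.SETTINGS_UPDATE))   # False -> hard 403
print(api_key_allowed(Permission.INVENTORY_UPDATE))  # True -> proceeds to later checks
```

Lowering the route's requirement to INVENTORY_UPDATE moves it off the deny-list, which is exactly the shape of the fix.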
  • Printer file download 500'd on non-ASCII filenames; same crash latent in three sibling endpoints (#1245, reported by 1000Delta) — GET /api/v1/printers/{id}/files/download?path=... raised UnicodeEncodeError: 'latin-1' codec can't encode characters in position … for any path whose filename carried non-ASCII characters (Chinese, Japanese, Arabic, accented Latin), reproducible against P2S firmware on macOS but not target-specific. Cause: the route interpolated the filename straight into Content-Disposition: attachment; filename="..." — Starlette/uvicorn encode response headers as latin-1, so anything outside U+0000..U+00FF crashed at write-time. The same pattern existed in three sibling endpoints reachable with user-controlled non-ASCII input: GET /archives/{id}/qr (uses archive.print_name from 3MF metadata, often non-ASCII), GET /projects/{id}/export (uses project.name — the existing sanitiser at projects.py:1648 uses c.isalnum(), which passes non-ASCII Unicode through, so the crash propagated), and _stream_pdf in labels.py (latent — current callers pass ASCII-only template names, but the same shape would crash if a future caller passed user input). Fix: new helper backend/app/utils/http.py::build_content_disposition(filename, disposition="attachment") returns an RFC 6266-compliant header with both an ASCII-stripped legacy filename="..." fallback and an RFC 5987 filename*=UTF-8''<percent-encoded> parameter — every modern browser (Chrome / Firefox / Safari / Edge) prefers the *= form when present, so the original filename round-trips intact through Save-As; the ASCII fallback covers IE10-era clients. Helper wired in at all four call sites in one PR (per project rule: no deferred follow-ups).
Tests: 20 unit tests in test_http_utils.py pinning ASCII-fallback rules across plain ASCII / Chinese / Japanese / Arabic / French diacritics / .gcode.3mf double-extension / quote-injection / backslash-injection / empty-string and ___.zip edge cases, asserting the helper's output round-trips through latin-1 (the crash condition) for every test input. 6 new integration tests in test_printers_api.py::TestPrintersAPI::test_download_printer_file_non_ascii_filename parametrized over the same character classes (the original 龙泡泡石墩子_p2s_ok.gcode.3mf case from #1245 is included) — each asserts the route returns 200 with an unmangled body, the ASCII fallback in the header matches expectations, and unquote(filename*=) round-trips back to the original Unicode filename. Thanks to 1000Delta for the diagnosis and the proof-of-concept patch on printers.py — the broader audit (three sibling endpoints, helper extraction, latin-1 round-trip assertions) was done on top of that.
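A hedged sketch of the helper's shape — not the project's exact implementation; the fallback's substitution rules (each non-ASCII or injection-prone character becomes "_") are assumptions:

```python
from urllib.parse import quote

def build_content_disposition(filename: str, disposition: str = "attachment") -> str:
    """RFC 6266-style header value: an ASCII-only legacy filename="..."
    fallback plus an RFC 5987 filename*=UTF-8'' parameter that modern
    browsers prefer, so non-ASCII names round-trip through Save-As while
    the header itself stays latin-1 encodable."""
    # Legacy parameter: ASCII-only, with quote/backslash injection neutralised.
    ascii_fallback = "".join(
        ch if (ch.isascii() and ch.isprintable() and ch not in '"\\') else "_"
        for ch in filename
    )
    # Extended parameter: percent-encoded UTF-8 per RFC 5987.
    extended = quote(filename, safe="")
    return f'{disposition}; filename="{ascii_fallback}"; ' \
           f"filename*=UTF-8''{extended}"

header = build_content_disposition("龙泡泡_ok.gcode.3mf")
header.encode("latin-1")  # must not raise -- this was the crash condition
print(header)
```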
