> **Note**
> This is a daily beta build (2026-04-28). It contains the latest fixes and improvements but may have undiscovered issues.

Docker users: update by pulling the new image:

```
docker pull ghcr.io/maziggy/bambuddy:daily
```

or

```
docker pull maziggy/bambuddy:daily
```

**Tip:** Use [Watchtower](https://containrrr.dev/watchtower/) to automatically update when new daily builds are pushed.
## Added
- **Multi-color slicing in the Slice modal, with per-plate filament discovery for unsliced project files** — Initial slice support assumed a single filament profile per slice; multi-color 3MFs were silently truncated to the first slot, producing wrong colours on every non-trivial print. The Slice modal now (1) opens a plate-picker step first when the source is a multi-plate 3MF, (2) renders one filament dropdown per AMS slot the picked plate actually uses, with each dropdown auto-populated against the user's local + standard presets by `(filament_type, filament_colour)` match, and (3) submits the user's picks as an ordered `filament_presets: PresetRef[]` array which is forwarded as repeated `filamentProfile` multipart parts to the slicer sidecar (the CLI joins them with `;` for `--load-filaments`).

  Per-plate filament list source-of-truth chain: for a sliced archive, the modal reads `Metadata/slice_info.config` directly (existing path); for an unsliced project file (where `slice_info.config` is empty until Bambu Studio actually slices), the new `slice_preview` service runs a fast preview-slice via the sidecar's `slice_without_profiles` (the project's embedded settings drive the slice; we throw away the gcode and only parse the resulting slice_info), and the result is cached by `(kind, source_id, plate_id, content_hash)` with LRU eviction at 256 entries — repeat opens of the same plate are instant. If the sidecar isn't reachable, the modal falls back to a heuristic that reads `Metadata/project_settings.config` for the AMS slot config and intersects it with the plate's painted-face data (`paint_color` quadtree leaves on per-object .model files, scanned with a 5% noise threshold to drop single-leaf edit accidents).

  SliceModal-only tier priority is now `local → cloud → standard` (was `cloud → local → standard`): imported profiles win because they carry parsed type/colour metadata in the response, while cloud entries don't (the per-preset detail endpoint rate-limits at ~10/sec per token, and 50+ parallel fetches returned 429 on every request). The unified-listing endpoint's dedup pass now backfills metadata cross-tier — if a cloud entry wins dedup over a same-named local entry, the cloud entry inherits the local's `filament_type` / `filament_colour` so the Slice modal's metadata-aware pre-pick keeps working for users who have presets both cloud-synced and locally imported. Other consumers of `/slicer/presets` (Profiles page, etc.) retain the existing cloud-first dedup.
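The preview-slice cache described above — a per-key lock so concurrent opens of the same plate don't stampede the sidecar, LRU eviction at 256 entries, lock cleanup on eviction, and no poisoning on sidecar failure (an exception leaves nothing cached) — can be sketched roughly like this. All names here are illustrative, not the actual `slice_preview` implementation:

```python
import asyncio
from collections import OrderedDict
from typing import Awaitable, Callable, Hashable

class PreviewCache:
    """LRU cache with a per-key asyncio.Lock (thundering-herd guard).

    The real service keys entries on (kind, source_id, plate_id, content_hash);
    any hashable key works for this sketch.
    """

    def __init__(self, max_entries: int = 256):
        self._max = max_entries
        self._data: "OrderedDict[Hashable, object]" = OrderedDict()
        self._locks: dict[Hashable, asyncio.Lock] = {}

    async def get_or_compute(self, key: Hashable,
                             compute: Callable[[], Awaitable[object]]):
        lock = self._locks.setdefault(key, asyncio.Lock())
        async with lock:                          # second caller waits, then hits cache
            if key in self._data:
                self._data.move_to_end(key)       # refresh LRU position
                return self._data[key]
            value = await compute()               # sidecar failure raises -> nothing cached
            self._data[key] = value
            while len(self._data) > self._max:
                evicted, _ = self._data.popitem(last=False)
                self._locks.pop(evicted, None)    # eviction also drops the key's lock
            return value
```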
  Sidecar (orca-slicer-api fork, `bambuddy/profile-resolver` branch): `/slice` now accepts up to 16 repeated `filamentProfile` parts (was hard-capped at 1); the slicing service materializes each as `filament_N.json` and joins the paths into a single `--load-filaments "a.json;b.json;c.json"` invocation. The `/profiles/bundled` listing was extended with `filament_type` and `filament_colour` per leaf so the bundled tier carries metadata into the modal.

  The sliced-archive card now reflects the actually-used filament list, not the project-wide AMS config: `slice_and_persist_as_archive` previously copied `filament_type` and `filament_color` from the unsliced source archive verbatim, which inherited every project-wide AMS slot (16+ swatches on the card for a 2-color print). The new archive now reads those fields from the sliced output's `slice_info.config` via `ThreeMFParser` (which already gates on `used_g > 0`), falling back to the source archive's values only if parsing failed.

  Backwards compatibility: the `SliceRequest` schema accepts three shapes — legacy `filament_preset_id: int`, source-aware singular `filament_preset: PresetRef`, and multi-color array `filament_presets: list[PresetRef]` — the validator promotes any of them into a populated `filament_presets` list before the route handler runs, so stale browser tabs from before this change keep working unchanged. Permissions: no new endpoint paths added; the preview-slice runs inside `/filament-requirements` (gated on `LIBRARY_READ` / `ARCHIVES_READ`) and the multi-filament dispatch runs inside `POST /slice` (gated on `LIBRARY_UPLOAD`) — no auth surface widened.
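The three-shape promotion can be sketched with a plain function and dataclass standing in for the real pydantic `SliceRequest` validator — field names come from the changelog, but the body below is an illustrative approximation:

```python
from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class PresetRef:
    source: str  # "cloud" | "local" | "standard"
    id: str

def normalize_filament_presets(body: dict[str, Any]) -> list[PresetRef]:
    """Promote any of the three accepted request shapes into one list.

    Precedence (newer shape wins over legacy):
      1. filament_presets: list of PresetRef-like dicts (multi-color)
      2. filament_preset:  single PresetRef-like dict
      3. filament_preset_id: bare legacy int -> local tier
    """
    if body.get("filament_presets"):
        return [PresetRef(p["source"], str(p["id"])) for p in body["filament_presets"]]
    if body.get("filament_preset"):
        p = body["filament_preset"]
        return [PresetRef(p["source"], str(p["id"]))]
    if body.get("filament_preset_id") is not None:
        return [PresetRef("local", str(int(body["filament_preset_id"])))]
    return []
```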
  Tests: 6 schema tests for `SliceRequest` covering the multi-filament list shape and legacy-vs-new precedence; 9 unit tests for `slice_preview` covering the happy path, content-hash invalidation, sidecar-failure no-cache-poison, the concurrent-call thundering-herd guard via per-key `asyncio.Lock`, and LRU eviction with lock cleanup; 15 unit tests for `extract_project_filaments_from_3mf` (5 cases) and `extract_plate_extruder_set_from_3mf` (10 cases, including the 60/40 painted-threshold pin); a multi-filament wire-format test on `slice_with_profiles` pinning that N filament profiles produce N repeated multipart parts in submission order; 22 frontend SliceModal tests covering the plate-picker step, multi-color rendering, metadata-aware pre-pick, manual slot override, archive-vs-library routing, and the new tier order. Localised across all 8 UI languages (English and German fully translated; the six others seeded with English copies pending native translation, per the project's existing flow).

- **Slicer presets now span Cloud, imported, and slicer-bundled tiers, end-to-end** — Initial slicer integration only saw DB-backed local imports, so a user without imported profiles got an empty Slice modal even when their Bambu Cloud account or the slicer sidecar carried perfectly usable presets. The Slice modal now pulls from three tiers in priority order — cloud (the user's own Bambu Cloud presets), local (DB-backed imports), and standard (slicer-bundled stock profiles) — with name-based dedup so a preset that exists in multiple tiers only renders in the highest-priority one (cloud > local > standard), while within-tier order is preserved exactly.

  Listing (`GET /api/v1/slicer/presets`): the cloud branch is per-user with a 5-minute cache keyed on `(user_id, sha256(token)[:16])`, so a logout/login or token rotation auto-invalidates without callback wiring from the cloud-auth routes. The bundled branch is global with a 1-hour cache (the sidecar's read-only filesystem only changes across image rebuilds). `cloud_status` (ok / not_authenticated / expired / unreachable) drives a precise modal banner instead of an unexplained empty list.

  Slicing (`POST /library/files/{id}/slice`, `POST /archives/{id}/slice`): the request body now accepts source-aware `{source, id}` refs per slot (cloud / local / standard) alongside the legacy `*_preset_id` fields for full backwards compatibility — the schema validator normalises bare integer ids into `PresetRef(source='local', id=str(int))` so the dispatcher only deals with one shape. A new `preset_resolver` service fetches the preset content per source: cloud via `BambuCloudService.get_setting_detail` (unwraps the `setting` envelope, falls back to top-level on minor shape variants), local from the DB (existing path), standard via a minimal `{inherits: <name>, from: "system"}` stub that the sidecar's `bambuddy/profile-resolver` branch flattens against `BUNDLED_PROFILES_PATH/<category>/<name>.json` — no preset-content round-trip needed for the standard tier.

  Permissions: the listing route gate matches the slice action itself (`LIBRARY_UPLOAD`), so any user who can slice can populate the dropdowns; the cloud branch has an independent `CLOUD_AUTH` check inside the fetch helper — a user holding `LIBRARY_UPLOAD` but not `CLOUD_AUTH` doesn't see the cloud tier (and can't slice with a cloud preset; returns 403) even if a leftover `User.cloud_token` survived a permission revocation. SliceModal (frontend): grouped `<optgroup>` per tier with localised section headers, default selection follows the cloud > local > standard priority on first load, and a cloud-status banner with three variants (sign-in / expired / unreachable) shows only when the status isn't `ok`.
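The name-based dedup with tier priority, within-tier order preservation, and cross-tier metadata backfill can be sketched like this. It is a stand-alone approximation, not the actual listing-endpoint code; the SliceModal variant would simply swap the rank map to local → cloud → standard:

```python
from dataclasses import dataclass, replace
from typing import Optional

@dataclass
class Preset:
    name: str
    tier: str  # "cloud" | "local" | "standard"
    filament_type: Optional[str] = None
    filament_colour: Optional[str] = None

TIER_RANK = {"cloud": 0, "local": 1, "standard": 2}  # unified-listing priority

def dedup_presets(presets: list[Preset]) -> list[Preset]:
    """Highest tier wins per name; within-tier order is preserved (stable sort);
    a winner missing type/colour metadata inherits it from a same-named loser."""
    best: dict[str, Preset] = {}
    for p in sorted(presets, key=lambda p: TIER_RANK[p.tier]):
        cur = best.get(p.name)
        if cur is None:
            best[p.name] = p
        elif cur.filament_type is None and p.filament_type is not None:
            # cross-tier backfill: e.g. a cloud winner inherits the local
            # entry's parsed filament_type / filament_colour
            best[p.name] = replace(cur, filament_type=p.filament_type,
                                   filament_colour=p.filament_colour)
    return list(best.values())
```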
  Sidecar (orca-slicer-api fork, `bambuddy/profile-resolver` branch): a new `GET /profiles/bundled` walks `BUNDLED_PROFILES_PATH/{machine,process,filament}` and returns instantiable presets only (`instantiation: "true"`), filtering out abstract bases like `fdm_filament_pla` so the dropdowns only offer things a user can actually pick.

  Tests: 17 unit tests for the listing-endpoint helpers (dedup priority + per-slot scoping + order preservation, all four `cloud_status` states, `CLOUD_AUTH` defence-in-depth with token-lookup short-circuit, per-user cache isolation, token-change cache invalidation, sidecar-unreachable fallback); 11 unit tests for the source-aware resolver (standard inherits-stub shape, local DB lookup with `preset_type` validation, cloud envelope unwrapping with both standard and top-level shapes, cloud auth error → 401, cloud `CLOUD_AUTH` defence, slot dispatch routing); 6 schema tests for `SliceRequest` covering legacy bare-int normalisation, new source-aware refs, and explicit-ref-wins-over-legacy precedence; 12 frontend tests for SliceModal covering tier-priority auto-selection, `<optgroup>` grouping, fallback when higher tiers are empty, source-aware payload on submit, manual override across tiers, archive-vs-library routing, error display, and all three banner variants. All 3391 backend + 1531 frontend tests pass.

- **Server-side slicing via OrcaSlicer / Bambu Studio sidecar** — Bambuddy can now slice models without a desktop slicer installed. A new optional `slicer-api/` Compose stack runs HTTP wrappers around the OrcaSlicer and/or Bambu Studio CLI; Bambuddy's File Manager and Archives pages get a Slice button that picks a printer / process / filament preset and dispatches a background slice job whose result lands as a new `.gcode.3mf` in the same library folder (or as a new archive when the source was an archive). Settings → Workflow gets a new Slicer card: pick the preferred slicer, toggle "Use Slicer API" on, and paste the sidecar URL — Slice buttons across File Manager, Archives, and MakerWorld then route through the API instead of the OS slicer URI scheme.

  Status updates come from a global `SliceJobTrackerProvider` that polls `/api/v1/slice-jobs/{id}` and surfaces a single toast per job (queued → running → completed / failed), plus auto-refreshes the file or archive list on success — slicing one file no longer pins the modal. Server side, a fresh in-memory dispatcher (`backend/app/services/slice_dispatch.py`) runs jobs as `asyncio.create_task`s with a 30-minute retention sweep, and the routes (`POST /library/files/{id}/slice`, `POST /archives/{id}/slice`) return 202 immediately with `{job_id, status, status_url}` instead of holding the request open through a multi-minute slice.

  The CLI bridge (`backend/app/services/slicer_api.py`) distinguishes 4xx (`SlicerInputError`), 5xx (`SlicerApiServerError`), and connection failures (`SlicerApiUnavailableError`) so 3MF inputs can transparently retry with embedded settings when the sidecar's `--load-settings` path segfaults on the input — empirically required for OrcaSlicer 2.3.x + H2D, and signalled to the UI via `used_embedded_settings: true`. Sliced output is forced to `.gcode.3mf` so File Manager picks up the embedded thumbnail, the `print_name` is dropped from saved metadata so the displayed filename matches what the user picked, and `file_type="gcode"` paints the badge blue.
  The polling endpoint `GET /api/v1/slice-jobs/{id}` is gated on `LIBRARY_READ`, since job IDs are sequential and the body leaks source filenames plus resulting library/archive IDs. The sidecar itself builds from a fork of AFKFelix/orca-slicer-api (`maziggy/orca-slicer-api`, `bambuddy/profile-resolver` branch) which adds the `inherits:` chain resolver, the `from: "User"` → `"system"` rewrite, the `#clone`-prefix strip, and the sentinel-value strip empirically required to slice real OrcaSlicer GUI exports without segfaulting the CLI; the Compose file uses Docker's git build context so users don't have to clone it manually. Default ports are 3003 (orca) and 3001 (bambu-studio) — 3000/3002 are skipped because Bambuddy's virtual-printer feature owns them.

  10 backend integration tests cover sync validation (404/400), happy-path enqueue, preset error → failed job, sidecar unreachable, the 3MF embedded-settings fallback, STL no-fallback, and the strip-before-forward path; 5 new frontend tests for the SliceModal cover preset gating, library + archive enqueue paths, error display, and preset-load failure. New i18n keys under `slicer.*` and `settings.slicer.*` across all 8 locales (English fully translated; the seven other locales seeded with English copies pending native translation, matching the project's existing flow for newly added user-facing features). Slicer integration is opt-in: if "Use Slicer API" stays off, the existing "open in desktop slicer via URI" flow remains the default, unchanged.

- **Per-spool category + low-stock threshold override (#729 — minimal version)** — Two new fields on the spool form: a free-text Category (with autocomplete from categories already in use, so users naturally re-use "Production" instead of accidentally typing "production" / "prod") and a per-spool Low-stock threshold (%) override that defaults to the global setting if left blank.

  This powers the "I want to differentiate critical spools from prototype spools and alert at different thresholds" use case from the issue without taking on the full multi-tag taxonomy + auto-apply rules + per-tag alert system the ticket originally proposed (which would have been ~5x the work for the same underlying value). The Inventory page gains a Category filter chip — it only renders once at least one spool carries a category, otherwise it stays hidden so the chip row remains uncluttered. Low-stock counts in the stat card and the "Low Stock" filter both honour the per-spool override (so a "Production" spool with override = 90% counts as low-stock at 80% remaining even when the global threshold is 20%). 50-char cap on category, 1–99% range on threshold (0 and 100 are both rejected as footguns). 9 new backend schema-validation tests cover the field defaults, partial-update behaviour, and range/length rejection; 2 new frontend tests confirm that the per-spool threshold pulls in spools the global threshold misses, and that the category filter chip stays hidden until at least one spool has a category. Localised across all 8 UI languages with full translations. The full multi-tag taxonomy from the original issue isn't going forward; if demand grows past the current 3 thumbs-up, the design can layer on top of these fields without breakage.
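The override-aware low-stock check reduces to a few lines. This sketch assumes a `remaining < threshold` comparison, matching the 90%-override example above; names are illustrative, not the actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Spool:
    remaining_pct: float
    low_stock_threshold_pct: Optional[int] = None  # 1-99; None -> use global setting

def validate_threshold(value: Optional[int]) -> Optional[int]:
    """0 and 100 are rejected as footguns (always-low / never-low)."""
    if value is not None and not 1 <= value <= 99:
        raise ValueError("threshold must be 1-99")
    return value

def is_low_stock(spool: Spool, global_threshold_pct: int) -> bool:
    """Per-spool override wins; blank falls back to the global threshold."""
    threshold = spool.low_stock_threshold_pct
    if threshold is None:
        threshold = global_threshold_pct
    return spool.remaining_pct < threshold
```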
- **Per-event ntfy priority (#990)** — ntfy supports a `Priority` header (1=min, 2=low, 3=default, 4=high, 5=urgent) that drives sound, visibility, and push behaviour on the receiving device, but the existing notifier sent every event at the server default — so a "50% complete" ping looked identical to "print failed" or "printer offline". The Add/Edit Notification modal now renders a per-event "ntfy Priority" section (visible only when the provider type is `ntfy`) listing each enabled event with its own Min / Low / Default / High / Urgent dropdown; selections persist into the provider's `config.event_priorities` map, and the backend emits a matching `Priority: N` header on the ntfy POST/PUT request (including the image-attachment path). Events not explicitly mapped, malformed values, and out-of-range values (0, 6, "abc", null) all fall through to ntfy's server-side default — there is no clamping, so a misconfigured value never silently sends at the wrong urgency. Test sends (no `event_type` context) deliberately omit the header so the test path cannot accidentally page someone at urgent priority. Existing providers without `event_priorities` are untouched on upgrade. Localised across all 8 UI languages with full translations (en/de/fr/it/ja/pt-BR/zh-CN/zh-TW). 6 new backend tests cover: header set on a mapped event, omitted on an unmapped event, omitted when no `event_priorities` is configured, omitted when `event_type` is missing, ignored for out-of-range / non-numeric values, and propagated through the image-attachment PUT path.

- **Long-lived camera-stream tokens for HA / Frigate / kiosks (#1108)** — The existing `?token=…` camera-stream tokens expire after 60 minutes, which forced home-automation integrations (Home Assistant cards, Frigate, hallway kiosks) to either refresh on a cron or run with auth disabled. A new self-service "Camera API Tokens" panel under Settings → API Keys (also reachable via the existing settings search box — type "camera token" / "frigate" / "home assistant") lets any user holding `camera:view` mint a long-lived token they can paste once and forget. Revoke uses Bambuddy's standard styled confirmation modal (no `window.confirm` browser default — same pattern as the rest of the app).

  Tokens are scoped strictly to camera streaming (no privilege-escalation surface — no other endpoint accepts them), formatted `bblt_<8-char-prefix>_<32-char-secret>`, and stored as a pbkdf2 hash so even a DB dump can't replay them; the plaintext is shown to the user exactly once in a copy-to-clipboard modal (with a `document.execCommand('copy')` fallback for plain-HTTP LAN deployments where `navigator.clipboard` is gated by the secure-context requirement). Hard 365-day max — the issue's `expire_in: 0` (never) is explicitly rejected because an irrevocable infinite token is a footgun by design; the UI defaults to 90 days, and the cap is enforced both client-side (input clamp) and server-side (validation guard). Owners can revoke their own tokens; admins additionally see an "All users" view for leak triage and can revoke anyone's. The `/camera/stream?token=…` auth dependency tries the existing 60-min ephemeral row first (no behaviour change for the common browser case) and falls through to the long-lived path, so the SPA's existing camera flow is unaffected. An indexed `lookup_prefix` keeps verify O(1) per token even on large installs — pbkdf2 only runs against the one candidate row that matches the prefix, never the whole table.
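The mint/verify scheme — `bblt_<prefix>_<secret>` format, pbkdf2 at rest, prefix-indexed lookup so the expensive hash runs against at most one row — can be sketched with an in-memory dict standing in for the token table. Iteration count, salt handling, and function names here are illustrative:

```python
import hashlib
import hmac
import secrets

ITERATIONS = 100_000  # illustrative; pick per current pbkdf2 guidance

def mint_token() -> tuple[str, str, bytes, bytes]:
    """Return (plaintext, lookup_prefix, salt, hash).

    Plaintext is shown to the user exactly once; only the prefix, salt,
    and hash are persisted, so a DB dump can't replay the token.
    """
    prefix = secrets.token_hex(4)        # 8-char prefix, indexed for O(1) lookup
    secret = secrets.token_hex(16)       # 32-char secret
    plaintext = f"bblt_{prefix}_{secret}"
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, ITERATIONS)
    return plaintext, prefix, salt, digest

def verify(plaintext: str, rows: dict[str, tuple[bytes, bytes]]) -> bool:
    """rows maps lookup_prefix -> (salt, hash); pbkdf2 runs against one candidate."""
    try:
        tag, prefix, secret = plaintext.split("_", 2)
    except ValueError:
        return False                     # garbage: wrong shape
    if tag != "bblt" or prefix not in rows:
        return False
    salt, digest = rows[prefix]
    candidate = hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```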
  A new `long_lived_tokens` table (separate from `auth_ephemeral_tokens` because the lifecycle is different — user-owned, named, revocable, hashed; and separate from `api_keys` because that one is for global webhooks with no user FK and a different permission shape). 15 unit tests cover create validation / scope / expiry rules, the verify happy / garbage / expired / revoked / scope-mismatch / prefix-collision paths, list-by-user vs list-all, and idempotent revoke; 14 integration tests cover the create-once-then-listing-hides-plaintext contract, the 365-day cap, the auth gate, owner-vs-admin revoke ownership rules, and that the long-lived token verifies through the same camera-stream auth dependency the route uses (and that revoke immediately invalidates it). 6 frontend tests cover list render, empty state, the create-then-shown-once flow, days-input clamp, revoke-with-confirm, and revoke-cancelled paths. New `cameraTokens.*` keys across all 8 locales (English fully translated; the seven other locales seeded with English copies pending native translation, matching the project's existing flow for newly added user-facing features).

- **Tailscale integration for virtual printers (builds on #1070 by legend813)** — An opt-in per-VP Tailscale toggle brings each virtual printer into the tailnet, so it's reachable from any tailnet device over a private WireGuard tunnel without port forwarding or public exposure. When enabled, Bambuddy provisions a Let's Encrypt cert for the VP's MagicDNS hostname via `tailscale cert`, and the MQTT/FTPS listeners serve it. A slicer-side caveat worth knowing up front: both Bambu Studio and OrcaSlicer only accept IP addresses (not hostnames) in the Add Printer dialog, so the LE cert's hostname validation doesn't apply — users still need the Bambuddy CA imported into the slicer, same as LAN mode. The practical benefit here is the private tunnel (remote access without DDNS / port forwarding / public exposure), not cert-import elimination.

  The toggle defaults to off, so users without Tailscale don't see cert-provisioning attempts or log noise. When a user flips the toggle on a host without a working Tailscale binary, the backend returns `409 tailscale_not_available` and the UI reverts and surfaces a specific toast pointing at the setup steps (install Tailscale → `tailscale up` → `tailscale set --operator=<user>` → enable HTTPS in the tailnet admin console). The Docker image now ships the `tailscale` CLI pre-installed; users wire it up by uncommenting the `/var/run/tailscale/tailscaled.sock` volume mount in `docker-compose.yml`. The MagicDNS hostname is surfaced on the VP card with a copy-to-clipboard button (modern `navigator.clipboard` in secure contexts, `document.execCommand` fallback for plain-HTTP contexts, with textarea cleanup in `finally`). Cert renewal runs daily in-process and restarts only the affected VP's TLS listeners. New i18n keys `virtualPrinter.tailscaleDisabled.{title,description}` + `virtualPrinter.toast.{tailscaleNotAvailable,copyFailed}` across all 8 locales with full translations. 3 new backend integration tests for the 409 guard, 2 unit tests for the `_cancel_restart_task` self-await guard, 4 unit tests for the settings-dedupe migration, and 3 new frontend tests for the clipboard fallback path. Thanks to legend813 for the original toggle PR this was built on.
- **Library Trash Bin + Admin Bulk Purge + Auto-Purge (#1008)** — Library files now move to a trash bin on delete instead of being hard-deleted from disk, with a configurable retention window (default 30 days) before a background sweeper permanently removes them. Admins get a new "Purge old" action on the File Manager that shows a live preview of count + total size before moving every file older than N days (with an opt-in toggle for never-printed files, on by default) into the trash in one shot. A new Auto-purge setting in Settings → File Manager runs the same purge automatically on a 24-hour cadence when enabled — files still go to Trash first, so the retention window remains the safety net; it's default-off so existing installs don't get surprised. Both the per-user delete flow and the admin bulk purge go through the same trash — regular users see and manage their own trashed files; admins see everyone's. External (linked) files bypass trash and keep the original hard-delete behaviour, since their bytes aren't under Bambuddy's control.

  A new `library:purge` permission gates the admin operations; retention is adjustable inline on the Trash page for admins. Adds a nullable `deleted_at` column on `library_files` with an index (dialect-aware migration: `DATETIME` on SQLite, `TIMESTAMP` on PostgreSQL, since raw `DATETIME` is SQLite-only syntax); every `LibraryFile` query site now routes through a new `LibraryFile.active()` classmethod so trashed rows can't leak into listings, print dispatch, MakerWorld dedupe, or stats. 17 new backend integration tests + 8 new frontend component/page tests; localised across all 8 UI languages. Thanks to cadtoolbox for the proposal and the follow-up answers that tightened the spec.

- **Archive Auto-Purge (#1008 follow-up)** — Settings → Archives now has an auto-purge toggle plus a "Purge archives now" action on the Archives page header (next to Upload 3MF, mirroring File Manager's placement) that hard-deletes print archives not printed within a configurable window (default 365 days, min 7 days, max 10 years), with the same live-preview modal as the library purge. Reprinting an archive reuses the row and updates its `completed_at`, so the purge honours the most recent print completion — a two-year-old archive you reprinted yesterday is not eligible for deletion. Unlike the library trash, archives are hard-deleted: print history is a decaying timeline, so there is no trash-bin intermediate; download or favourite anything you want to keep first. The sweeper runs on the same 15-minute scheduler as the library trash but throttles actual purge runs to once per 24 h, so a tight tick cadence doesn't churn the DB. Each purged archive goes through the existing safety-checked `ArchiveService.delete_archive` path, so the 3MF, thumbnail, timelapse, source 3MF, F3D, and photo folder are all cleaned up together with the DB row. Gated by a new dedicated `archives:purge` permission (Administrators group by default, backfilled on upgrade); 9 new backend integration tests; localised across all 8 UI languages.

- **MakerWorld Integration** — Paste any `makerworld.com/models/…` URL on the new MakerWorld sidebar page to pull the full model metadata, plate list, creator/license info, and per-plate images, then one-click Save or Save & Slice in Bambu Studio / OrcaSlicer per plate. Closes the last workflow gap for LAN-only users who still had to keep the Bambu Handy app installed solely to send MakerWorld models to their printers. Reuses the existing Bambu Cloud login token for download authentication — no separate OAuth flow, no companion browser extension, no cookie paste. `LibraryFile` now tracks `source_type` + `source_url`, so re-importing the same plate dedupes to the existing library entry. Search / browse-catalogue is intentionally out of scope because MakerWorld's public search endpoint isn't reachable from a server-originated request; the URL-paste flow covers the actual discovery pattern (Reddit / YouTube / shared links).
  Endpoint route (non-obvious, ~1 day of reverse engineering) — Pr0zak/YASTL#51 documented that `makerworld.com`-hosted design-service endpoints are cookie-gated (the Cloudflare WAF serves a generic "Please log in to download models" to any non-browser bearer request), but the same backend is exposed unblocked at `api.bambulab.com`. The working path turned out to be `GET https://api.bambulab.com/v1/iot-service/api/user/profile/{profileId}?model_id={alphanumericModelId}` with `Authorization: Bearer <cloud_token>` — a different service (iot-service, not design-service) and a different host, accepting the same bearer the user already signs in with. The response carries a 5-minute-TTL presigned S3 URL (`s3.us-west-2.amazonaws.com/…?at=…&exp=…&key=…`). The `model_id` query param is the alphanumeric identifier (e.g. `US2bb73b106683e5`) that only appears in the design response body, not the integer `designId` from the `/models/{N}` URL — so the import flow fetches design metadata first, reads `modelId`, then calls iot-service. S3 presigned URLs must be fetched with `urllib.request` (not httpx / curl_cffi), because the signature is computed over the exact query-string bytes and any normalising encoder breaks it with `SignatureDoesNotMatch` 400s (YASTL#52 describes the same issue). Every other published reverse-engineering project we evaluated (schwarztim/bambu-mcp, kata-kas/MMP) solved the gating by shipping "paste your browser cookie" flows; reusing the existing Bambu Cloud bearer is a substantially cleaner UX and the only fully-automated path.
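The redirect-refusing `urllib.request` opener for the presigned fetch can be sketched as follows — `urllib` passes the presigned query string through byte-for-byte, which is exactly why it's used here. The real fetch additionally sets headers, a size cap, and timeouts:

```python
import urllib.request

class NoRedirect(urllib.request.HTTPRedirectHandler):
    """Refuse all redirects: the host allowlist is only checked on the
    initial URL, so following a 302 (e.g. to 169.254.169.254) would
    bypass it. Returning None makes urllib raise HTTPError instead."""

    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

def make_presigned_opener() -> urllib.request.OpenerDirector:
    # build_opener replaces the default HTTPRedirectHandler with ours;
    # the presigned query string is never re-encoded, so the S3 signature
    # stays valid.
    return urllib.request.build_opener(NoRedirect())
```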
  UI and UX features — a per-plate picker with inline Save / Save & Slice in Bambu Studio / OrcaSlicer buttons; "Import all" to batch-import every plate sequentially; a folder picker on the page (default: auto-created top-level "MakerWorld" folder); an image-gallery lightbox per plate (keyboard ←/→/Esc); a two-column sticky layout with a Recent imports sidebar (last 10 MakerWorld imports); per-plate inline follow-up actions after import (View in File Manager / Open in Bambu Studio / Open in OrcaSlicer / Remove from library); per-plate delete via the standard Bambuddy confirm modal (no browser `confirm()`); an elapsed-time + phase label ("Resolving … 3 s", "Downloading … 18 s") during the synchronous import POST so users see progress on large 3MFs; URL-change detection that drops the preview when the pasted URL diverges from the resolved one (fixes a class of "I thought I was importing model B but got A" dedupe confusion); rich per-phase error toasts; and the slicer-open path reuses Bambuddy's existing token-embedded library download (`/library/files/{id}/dl/{token}/(unknown)`) so the handoff works even with auth enabled. Localised across all eight UI languages.
  Security hardening — the MakerWorld description HTML is user-authored and goes through `DOMPurify.sanitize()` before `dangerouslySetInnerHTML`; `<img>` tags inside summaries are rewritten to route through Bambuddy's `/makerworld/thumbnail` proxy so the SPA's `img-src 'self' data: blob:` CSP stays unwidened. The thumbnail proxy now uses `follow_redirects=False` (the host-allowlist guarantee is only meaningful on the initial URL — a 302 to `169.254.169.254` would otherwise bypass it). The 3MF CDN fetch sends only `User-Agent` — the Bambu Cloud bearer is never forwarded to the CDN. The S3 presigned-URL fetch uses a `urllib.request` opener with a no-op `HTTPRedirectHandler` for the same reason. Filenames from MakerWorld responses are `os.path.basename`'d before persisting, so a malicious `name: "../../evil.3mf"` cannot surface a path-traversal string into the DB / UI (on-disk storage uses a UUID filename regardless). The new routes respect the `MAKERWORLD_VIEW` (resolve / recent-imports / status) and `MAKERWORLD_IMPORT` (import) permissions. The SSRF guard on downloads rejects any host that isn't `makerworld.bblmw.com`, `public-cdn.bblmw.com`, or a `.amazonaws.com` subdomain.
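The SSRF host allowlist and the filename sanitisation can be sketched like this — the allowlist entries come from the text above; function names are illustrative:

```python
import os
from urllib.parse import urlparse

ALLOWED_HOSTS = {"makerworld.bblmw.com", "public-cdn.bblmw.com"}

def is_allowed_download_host(url: str) -> bool:
    """Only the two CDN hosts and *.amazonaws.com subdomains may be fetched."""
    host = (urlparse(url).hostname or "").lower()
    return host in ALLOWED_HOSTS or host.endswith(".amazonaws.com")

def safe_filename(name: str) -> str:
    """'../../evil.3mf' -> 'evil.3mf'; on-disk storage uses a UUID anyway."""
    return os.path.basename(name.replace("\\", "/"))
```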
Test coverage — 46 unit tests forservices/makerworld.py(header shape, API base,get_design/get_design_instances/get_profile,get_profile_download200/401/403/404/no-token,download_3mfSSRF rejection of 4 hostile hosts, S3 path delegation, CDN path with minimal headers, size-cap,_download_s3_urllibhappy/redirect/size/network paths,fetch_thumbnailwithfollow_redirects=False); 19 route tests (/resolve,/importwith folder autocreation + explicit folder + dedupe + filename basename + profile_id response,/recent-importswith empty-list / ordering / pydantic shape / limit clamping,_canonical_urlunit); 12 frontend tests (button labels, slicer-name interpolation, URL-change detection, inline post-import actions, Recent imports rendering, DOMPurify<script>strip). - SpoolBuddy kiosk no longer shows main-app toasts — the global
ToastProvider(inApp.tsx) wraps both the main app routes and the SpoolBuddy kiosk routes, so the background-dispatch progress overlay (job percent, completion summaries, etc.) was rendering on the kiosk display alongside any in-flight prints. Added asetViewportSuppressedsetter on the toast context;SpoolBuddyLayoutflips it on mount and restores on unmount via a singleuseEffect. The state machine, dispatch-event subscription, and other tabs' toast UIs are untouched — only the visible viewport is hidden while a kiosk display is active. Trade-off accepted: kiosk-local one-shot toasts (plate-clear confirmation, quick-add errors) are also hidden, but the kiosk's UI already provides direct visual feedback (the plate-ready row vanishes on click; quick-add failures surface in the modal). UpdatedSpoolBuddyLayout.test.tsxto wrap inToastProviderand expand its lucide-react mock with the icons ToastContext imports. 2 new regression tests:ToastContext.test.tsx::viewport suppressionpins the suppressed-viewporthiddenclass toggle without affecting the underlying state, andSpoolBuddyLayout.test.tsx::suppresses the global toast viewport while mountedconfirms the kiosk layout flips suppression at mount and cleanup. - Background-dispatch toast no longer reads as "frozen at 100%" for fast uploads — small files (a few hundred KB to a printer over LAN) finish FTP upload in <500ms, so the progress bar would jump to 100% and then sit there for ~1-2s while the printer's MQTT confirmation landed and the success toast replaced the dispatch toast. Now, when the byte-count reaches the total but the job status is still
processing(i.e. upload done, awaiting printer ack), the byte-count line is replaced with "Awaiting printer..." and the progress bar getsanimate-pulseto indicate continued activity. Translated across all 8 locales (backgroundDispatch.awaitingPrinter). 2 new tests inToastContext.test.tsx::background dispatch — upload-done UXcover the threshold (uploadProgressPct >= 99.9withprocessingstatus switches to "Awaiting printer..." + pulse) and the in-flight case (50.0%keeps the byte/percent counter, no pulse). - SpoolBuddy kiosk: "Plate ready" pills under the printer status badges — when any printer reports
awaiting_plate_clear=true, a small amber pill appears in the dashboard's left column, sized to match the existing online/offline printer badges. Each pill shows the printer name plus a "Clear" action; tapping it callsPOST /printers/{id}/clear-plateand optimistically removes the pill from the UI before the WebSocket round-trip lands. Multi-printer setups (e.g. four H2Ds finishing at once) wrap inline viaflex-wrapso the dashboard stays compact instead of pushing everything else off-screen. The kiosk's API key already passes theprinters:clear_platepermission gate via the existing_APIKEY_DENIED_PERMISSIONSdenylist (the permission is intentionally not denied — clear-plate is an inventory-flow operation, not an admin one), so no auth wiring changes were needed. Translated across all 8 UI languages (en/de/fr/it/ja/pt-BR/zh-CN/zh-TW). 5 new regression tests inSpoolBuddyDashboard.test.tsx::plate-clear rowcover: row hidden when no printer is pending, mixed pending/non-pending printers (only the pending one gets a pill), title attr + pill text content + Clear label all rendered, clicking callsapi.clearPlate(printerId), the optimistic cache write makes the row vanish without waiting for a refetch, and three concurrent pending printers wrap inline in the sameflex-wrapcontainer. The mockuseTranslationwas upgraded to support{{var}}interpolation so future tests can assert on rendered i18n strings with arguments. - Per-request trace ID column on every log line, plumbed through HTTP access log + application logs + response headers — Builds on the new uvicorn-access-log-into-bambuddy.log change below: the access line tells you who called an endpoint, but until now there was no way to tie that line to the application records emitted on the server side while handling that request. A new FastAPI middleware (
`trace_id_middleware` in `main.py`, sourced from `backend.app.core.trace`) stamps each request with a fresh 8-char hex ID (or honours a sane inbound `X-Trace-Id` header for cross-system correlation), stores it in a `ContextVar` so any code in the request's call stack can read it, echoes it on the response as `X-Trace-Id`, and a new `TraceIDFilter` injects it into every `LogRecord` so the format string `[%(trace_id)s]` resolves to the right ID for the right request. ContextVars (rather than `request.state`) are the right plumbing here because asyncio copies the current context into every `asyncio.create_task`, so background work spawned from inside a request inherits the trace ID without explicit threading; the logging filter has no access to the FastAPI request object regardless. Records emitted outside any request scope (startup, MQTT callbacks, scheduler) get a stable placeholder so the column stays visually aligned and missing values are obvious in `grep`. Inbound `X-Trace-Id` is hard-validated against a strict whitelist (`[A-Za-z0-9_-]+`, max 64 chars) before being honoured — a hostile or buggy caller cannot smuggle log-injection payloads (newlines, control chars, megabyte blobs) into `bambuddy.log` via the trace-ID column; values that fail the gate are silently replaced with a freshly minted server-side ID rather than failing the request. The middleware is decorated AFTER `auth_middleware` on purpose: Starlette stacks `app.middleware` decorators LIFO, so the last-decorated runs first inbound, making the trace stamp the OUTERMOST layer — auth log lines and every record emitted on the way down to and back from the route handler all carry the same ID. Output now looks like `2026-04-26 09:51:39,152 INFO [uvicorn.access] [a4f3b1e7] 192.168.1.42:54812 - "POST /api/v1/printers/1/print/stop HTTP/1.1" 200` paired with the route handler's `2026-04-26 09:51:39,158 INFO [bambu_mqtt] [a4f3b1e7] [SERIAL] Sent stop print command` — one `grep a4f3b1e7` away from the full causality chain.
30 new tests across `tests/unit/test_trace.py` (placeholder when no request scope, filter copies ContextVar value onto records, ID propagates into spawned tasks via asyncio context copy, concurrent requests don't leak IDs into each other, generator produces unique hex IDs, hostile payloads rejected by validator, max-length boundary, dash/underscore variants accepted) plus `tests/integration/test_trace_middleware.py` (X-Trace-Id header echoed on response, body and header IDs match, each request gets a unique ID, generator format stays short hex, safe inbound IDs honoured, hostile inbound IDs replaced, overlong inbound IDs replaced, ContextVar reset cleanly after request).
Changed
- AMS slot "Assign to inventory spool" picker now lists every spool, including RFID-tagged Bambu Lab ones (#1133) — The picker that opens from
`<FilamentHoverCard>` / SpoolBuddy's slot-action sheet had two stacked filters that together blocked a real workflow: (1) `AssignSpoolModal` only listed spools whose `tag_uid` AND `tray_uuid` were both null, hiding any Bambu Lab spool that had been auto-created from RFID or scanned via SpoolBuddy NFC; (2) `FilamentHoverCard` rendered its inventory section (assign + unassign affordances) only when the slot's vendor was not `Bambu Lab`, so even if you fixed the picker, the button to open it wasn't visible on a BL slot. The use case both filters blocked: a user who has a Bambu Lab spool sitting in their inventory but doesn't want to scan it via SpoolBuddy NFC each time and just wants to pick it from the list. Both gates are gone now: the modal lists every spool that isn't already taken by a different (printer / `ams_id` / `tray_id`) tuple, and the hover-card inventory section renders for every vendor, including Bambu Lab. The AMS-vs-external-slot distinction in the modal also collapsed — external slots (`amsId` 254/255) used to be the only path that allowed picking a tagged spool, and that special case is now redundant. Empty slots (`<EmptySlotHoverCard>` in Bambuddy, `slotActionPicker.tray === null` in SpoolBuddy) lost their assign affordance entirely: a physically empty slot has no spool to attach an inventory record to, and offering the action there only led to users assigning the wrong spool to a slot the printer hadn't actually loaded yet — assignment now requires a loaded slot. The i18n key `inventory.noManualSpools` (whose copy talked specifically about "manually added spools") was renamed to `inventory.noAvailableSpools` with new copy ("No spools available. Add a spool to your inventory or unassign one from another slot first.") since the empty-state premise changed; localised across all 8 languages with full translations.
5 net-new frontend tests in `__tests__/components/FilamentHoverCard.test.tsx` (assign/unassign buttons render for `vendor: 'Bambu Lab'`, non-BL vendors unchanged, EmptySlotHoverCard renders no assign affordance, configure button still works on empty slots), plus the existing `AssignSpoolModal.test.tsx` "filters out BL spools" expectation was inverted to match the new contract, and the empty-state test was reworked to exercise the only remaining trigger (every spool taken by another slot).
- Inventory: "Delete Tag" button renamed to "Clear RFID Tag" (#729 follow-up) — The reporter mistook the button for a taxonomy-tag delete (it actually clears the RFID tag UID/UUID off the spool record so the row can be re-attached to a different physical spool). Renaming it to "Clear RFID Tag", and the success toast to "RFID tag cleared", removes the ambiguity. No behaviour change. Localised across all 8 UI languages with full translations.
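The new availability rule from the assign-spool entry above ("every spool that isn't already taken by a different printer / ams_id / tray_id tuple") reduces to a single predicate. A hedged sketch with hypothetical names — `Spool` and `pickable_spools` are illustrative stand-ins, not Bambuddy's actual models:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

SlotKey = Tuple[int, int, int]  # (printer_id, ams_id, tray_id)


@dataclass
class Spool:
    id: int
    name: str
    assigned_slot: Optional[SlotKey] = None  # None = not loaded in any slot


def pickable_spools(spools: list[Spool], target: SlotKey) -> list[Spool]:
    """New contract: a spool is pickable unless a *different* slot already
    holds it — tagged Bambu Lab spools included. (The old filter additionally
    required tag_uid/tray_uuid to be null, which hid RFID-created spools.)"""
    return [
        s for s in spools
        if s.assigned_slot is None or s.assigned_slot == target
    ]
```

Note that a spool already assigned to the target slot stays listed, so re-assigning in place is a no-op rather than an error.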
- Nozzle icon on the dual-nozzle status card (#1115) — the dual-nozzle active-extruder card on the printer status bar was the only card in that row without a theme icon (the Nozzle/Bed/Chamber temperature cards all carry a thermometer icon), which left the row looking visually uneven on H2D / H2S / H2C. Adds a small schematic nozzle icon (filament body + heater block + tip) above the L/R diameter labels, styled in amber-400 to match the card's active-extruder accent. SVG design contributed by m4rtini2.
- Slice tracker no longer shows the "embedded settings used" warning toast —
`SliceJobTrackerContext` was emitting a yellow warning toast on every completed slice whose result carried `used_embedded_settings: true` (the auto-fallback path that fires when the sidecar's `--load-settings` triplet rejected the input). For 3MF inputs that fallback fires on essentially every slice in production (the BambuStudio CLI segfaults silently on `--load-settings` over 3MF, even with the broader strip applied — verified end-to-end with the new sidecar stderr capture), so the toast accompanied nearly every completed slice and added noise without a useful action. The `used_embedded_settings` flag still lands on `SliceResponse`/`SliceArchiveResponse` for tests + observability (`test_library_slice_api.py:347` continues to pin it); only the user-facing toast goes. `slice.fallbackUsedEmbedded` removed from all 8 locale files in the same change.
- Settings page: permission-gated instead of admin-only — the Settings sidebar entry has always been visible to any user holding
`settings:read`, but the route guard required the admin role, so a non-admin with `settings:read` would see the entry, click it, and get silently redirected back to the dashboard. The route guard now matches the sidebar: any user with `settings:read` can open the page, and the individual tabs / cards continue to enforce their own per-feature permissions (`users:read`, `groups:update`, `oidc:*`, etc. — many of them admin-only, some not). Group editor routes moved to permission-based guards too (`groups:create` for `/groups/new`, `groups:update` for `/groups/:id/edit`), so permission delegation works end-to-end. Admins retain full access since admins implicitly hold every permission.
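The guard change above amounts to swapping a role check for a permission check, with admin as an implicit superset. A minimal sketch under stated assumptions — the function names are hypothetical and the real guard lives in the frontend router, not in a backend helper like this:

```python
def has_permission(user: dict, permission: str) -> bool:
    """Admins implicitly hold every permission; everyone else needs an
    explicit grant in their permission set."""
    return user.get("role") == "admin" or permission in user.get("permissions", set())


def can_open_settings(user: dict) -> bool:
    # Old guard: user["role"] == "admin" — but the sidebar entry was already
    # keyed on settings:read, so a non-admin with the permission saw the entry
    # and then got silently redirected. New guard matches the sidebar:
    return has_permission(user, "settings:read")
```

The same shape applies to the group editor routes: guard on `groups:create` / `groups:update` rather than on role.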
Fixed
- H2D Pro multi-plate dispatch double-/triple-fire (#1157) — Scheduling 3 plates of a multi-plate file to the same H2D Pro caused the scheduler to fire all three
`project_file` commands within ~60 seconds, even though the printer hadn't transitioned out of `FINISH` for the first one yet. The H2D Pro can sit at `FINISH` for 80–210 s after accepting `project_file` before the `gcode_state` flips to `PREPARE`, and during that window the existing DB `busy_printers` seed (querying queue items in `printing` status) was empirically missing the in-flight item — observed in support logs as items 139/140/141 all dispatching with `status='printing'` yet only the third actually triggering a state transition. User-visible symptoms: layer count flapping, all queued plates showing as printing simultaneously, MQTT disconnect storms (33 in a single 5-minute window), eventual print failure. Root-cause fix is a defensive in-memory dispatch hold layer in `print_scheduler.py`: when `_start_print` succeeds we record `(printer_id, dispatched_at, pre_state, pre_subtask_id)`, and the next `check_queue` tick adds that printer to `busy_printers` until either (a) the watchdog observes a state/subtask transition (success path — release immediately past a 60 s minimum cooldown), or (b) a 180 s hard timeout expires (escape hatch for lost MQTT sessions). The minimum cooldown also prevents a spurious double-dispatch if the printer pulses through PREPARE→RUNNING→PREPARE in the first second after acceptance. The hold is purely additive — it sits alongside the existing seed query and `_is_printer_idle` checks, doesn't depend on DB row visibility, and doesn't depend on `on_print_complete` firing correctly. Per-printer isolation: a hold on printer A never blocks printer B. Edge cases covered by 12 new unit tests (`test_scheduler_dispatch_hold.py`): no-pre-state fallback (printer was offline at dispatch time), status-unavailable keeps hold (printer disconnected post-dispatch — don't release on missing data), idempotent release, hard-timeout self-cleanup, transition-during-cooldown still holds.
The 90 s watchdog still owns the unhappy-path revert (queue item back to `pending` for retry) — this fix runs alongside it, not instead of it. All 179 existing scheduler tests still pass unchanged.
- Project picker UX in archives (#1151) — The "Add to Project" submenu in the archive context menu was unusable past the visible fold once a project library exceeded the 300px scroll cap: any wheel scroll, arrow-key navigation, or scrollbar click slammed the entire context menu shut. Root cause was a capture-phase
`document.scroll` listener in `ContextMenu` that fired on internal submenu scrolls too — the listener now checks `menuRef.current.contains(e.target)` and ignores scrolls inside its own subtree. Project lists are now sorted alphabetically by name (`localeCompare`) at every assignment site (Archives context-menu submenu ×2, BatchProjectModal, EditArchiveModal, "review new uploads" panel, FileManagerPage project-picker) instead of newest-first from the API. The Archives "Add to Project" submenu and BatchProjectModal both gain a search input (rendered only when there are >5 projects, so small libraries stay clean) that filters the list by name as you type — Enter picks the first match. New `archives.menu.searchProjects` i18n key in all 8 locales (en/de fully translated, the six others seeded with English copies pending native translation, matching the project's existing flow).
- OIDC
`auto_link_existing_accounts` now works with custom email claims (Azure Entra ID) (#1088) — `auto_link_existing_accounts` was previously blocked unless both `email_claim='email'` and `require_email_verified=True`. This also rejected Azure Entra ID configurations using `preferred_username` or `upn` as the email claim — the recommended setup for that provider, which does not send `email_verified`. The guard now only blocks the genuinely unsafe combination (case B): `email_claim='email'` + `require_email_verified=False`. Custom-claim configurations (case C) never consult `email_verified` at all, so there is no verification-bypass risk on that path. All five enforcement layers (DB CHECK constraint, schema validators for create and update, route combined-state guard, DB migration for existing installations) have been updated consistently. Security note: custom claims are safe for auto-link only when the claim value is tenant-administered. If your IdP allows end users to self-assert the claim's value, do not enable auto-link. An in-app warning is shown in the OIDC provider form when this combination is configured.
- OIDC settings form: "Require email verified" toggle no longer jumps layout when auto-link is enabled — When
`Auto-link existing accounts` was toggled on, the shorter description text caused the `Require email verified` toggle to reflow next to `Auto-link` in the flex container instead of staying on its own row. Both toggles now have `w-full` and always occupy a full row regardless of description length.
- P1P print dispatch failed with
`0500_4003 "can't parse print file"` when the printer was slow to acknowledge (#1150, reported by d3ni3) — On a P1P at firmware 01.10.00.00 the printer can take up to ~135 seconds to actually start parsing a freshly uploaded `.3mf` after the MQTT `project_file` command lands; FTP STOR returns 226 cleanly and the upload is intact, but `gcode_state` stays at `IDLE` and `subtask_id` doesn't advance until the printer's slow internal parse completes. Both dispatch watchdogs (`_verify_print_response` in `background_dispatch.py` and `_watchdog_print_start` in `print_scheduler.py`) interpreted the missed transition as a half-broken MQTT session — the original #887/#936 condition where telemetry kept arriving but our publishes were silently swallowed — and called `force_reconnect_stale_session` to wipe paho's QoS-1 queue and reconnect with a fresh client_id. That reconnect mid-parse is precisely what makes the P1P emit `0500_4003`: the new MQTT session interrupts the in-progress parse on the printer side and the printer reports the file as unparseable. The repro: send a print job, wait 15 seconds while the printer is still parsing, watch the watchdog force-reconnect, watch the printer fail with the parse error, retry — same loop. Sending the same file from BambuStudio worked because BambuStudio doesn't reconnect MQTT mid-parse. The fix uses the printer's `gcode_file` field as a definitive discriminator between #1150 (slow parse) and #887/#936 (half-broken session), since both look identical from telemetry alone: in both cases push_status keeps flowing, `state` stays unchanged, and `subtask_id` stays at the pre-dispatch value. The distinguishing signal: when the `project_file` command actually lands on the printer side, the printer's `gcode_file` field updates in push_status to reflect the newly-uploaded file; if the publish was silently swallowed (#887/#936), the field stays at whatever the printer was previously showing.
Both watchdogs now capture `pre_gcode_file` alongside `pre_state` and `pre_subtask_id` from `printer_manager.get_status()` before sending the publish, then compare against the printer's current `gcode_file` after the watchdog times out. If the value changed → the command landed → log a #1150 warning explaining the skip and leave the MQTT session alone. If the value is unchanged → the publish was silently swallowed → fall through to the original `force_reconnect_stale_session` call so the #887/#936/#1136 zombie-session recovery is preserved exactly. The user-facing dispatch still fails on timeout (correctly — the print didn't start within the timeout window so the job is marked failed), the queue item still reverts to `pending` so the scheduler can retry, and the next dispatch attempt proceeds against the same intact MQTT session that was about to start the print. Pairs with the 15s → 90s timeout bump that already shipped in commit 9d04186 (the original 15s timeout was a separate v0.2.3.2 limit). Caveat acknowledged in code comments: in a retry-same-file slow-parse scenario the printer's `gcode_file` looks identical before and after the publish lands, so the watchdog falls through to the original reconnect path and the user still sees `0500_4003` on that specific retry — accepted to avoid breaking the half-broken-session recovery, which is the more impactful regression of the two. 4 new unit tests covering both watchdogs: skip reconnect when `gcode_file` changed (the #1150 fix), reconnect when `gcode_file` is unchanged (the #936 protection preserved), skip reconnect when `pre_gcode_file=None` and current is non-None (printer just connected), reconnect when the `pre_gcode_file` arg is omitted (backward-compat for callers we haven't updated). All 439 existing dispatch / scheduler / mqtt tests still pass unchanged.
- 3MF profile-driven slicing silently produced wrong-printer output (every 3MF slice fell back to the source's embedded printer regardless of the picked profile) — Two stacked bugs in the slice pipeline.
(1) Pre-forward strip removed too much.
`_strip_3mf_embedded_settings` was scrubbing all four embedded `Metadata/*.config` files before forwarding the 3MF to the sidecar, on the theory that `--load-settings` would then take precedence cleanly. That theory was wrong: `Metadata/model_settings.config` carries the plate definitions the CLI needs to map `--slice N` to a real plate, and `slice_info.config` / `project_settings.config` supply the baseline config the CLI's `StaticPrintConfigs` pass needs to even start. Stripping any of them caused the CLI to silently exit immediately after "Initializing StaticPrintConfigs" — exit code 0, no `result.json`, no stderr — which the sidecar treated as failure and Bambuddy then masked by falling back to `slice_without_profiles` using the un-stripped bytes (and the source's embedded printer). Net effect: every 3MF slice with profiles silently produced wrong-printer output. The strip is now gone from the slicer dispatch path entirely; the original bytes go to the sidecar so `--load-settings` overrides only the specific fields the user changed (printer/process/filament) while the embedded plate / model definitions remain intact. (2) Standard-tier preset stubs were missing the `type` field. `_resolve_standard` in `preset_resolver.py` emitted `{"name": ..., "inherits": ..., "from": "system"}` for the bundled tier, but the CLI's preset parser also requires a `type` discriminator (machine/process/filament) on every loaded settings file — without it the CLI silently rejects with `rc=-5` ("input preset file is invalid"), which the same masking fallback then turned into another wrong-printer slice. A new `_SLOT_TO_PROFILE_TYPE` constant maps each slot to its required type, and the stub now emits the right value per slot. Tests: integration test renamed from "strip removes all four configs" to `test_3mf_input_forwarded_unmodified_to_sidecar` — asserts every `Metadata/*.config` plus `3D/3dmodel.model` is preserved verbatim in the multipart body the sidecar receives.
Preset-resolver test updated for the new stub shape; new `test_standard_emits_correct_type_per_slot` pins each (slot → type) pairing. Pairs with the orca-slicer-api fork's `bambuddy/profile-resolver` branch, which now emits `details` on its `AppError` responses and captures CLI stdout/stderr in the failure path so future regressions of this shape produce a real error message instead of a silent fallback.
- Sliced-archive card listed every project-wide AMS slot instead of just the filaments the print actually used —
`slice_and_persist_as_archive` previously copied `filament_type`/`filament_color` from the unsliced source archive verbatim, which inherited every project-wide AMS slot configured in the source's `project_settings.config` (16+ swatches on the card for what was actually a 2-color print). The new archive row now reads those fields from the sliced output's `Metadata/slice_info.config` via `ThreeMFParser` (which already gates on `used_g > 0` per slot), falling back to the source archive's values only when parsing the new 3MF failed. A test in `test_archive_copy.py::test_filament_metadata_only_includes_filaments_with_used_g` builds a 4-slot fixture where slots 2 and 4 have `used_g=0` and asserts both type and color outputs exclude them.
- Slice modal had no warning when the picked printer profile didn't match the source 3MF's bound printer — silent wrong-printer output — Both BambuStudio and OrcaSlicer CLIs reject
`--load-settings` for a printer different from the one the source 3MF was originally bound to (`rc=-16`, "current 3mf file not support the new printer") because the cross-printer "convert project" flow is desktop-Studio only; the slice would then fall back to embedded settings and produce a file sliced for the wrong printer that errored at print dispatch time with "File was sliced for A1, but printing on H2D". The plates response now exposes `source_printer_model` (read from `project_settings.config`'s `printer_model` field, with fallback to stripping the nozzle suffix off `printer_settings_id`); the SliceModal compares it against the picked printer profile name (substring match against the model prefix, e.g. `"Bambu Lab H2D 0.4 nozzle"` matches `"H2D"`) and surfaces an inline amber warning explaining the limitation, plus disables the Slice button while the warning is up so users can't dispatch a guaranteed-wrong slice. Cloud presets with arbitrary user-chosen names (e.g. `"My Custom X1C"`) and legacy 3MFs without `project_settings.printer_model` fall through to no-warning, which is a reasonable default — the user picked it knowingly. New `extract_source_printer_model_from_3mf` helper in `threemf_tools.py` with 6 unit tests covering missing/direct/nozzle-stripped/corrupt-JSON paths; 3 frontend tests in SliceModal pinning the warning + disabled button on mismatch, no warning on match, and no warning when the source model is unknown. New i18n key `slice.printerMismatch` localised across all 8 UI languages.
- Sliced output of a "single-color" plate had filaments the user never picked — When a multi-color project (e.g. a MakerWorld Stormtrooper helmet with white shell + grey support filament configured project-wide) was sliced for plate 1 (which only paints with white), the resulting
`.gcode.3mf`'s `slice_info.config` had two filaments — white (the user's pick) and grey (a colour the user never chose). Root cause: the SliceModal was sending only the slots the picked plate consumed, but the slicer CLI requires a profile per project AMS slot — when fewer were supplied, the CLI silently substituted the missing slots from the source 3MF's embedded filament metadata, leaking the original creator's grey support filament into the user's output. Same silent-fallback class as the strip-removal bug. Fix: the backend's `/filament-requirements` endpoint now returns the FULL project AMS slot list with a `used_in_plate: bool` flag per entry (computed from the cached preview slice for unsliced files; always `true` for sliced files since `slice_info.config` already pre-filters by `used_g > 0`). The SliceModal renders one dropdown per project slot — slots flagged `used_in_plate=true` are editable as before, slots flagged `used_in_plate=false` are auto-picked from project metadata via the existing `(filament_type, filament_colour)` scoring path and disabled with a "— not used by this plate" suffix on the label, so the user only interacts with what matters for their plate while the wire format always carries a profile per project slot. 2 new frontend tests pin the disabled-row rendering and the full-list-on-submit invariant. New i18n key `slice.notUsedByPlate` localised across all 8 UI languages (English + German fully translated, the six others seeded with English copies pending native translation, matching the project's existing flow for newly-added user-facing features).
- "Analyzing plate filaments…" spinner gave no signal that anything was happening on the first Slice-modal open for an unsliced project file — On a multi-color 3MF without slice_info data, the backend runs a preview slice via the sidecar to discover which AMS slots the picked plate actually consumes. That's the only reliable source of truth: we tried two heuristics — a painted-face quadtree scan (silently missed extruders when
the `object_id` mapping between `model_settings.config` and `3D/3dmodel.model` diverged, surfacing as a single dropdown for a 4-color print) and the project-wide AMS list (over-rendered every plate to the project's full slot count) — and both produced wrong counts on real-world multi-color projects. Reverted to preview-slice-as-source-of-truth. The result is cached per `(kind, source_id, plate_id, content_hash)` so re-opens of the same plate are instant, but the first open on a complex model is a real slice (multi-second to multi-minute). The inline spinner now shows elapsed seconds and, after 5s, a hint explaining that this is a one-time preview slice and re-opens will be instant — addresses the original "is anything happening?" complaint without sacrificing correctness. The project-wide `extract_project_filaments_from_3mf` remains as a final fallback when the sidecar isn't configured. New i18n key `slice.analyzingPlateFilamentsHint` localised across all 8 UI languages (English + German fully translated, the six others seeded with English copies pending native translation, matching the project's existing flow for newly-added user-facing features).
- Settings warning when OrcaSlicer is selected as the preferred slicer — OrcaSlicer 2.3.2 and 2.4.0-dev (latest nightly as of 2026-04-28) have two upstream CLI bugs that together block slicing on most Bambu-authored multi-color / H2D 3MFs: (1) a SIGSEGV in the multi-extruder filament-resolution path on painted 3MFs (OrcaSlicer/OrcaSlicer#12426), and (2) the CLI strict-validates parameter values that BambuStudio writes by default —
`solid_infill_filament: 0`, `tree_support_wall_count: -1`, `prime_tower_brim_width: -1` — and exits 238 with `Param values in 3mf/config error: ... not in range`, even though OrcaSlicer's own GUI tolerates these (OrcaSlicer/OrcaSlicer#13386, filed alongside this change with a minimal repro 3MF). Both bugs were verified reproducible on the latest nightly build before filing. The Settings → Workflow → Slicer card now renders an inline amber alert under the preferred-slicer dropdown when `orcaslicer` is the current selection, linking out to both upstream issues and recommending Bambu Studio until upstream fixes land. The OrcaSlicer option is intentionally left pickable rather than disabled — users who only slice STLs or single-color 3MFs aren't affected by either bug, and forcibly disabling would also affect them. Localised across all 8 UI languages (English + German fully translated).
- Live progress for the SliceModal's filament-analysis preview slice + URL-decoded filenames in the toast — Two follow-ups to the live slicer-progress feature: (1) the modal's "Analyzing plate filaments…" preview slice (the real slice that fires before profile picking, to discover which AMS slots an unsliced plate consumes) now shows the same stage + percent live updates as the user-initiated slice. The frontend generates a per-(source, plate) request_id, forwards it via a new
`request_id` query param on `/library/files/.../filament-requirements` and `/archives/.../filament-requirements`, the backend plumbs it through `slice_without_profiles` to the sidecar, and a new `GET /api/v1/slicer/preview-progress/{request_id}` proxy endpoint forwards browser polls to the sidecar's `/slice/progress/:requestId` (CORS-safe — the browser can't reach the sidecar directly). The inline spinner and a new persistent toast both render `Analyzing {{name}} — {{stage}} ({{percent}}%) — {{elapsed}}` while the preview runs; the toast dismisses when the filaments arrive. (2) MakerWorld imports were persisting URL-encoded filenames (`stormtrooper-helmet%20h2d.3mf`) verbatim because MakerWorld's API returns the same percent-encoding it uses on its CDN URLs. The import path now `urllib.parse.unquote`s both the manifest-supplied name and the URL path-tail fallback before passing to `save_3mf_bytes_to_library`, plus the frontend defensively `decodeURIComponent`s in the slice toast and analysis-spinner messages so already-imported rows display cleanly without a backfill migration. Falls back to the raw string on malformed encodings (`%XY` where `XY` isn't hex). New i18n keys `slice.previewToast` + `slice.previewWithProgress` localised across all 8 UI languages (English + German fully translated).
- Live slicer progress in the persistent slice toast — The persistent slice toast already showed elapsed time + a spinner so the user could see the slice was still running, but for long slices on complex multi-color models that "is anything happening?" gap could last minutes. Bambuddy now wires up the slicer CLI's structured progress channel end-to-end, so the toast renders concrete stage labels + live percent —
`Stormtrooper.3mf — Generating G-code (75%) — 47s` — through the entire slice. Sidecar (`bambuddy/profile-resolver` branch of orca-slicer-api): switched the sync `/slice` route from `execFile` to `spawn` so the process can run alongside a FIFO reader; on each request the route generates (or accepts a caller-supplied) `requestId`, `mkfifo`s `${workdir}/progress.fifo`, passes `--pipe ${fifo}` to the OrcaSlicer / BambuStudio CLI, and reads the structured JSON-line progress events the slicer emits (`{"message":"Generating G-code","plate_count":1,"plate_index":1,"plate_percent":80,"total_percent":75}`) into a per-process `ProgressStore` keyed by `requestId`. A new `GET /slice/progress/:requestId` returns the latest snapshot; entries linger 30s after slice completion so the caller's last poll still reads the terminal "All done, Success" frame instead of a 404. Both slicer forks share the same code lineage from PrusaSlicer's `BackgroundSlicingProcess`, so OrcaSlicer 2.3.2 and BambuStudio 02.06.00.51 emit identical JSON keys (verified by tracing the binary). Bambuddy backend: `slicer_api.slice_with_profiles` accepts `request_id` + an `on_progress` callback and spawns a 1Hz parallel poller that hits the sidecar's progress endpoint while the blocking POST is in flight; `SliceDispatchService` gained a `set_progress(job_id, snapshot)` method and a `progress` field on `SliceJob`; the slice routes now generate a uuid `request_id` and wire a callback that forwards each snapshot onto the dispatcher. `GET /slice-jobs/:id` includes `progress` on every poll. Frontend: `SliceJobTrackerContext` reads the new `progress` field and re-renders the persistent toast with `{name} — {stage} ({percent}%) — {elapsed}` whenever a useful frame is present, falling back to the existing elapsed-time-only message when the sidecar hasn't emitted anything yet (early "Initializing" phase) or doesn't support progress (older sidecars without the FIFO wiring).
12 sidecar unit tests for the JSON-line parser + ProgressStore (cancellation/grace-window, malformed lines, missing fields), 3 dispatcher tests for `set_progress` (attach/replace/clear, unknown-job-id silent ignore), 3 `slicer_api` tests for the form-field forwarding + `on_progress` callback wire-up + 404 short-circuit, 2 frontend SliceJobTracker tests pinning the new toast format and the no-progress fallback. New i18n key `slice.runningWithProgress` localised across all 8 UI languages (English + German fully translated, the six others seeded with English copies pending native translation, matching the project's existing flow for newly-added user-facing features). Graceful when the sidecar lacks `--pipe` support (tested live: OrcaSlicer 2.3.2 + BambuStudio 02.06.00.51 both work; older sidecars without the new endpoint return 404 and the toast cleanly degrades to elapsed-time-only).
- No visual indicator while a slice job was running — users couldn't tell if a long slice was still progressing or had hung — Previously SliceJobTrackerProvider emitted one transient toast on enqueue ("Slicing X in the background…") and one on completion ("Sliced X"), with nothing in between. For large multi-color models that take 30s to several minutes to slice, the start toast auto-dismissed after 3s and left a UX dead zone where users would ask "is it still slicing?". The tracker now opens a persistent
`slice-job-{id}` toast with a spinner that updates every second showing elapsed time + phase ("Queued: X — 4s" → "Slicing X — 47s"), then is replaced by the existing transient success/error toast on terminal state. Polling cadence (1.5s) is unchanged — a separate 1Hz tick re-renders just the elapsed-time counter so the toast stays smooth even if the backend is slow to respond. The time format compresses gracefully past 60s ("1m 5s") and 60m ("1h 12m"). 4 new unit tests in `SliceJobTrackerContext.test.tsx` covering: persistent toast renders at t=0 (no wait for first tick), elapsed time updates each second while running, success completion replaces persistent with transient "Sliced X", failure replaces with a transient error toast carrying the sidecar's `error_detail`. New i18n keys `slice.queuedToast` / `slice.runningToast` localised across all 8 UI languages (English + German fully translated, the six others seeded with English copies pending native translation, matching the project's existing flow for newly-added user-facing features).
- MakerWorld URL-paste resolver listed plate instances without showing which printer each was sliced for — MakerWorld's
`/instances/hits` endpoint omits the per-instance compatibility info that lives on `design.instances[].extention.modelInfo` (`compatibility` = primary printer the instance was sliced for, `otherCompatibility` = additional printers the uploader marked it compatible with), so every instance row in the resolved-design preview looked identical and users blindly picked the first one regardless of whether it matched their printer — leading to "I downloaded the H2D version and got A1 g-code" complaints. The resolve route now joins both endpoint payloads by instance ID and forwards both fields onto each hit; the MakerWorld page renders "Sliced for {primaryPrinter}" + (when present) "Also marked compatible: ..." per instance row. Backend tests in `test_makerworld_routes.py::TestResolve` cover the merge happy path (compatibility lists land on the right hits) and the "missing modelInfo" fallback (older designs / hits without a matching `design.instances` entry don't crash the response, just lose the optional fields). New i18n keys `makerworld.slicedFor` / `makerworld.alsoCompatible` localised across all 8 UI languages. - Moving a file to an external folder updated the DB row but never wrote the bytes to the mount (#1112 follow-up — confirmed by Carter3DP after testing 0.2.4b1) — Carter's report read "the file appears in Bambuddy but not physically on the external folder", which traced to
`move_files` only updating `file.folder_id` in the DB while leaving the bytes in the internal `library_files_dir`. Direct upload to a writable external folder was already fixed in 0.2.4b1; the move path was not. Cross-boundary moves now physically relocate the bytes through a new `_move_file_bytes` helper. Same-boundary moves (managed → managed) keep the existing DB-only fast path because the file's on-disk location doesn't depend on which managed folder owns it. The helper handles four flows: managed → external (copy bytes to `<external_path>/<filename>`, flip `is_external=True`, store the absolute path, unlink the managed source), external → managed (copy bytes into internal storage with a fresh UUID name, flip `is_external=False`, store the relative path, unlink the external source, recompute `file_hash` since scan-tracked rows historically carry `file_hash=None`), external → external (same as managed → external), and managed → managed (DB-only). Copy-then-unlink ordering means a partial copy followed by a failed unlink leaves both copies on disk rather than losing the source if the target write fails halfway through on a flaky NAS mount. A failed `shutil.copy2` cleans up the partial dest before raising. Defence-in-depth checks block: source on a read-only external mount (move = delete-on-source, which a RO mount can't fulfil — it would copy-then-fail-to-unlink and silently duplicate the file), filename collisions on the target mount (won't silently overwrite a file the user already has on the NAS), traversal-style filenames after `Path.resolve()`, missing source on disk, and `os.access(W_OK)` on the target mount. Each skip carries a structured `{file_id, code, reason}` entry in a new `skipped_reasons` field on the response so the UI can surface "5 of 10 files skipped: 3 had filename collisions on the NAS, 2 are no longer on disk" instead of a blank "skipped: 5". The original `{moved, skipped}` numeric counters are preserved so existing frontend code that only reads those keeps working unchanged.
Six new integration tests in `test_external_folders_api.py::TestCrossBoundaryMove` covering: managed → external relocates bytes (the actual #1112 fix — bytes land on the mount, internal source removed, DB row matches reality), external → managed relocates bytes (symmetric path including hash recompute), name collision on the target external mount skips with `code: "name_collision"` and leaves the pre-existing target file intact, source on a read-only external mount skips with `code: "source_readonly"`, managed → managed stays DB-only (`file_path` doesn't change, no `shutil.copy`), and `skipped_reasons` is always present (empty list when nothing skipped) so frontend code can treat it as the source of truth without optional-chaining. - `bambuddy.log` filling with `Exception terminating connection ... CancelledError` + `database is locked` cascades on long uploads (#1112 follow-up, surfaced by Carter3DP's support package) — Two-part fix to a single root cause: Starlette's `BaseHTTPMiddleware` (which FastAPI's `app.middleware("http")` decorator uses under the hood) cancels the inner task scope when a client disconnects mid-request — common on long multipart uploads where the client times out before the server's response. Pre-fix `get_db` only caught `Exception`, but `CancelledError` is a `BaseException`, so cancellation skipped the rollback path entirely; the SQLite write lock stayed held until the connection was eventually GC'd, producing the `(sqlite3.OperationalError) database is locked` cascade against `runtime_seconds` updates and other tight-loop writers in Carter3DP's log. Postgres users would see pool exhaustion / "QueuePool limit overflow" instead of file-level lock contention, but the leak shape is identical. (1) `get_db` now catches `BaseException` so `CancelledError` triggers rollback, and wraps both `rollback()` and `close()` in `asyncio.shield` so the cleanup completes even when the await itself is being cancelled by the same cancel scope. The SQLite write lock is released promptly; the connection returns to the pool instead of leaking until GC.
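The cancel-safe dependency pattern in (1) can be sketched as below — a minimal illustration under assumptions: `session_factory` and the session's `commit`/`rollback`/`close` methods stand in for the real SQLAlchemy async session, and this is not Bambuddy's actual `get_db` body:

```python
import asyncio

async def get_db(session_factory):
    """Dependency sketch of the cancel-safe cleanup pattern (illustrative)."""
    session = session_factory()
    try:
        yield session
        await session.commit()
    except BaseException:
        # CancelledError subclasses BaseException, not Exception, so a plain
        # `except Exception` skips this rollback on client disconnect.
        # shield() lets the rollback run to completion even while the
        # surrounding task is being torn down by the same cancel scope.
        await asyncio.shield(session.rollback())
        raise
    finally:
        await asyncio.shield(session.close())
```

Note that `asyncio.shield` does not stop the outer `await` from being cancelled; it only protects the inner coroutine so the rollback/close actually finish instead of being abandoned mid-flight.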
(2) A `CancelledPoolNoiseFilter` (new `logging_filters.py` filter, attached to `sqlalchemy.pool`) drops the residual log noise that pre-existing pools still emit during their own cleanup — both the `Exception terminating connection ... CancelledError` records (matched on prefix + cancellation-driven `exc_info`, including chained `__cause__`/`__context__`) and the symptomatic `garbage collector is trying to clean up non-checked-in connection` records. Real pool problems — broken connections, network hiccups, exhaustion — keep flowing because they carry a different exception chain or a different message prefix; verified by `test_keeps_terminate_with_real_oserror` and `test_keeps_unrelated_pool_message`. 13 new regression tests across `test_get_db_cancel_safety.py` (commit on clean exit, rollback on regular `Exception`, rollback on `CancelledError` — the actual #1112 fix, close runs even if rollback raises, close failure on clean exit doesn't propagate, both rollback + close go through `asyncio.shield`) and `test_cancelled_pool_filter.py` (drops cancellation-driven terminate, drops GC-cleanup, keeps real `OSError` terminate, keeps terminate without `exc_info`, keeps unrelated pool messages, drops chained-cause `CancelledError`, defensive guard against self-referential cause chains). Applies to SQLite and PostgreSQL — `get_db` is dialect-agnostic and the filtered messages come from base `sqlalchemy.pool`, not from any specific dialect. - Windows install:
`bambuddy.log` filling with `WinError 10054 — _ProactorBasePipeTransport._call_connection_lost` tracebacks (#1113, reported by cadtoolbox) — Cosmetic but noisy. When a printer / MQTT broker / camera RSTs a TCP socket instead of FINing it (offline X1Es in cadtoolbox's setup, network gear that drops idle TCP, the printer firmware's own watchdog), Windows asyncio's Proactor cleanup path tries `socket.shutdown(SHUT_RDWR)` on the already-dead socket and hits `WinError 10054`. Application-layer reconnect logic (paho-mqtt, httpx) handles the actual disconnect fine — paho retries, MQTT comes back, telemetry resumes — so the traceback is pure asyncio bookkeeping noise, but it fired multiple times per minute on cadtoolbox's 9-printer setup with 5 offline X1Es and was the first thing in the sanitized log. Adds a custom `loop.set_exception_handler` (new `backend/app/core/asyncio_handlers.py`) installed on Windows only that pattern-matches the specific `_call_connection_lost` cleanup-RST signature (three signals together: `sys.platform == "win32"`, the exception is `ConnectionResetError`, and the asyncio message string contains `_call_connection_lost`) and downgrades it to DEBUG. Real `ConnectionResetError`s raised inside application coroutines (different message string) and other Proactor cleanup errors (`BrokenPipeError`, `ConnectionAbortedError` — same callback site, distinct signal worth keeping visible) all pass through to `loop.default_exception_handler` unchanged. Linux / macOS use the Selector event loop and never hit this codepath, so `install_proactor_reset_filter()` is an explicit no-op there with a `False` return — verified by `test_install_is_no_op_on_non_windows`.
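The handler-installation shape described above can be sketched like this — an illustrative reconstruction, not the contents of `backend/app/core/asyncio_handlers.py` (the helper names and the logger name are assumptions; the platform gate and the two in-handler signals follow the changelog):

```python
import asyncio
import logging
import sys

log = logging.getLogger("asyncio.cleanup")

def is_cleanup_reset(context: dict) -> bool:
    """Match only the Proactor cleanup-RST signature: ConnectionResetError
    AND the _call_connection_lost marker in asyncio's message string."""
    exc = context.get("exception")
    return (isinstance(exc, ConnectionResetError)
            and "_call_connection_lost" in context.get("message", ""))

def install_proactor_reset_filter(loop: asyncio.AbstractEventLoop) -> bool:
    if sys.platform != "win32":
        return False  # Selector loop never hits this codepath: explicit no-op

    def handler(loop, context):
        if is_cleanup_reset(context):
            log.debug("suppressed cleanup RST: %s", context.get("message"))
            return
        loop.default_exception_handler(context)  # everything else passes through

    loop.set_exception_handler(handler)
    return True
```

Matching on the exception type *and* the callback-site message, rather than on `ConnectionResetError` alone, is what keeps real application-level resets visible.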
9 unit tests in `test_asyncio_handlers.py` cover: discriminator matches the exact reported signature, rejects unrelated `ConnectionResetError`s, rejects `BrokenPipeError` even on the same callback site, rejects when no exception object is present, install is platform-gated, install wires the handler onto the loop, suppression doesn't reach the default handler, and unrelated exceptions still hit the default handler. Wired from `lifespan` startup before any task can spawn that might trip it. - Auto-Print G-code Injection: start snippet landed before printer startup, and
`{placeholder}` substitution was silently broken (#422 follow-up) — Two compounding bugs surfaced by pleite (Swapmod) and DevScarabyte (multi-height test prints) on the initial #422 ship: (1) Start snippets were prepended to the entire `plate_X.gcode` content, which placed them before the printer's bed-heat / homing / nozzle-prime sequence — so a Swapmod start snippet that assumed nozzle-at-temp ran on a cold printer. The injection now anchors at `; MACHINE_START_GCODE_END` (the marker sitting at the bottom of every Bambu/Orca slicer's `MACHINE_START_GCODE` block, after the `M109` wait-for-temp), matching where a slicer-side custom-start-gcode would land. Files without the marker (older slicer versions) keep the prepend behaviour as a fallback with a warning log. (2) Slicer-style placeholders like `G1 Z{max_layer_z} F600` were written verbatim to the output gcode — the printer firmware then parsed `Z{max_layer_z}` as `Z1` and crashed the head into the print on a 60mm-tall model (a real safety issue: prints damaged, top glass + AMS pushed up off the printer when the model was taller than the hard-coded park height). Added a header parser that reads the 3MF's `; HEADER_BLOCK_START..END` block (lowercased keys, `[units]` suffix stripped, spaces → underscores) and a Prusa-style `{name}` substitution pass that runs over both start and end snippets before injection. Supported placeholders: `{max_layer_z}` / `{max_print_height}` (top-layer Z), `{total_layer_number}` / `{total_layers}`, `{total_filament_weight}`, `{total_filament_length}`, plus any other normalised header key from the source file. Unknown placeholders are left in the snippet verbatim with a warning log — a typo never silently expands to an empty string and the firmware never receives a malformed `Z` parameter.
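The header-normalisation and substitution rules above can be sketched as follows — an illustrative reconstruction, not the project's `_parse_3mf_gcode_header`/`_substitute_placeholders` (the function names, alias table contents, and regexes here are assumptions; the normalisation steps follow the changelog):

```python
import re

# Alias map: slicer-dialect placeholder names -> normalised header keys
# (illustrative subset; the real alias table lives in the injector).
ALIASES = {"max_print_height": "max_layer_z", "total_layers": "total_layer_number"}

def parse_gcode_header(gcode: str) -> dict[str, str]:
    """Read '; key [unit] = value' comment lines into a normalised dict:
    lowercase keys, '[units]' suffix stripped, spaces -> underscores."""
    header = {}
    for line in gcode.splitlines():
        m = re.match(r"^;\s*([^=]+?)\s*=\s*(.+?)\s*$", line)
        if m:
            key = re.sub(r"\[[^\]]*\]", "", m.group(1)).strip().lower().replace(" ", "_")
            header[key] = m.group(2)
    return header

def substitute_placeholders(snippet: str, header: dict[str, str]) -> str:
    """Prusa-style {name} pass; unknown names stay verbatim so a typo never
    silently expands to an empty string."""
    def repl(m: re.Match) -> str:
        key = ALIASES.get(m.group(1), m.group(1))
        return header.get(key, m.group(0))
    return re.sub(r"\{(\w+)\}", repl, snippet)
```

Leaving unknown placeholders untouched (rather than expanding to `""`) is the safety property: a malformed snippet stays visibly malformed instead of becoming a plausible-looking but wrong `Z` move.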
16 new regression tests in `test_gcode_injection.py` covering: start snippet anchored to the marker (printer startup runs first, snippet sits between `M109 S220` and the marker, file head untouched), missing-marker fallback path, end snippet still appended at EOF, `{max_layer_z}` resolved through the alias map, direct-key substitution from the normalised header, unknown-placeholder pass-through, and direct unit tests for each new helper (`_parse_3mf_gcode_header`, `_substitute_placeholders`, `_inject_start_at_marker`). Wiki page documents the supported placeholder list with a safety warning specifically calling out `{max_layer_z}` for park moves. - Camera page ignored
`?fps=N` URL parameter (#1131 diagnostic) — `CameraPage.tsx` hard-coded `fps=15` in the stream URL and never read the URL query string, so `/camera/1?fps=5` (and similar diagnostic suggestions for the freeze report) were silent no-ops. The sibling `StreamOverlayPage` already honoured `?fps=` correctly; the gap was `CameraPage`. Now reads `searchParams.get('fps')` via `useSearchParams`, parses it, falls back to 15 on missing/non-numeric, clamps to the backend's 1–30 range, and threads the resulting value into the stream URL. Backend `generate_rtsp_mjpeg_stream` already accepted the parameter and re-clamps per-model (chamber-image A1/P1 capped at 5, RTSP capped at 30). 5 new regression tests in `CameraPage.test.tsx::fps URL parameter (#1131)` cover default-15, honoured value, clamp-above-30, clamp-below-1, and non-numeric fallback — the same matrix `StreamOverlayPage.test.tsx` already pins. Independent of the underlying freeze investigation in #1131; surfaced while triaging that report. - Reprint-from-archive failed with
`0500_4003` SD R/W errors after a stuck dispatch, fixable only by restarting the container (#1136) — Reported by smandon: reprinting from archives sometimes fails immediately with MicroSD R/W exception errors, with the printer's MQTT push referencing a 3MF file from a different, unrelated archive (`WARIO_Wall_decor_-_NO_AMS.3mf` while the user was actually trying to print `Cable_Organiser_Cable_Clip.3mf`). Once it starts happening, every subsequent reprint hits the same error until the container is restarted. Root cause traced from his support-package log to paho-mqtt's client-side QoS 1 queue: when the printer's command channel goes half-broken (telemetry still flowing, publishes silently dropped — the same #887/#936 pattern), Bambuddy's 15s dispatch deadline expires (`background_dispatch.py:993`) and calls `force_reconnect_stale_session()`. That function was force-closing the underlying socket so paho's auto-reconnect would kick in — but the same `mqtt.Client` instance, same `client_id`, and same in-process QoS 1 queue stayed alive across the reconnect. Any unacked publish from the broken session — typically the just-sent `project_file` for the new archive — got replayed verbatim on the new connection. And because the in-process queue accumulates across multiple stuck dispatches within one Python process, by the second or third stuck reprint there were several stale `project_file`/`resume`/`stop`/`clean_print_error` commands queued up and replaying together. The printer received the flood, tried to load whichever stale path the firmware latched onto last, found a file that no longer existed on its SD card → `0500_4003`. Container restart was the only thing that fixed it because it was the only thing that wiped paho's in-process queue. Replaced the socket-close with a context-aware reconnect: `force_reconnect_stale_session()` and `check_staleness()` now go through a routing helper `_reset_client_for_reconnect()` that picks the right teardown strategy based on caller context.
Async-context callers (the dispatch deadline path — `background_dispatch.py:993` — which is the actual #1136 trigger, plus FastAPI route handlers via `check_staleness`) get the hard-reset path: `client.disconnect()` (broker sees DISCONNECT and drops the session immediately, since `clean_session=True`), `client.loop_stop()` (kills the paho network thread, taking its QoS 1 queue with it), nulls out `self._client`, and calls `self.connect()` to construct a fresh `mqtt.Client` with an incremented `client_id`. The new connection starts genuinely empty; no replay possible. Paho-network-thread callers (the developer-mode probe and `ams_filament_setting` zombie detection inside `_update_state`, lines ~2604 and ~2623) keep the socket-close fallback — calling `loop_stop()` from inside the network thread would self-join and deadlock, so the safe pattern there remains "close the socket and let paho's own loop detect it and auto-reconnect on the same client". Theoretical queue replay is still possible on those paths, but #1136 specifically traced through the dispatch path, and the legacy socket-close has been battle-tested for the zombie paths since #887. The routing decision is made via `asyncio.get_running_loop()` — paho's callback thread has no loop, every legitimate hard-reset caller does. 7 regression tests across two new test classes: `TestForceReconnectRouting` (3 tests pinning the sync-context → socket-close fallback, async-context → hard-reset path with mock-stubbed `connect()`, and the state-disconnected broadcast firing once on either path) and `TestHardResetClientDirect` (3 tests pinning the helper directly: old client receives `disconnect()` + `loop_stop()`, `_client` reference cleared, failing `disconnect()` doesn't propagate so the await chain in `background_dispatch.py` doesn't break). Existing `TestZombieSessionDetection::test_two_timeouts_force_reconnect` and `TestDeveloperModeProbeTimeout::test_second_timeout_forces_reconnect` updated to assert the socket-close path (matching their paho-thread context), preserving the legacy contract.
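The context-aware routing can be sketched as below — a minimal illustration under assumptions: the `holder` object (with `.client`, `.connect`, `.close_socket`) is a stand-in for the real printer-session object, and this is not Bambuddy's `_reset_client_for_reconnect`; only `disconnect()`/`loop_stop()` mirror paho-mqtt's actual `Client` API:

```python
import asyncio

def in_async_context() -> bool:
    """Routing discriminator: paho's network thread has no running event
    loop; the dispatch-deadline path and FastAPI handlers do."""
    try:
        asyncio.get_running_loop()
        return True
    except RuntimeError:
        return False

def reset_client_for_reconnect(holder) -> str:
    """Teardown-routing sketch (illustrative names)."""
    if in_async_context():
        # Hard reset: drop the broker session, kill the network thread (and
        # its QoS 1 queue with it), rebuild a fresh client -- no replay.
        try:
            holder.client.disconnect()
        except Exception:
            pass  # a failing disconnect must not break the await chain
        holder.client.loop_stop()
        holder.client = None
        holder.connect()
        return "hard-reset"
    # Inside paho's own network thread, loop_stop() would self-join and
    # deadlock: fall back to closing the socket and letting paho's loop
    # detect it and auto-reconnect on the same client.
    holder.close_socket()
    return "socket-close"
```

The discriminator works because `asyncio.get_running_loop()` raises `RuntimeError` in any thread without a running loop, which is exactly the paho callback-thread case.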
All 2179 backend unit tests pass. Thanks to smandon for the precise reproduction logs that made this diagnosable from a single support package. - `logs/bambuddy.log` was silently dropping records from named child loggers — When the trace-ID column was added to the log format (`%(trace_id)s`), the `TraceIDFilter` was attached to the root logger. Per Python's logging semantics, a filter on a `Logger` only fires for records that originate at that logger — records propagated up from child loggers (every `backend.app.*` module — most of the application) never trigger it. Result: child-logger records arrived at the file handler with no `trace_id` attribute, the formatter raised `KeyError: 'trace_id'`, and `Handler.handleError` printed to stderr and dropped the record. `bambuddy.log` ended up with INFO/DEBUG records appearing only "partially" — exactly the records emitted directly through `logging.info(...)` (root logger) or `uvicorn.access` (which had its own explicit filter attachment) made it; everything else was discarded. Moved `_trace_id_filter` from `root_logger.addFilter()` to `console_handler.addFilter()` + `file_handler.addFilter()` — handler-level filters fire for every record the handler receives, regardless of which logger emitted it. The filter's own docstring already said "Attach to the file handler (or any handler whose format string references `%(trace_id)s`)" — the implementation was just wrong. New regression test in `test_trace.py::TestFilterMustBeAttachedToHandlerNotLogger` pins the contract: a child logger emits a record, propagation reaches the handler-level filter, the formatter sees a populated `trace_id` field, and the line is written. Existing 23 trace tests keep passing unchanged.
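The logger-level vs handler-level distinction can be demonstrated in a few lines — a self-contained sketch, not Bambuddy's actual `TraceIDFilter` (the real filter reads the request's trace ID from context; `"-"` here is an assumed default):

```python
import io
import logging

class TraceIDFilter(logging.Filter):
    """Stamp trace_id on every record the *handler* sees (illustrative)."""
    def filter(self, record: logging.LogRecord) -> bool:
        if not hasattr(record, "trace_id"):
            record.trace_id = "-"
        return True  # never drops records, only annotates them

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(trace_id)s %(name)s %(message)s"))
# Handler-level attachment: fires for propagated child-logger records too.
# Attached to the logger instead, the filter would only fire for records
# that *originate* at that logger -- the bug described above.
handler.addFilter(TraceIDFilter())

parent = logging.getLogger("demo")
parent.setLevel(logging.INFO)
parent.addHandler(handler)

# A child-logger record propagates up and still gets trace_id stamped,
# so the %(trace_id)s formatter no longer raises KeyError.
logging.getLogger("demo.app.module").info("hello")
```

Running this writes `- demo.app.module hello` to the stream; with the filter on the logger instead, the same call would hit `Handler.handleError` and the line would be dropped.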
The restart-shutdown recursion in journalctl was also a side effect — every shutdown log line was raising the formatter `ValueError`, which got caught and logged… raising again, forever, until the lifespan exit unwound; the new placement breaks the cycle since records now format cleanly. - User-cancelled prints surfaced as "1 problem" on the printer card AND were archived as "Layer shift" failures — Cancelling a print left the printer card stuck on a permanent "1 problem" badge, and stamped the resulting archive entry with
`failure_reason="Layer shift"` — a fake firmware-fault label in the print history. Affects every Bambu printer that emits a cancel-sequence HMS — the user surfaced it on an H2D where the firmware emits both `0300_400C` ("The task was canceled.") and the not-in-the-public-wiki `0C00_001B` echo as part of the cancel sequence. Four compounding causes, all fixed together. (1) The direct stop endpoint never set the user-stopped flag. `POST /printers/{id}/print/stop` (`backend/app/api/routes/printers.py`) sent the MQTT stop command but didn't call `mark_printer_stopped_by_user()`, so when the printer reported "failed" via MQTT the `on_print_complete` override (`main.py:2558`) couldn't reclassify it as "cancelled". The same flag was being set from `POST /print-queue/{id}/stop`, which is why queue-driven cancels mostly worked but printer-card cancels didn't. The direct endpoint now mirrors the queue path. (2) The HMS → failure_reason heuristic was far too broad. Old code mapped any module 0x0C HMS to "Layer shift" (`main.py:3072`), but module 0x0C is "Motion Controller" — it covers cameras, visual markers, the BirdsEye assembly, and the cancel-sequence HMS the firmware emits during a user-cancel. Real layer-shift codes actually live in module 0x03 (`0300_4057`, `0300_4068`, `0300_800C`). The same module-only heuristic was also being used to auto-label "Filament runout" (any 0x07) and "Clogged nozzle" (any 0x05), so the same false-positive class existed on those branches. Replaced the broad module heuristic with a curated short-code → reason map (`_HMS_FAILURE_REASONS`, 23 specific HMS codes from the real wiki); anything not in that map leaves `failure_reason=None` rather than guessing. Also extracted the logic into a pure function `derive_failure_reason(status, hms_errors)` so it's unit-testable without the full archive pipeline. (3) Cancel-echo HMS codes were polluting `state.hms_errors`.
Even with (1) and (2) fixed, the printer card kept showing "1 problem" because the firmware kept reporting `0300_400C` ("The task was canceled.") in subsequent MQTT pushes — and `bambu_mqtt._update_state` was happily appending it to `state.hms_errors`, where the frontend's `filterKnownHMSErrors` accepted it as a valid known code (it IS in `ERROR_DESCRIPTIONS` — just describing a user action, not a fault). Added a parse-time filter (`_HMS_USER_ACTION_CODES = {"0300_400C", "0500_400E"}`) that drops these short codes before they ever enter the state, mirroring the suppression `main.py:_HMS_NOTIFICATION_SUPPRESS` was already doing for notifications. The card pip, the "X problem" badge, the modal, and any other consumer of `hms_errors` all get consistent behavior automatically. (4) Frontend counted `gcode_state="FAILED"` without HMS as a problem. Even with (1)–(3) fixed, the printer card still showed "1 problem" because the H2D's `gcode_state` sits at `FAILED` after a cancel until the next print starts, and `PrintersPage.tsx:940` (header badge) + `classifyPrinterStatus` (line 1028) + `BulkPrinterToolbar.tsx:102` all unconditionally bumped the `error` bucket on `case 'FAILED'`. Real failures attach an HMS error; user-cancels don't — so FAILED-without-HMS now buckets as `finished` (same operator meaning: print ended, plate may need clearing) and only escalates to `error` when there's an active known HMS. The same change was applied across all three call sites for consistency.
20 regression tests total across three files: `test_failure_reason_derivation.py` (11 tests pinning the cancel-sequence HMS pair to NOT yield "Layer shift", unknown module-0x0C → None, real layer-shift/runout/clog codes still classify, int-vs-hex code-format tolerance, `status="cancelled"` symmetric with `"aborted"`), `test_bambu_mqtt.py::TestHMSUserActionFiltering` (4 tests pinning `0300_400C`/`0500_400E` filtering on both `hms[]` and `print_error` parse paths, real layer-shift `0300_4057` still passes through, mid-cancel concurrent real-fault keeps the real one and drops only the echo), and `PrintersPageBucketing.test.ts` (5 tests pinning FAILED-without-HMS → finished, FAILED-with-known-HMS → error, FAILED-with-only-unknown-HMS → finished, FINISH baseline unchanged, disconnected stays offline). Existing stale state on running printers clears on the next MQTT push that includes an `hms` key (printer firmware re-sends the list, the parser filters it out, the badge clears). Users with a stuck badge can also click the HMS modal "Clear" button to clear immediately via MQTT command. - Settings → API Keys: deleted key stayed on screen until manual reload — the delete-key mutation marked the
`['api-keys']` query stale via `queryClient.invalidateQueries`, which in v5 should also refetch active queries — but in practice the deleted row remained visible until the user reloaded the page. Switched the mutation's `onSuccess` to `queryClient.setQueryData` so the deleted key is filtered out of the cache synchronously the moment the API confirms; no refetch round-trip required, no chance for an invalidation → refetch race to leave the UI stale. The create path keeps `invalidateQueries` since that one was working correctly. New `SettingsPage.test.tsx` test "removes a deleted key from the list without a page reload" pins the synchronous-removal contract. - SpoolBuddy AMS page: re-assigning a just-unassigned spool sometimes showed an empty picker (#1133 follow-up) — Reported live during the rollout of the #1133 picker change: unassigning a Bambu PLA Metal spool from SpoolBuddy and re-opening the picker showed "no spools available" — the just-freed spool was missing. The investigation surfaced four distinct causes that all needed addressing for the picker to stay correct, plus a deployment-side cause that prevented any of the fixes from reaching the live kiosk. (1) Dual cache-key shapes for spool assignments:
`SpoolBuddyAmsPage` keys by `['spool-assignments', selectedPrinterId]` while the shared `AssignSpoolModal` keys by `['spool-assignments']`, and `SpoolBuddyAmsPage.unassignMutation.onSuccess` only invalidated the printerId-keyed one, leaving the modal's unkeyed cache stale. Both invalidate calls (mutation success + modal-close handler) now hit both keys; collapsing the two key shapes into one is intentionally deferred since the dual-key pattern predates this change and shows up in 6 components. (2) The toggle wasn't a real escape hatch: the existing "Show all spools" toggle's label said it would help when a spool was hidden, but it only bypassed the material/profile filter, not the assignment-elsewhere gate. It now bypasses BOTH filters, making it a real escape hatch (the backend's `assign_spool` is upsert-per-(printer/ams/tray), so picking a currently-taken spool just creates a second assignment row — a foot-gun for normal flows but exactly the recovery path this toggle is for). (3) Cross-component cache pollution: `['inventory-spools']` was used as a query key by 5+ components calling `getSpools()` with different `includeArchived` arguments — React Query treated them as one query and served whichever response landed first, so a SpoolBuddy component priming the cache with `getSpools(false)` could hide, from the modal, spools that weren't present in that earlier fetch. The modal now uses its own dedicated key `['inventory-spools', 'assign-modal']` + `getSpools(true)` so it's never at the mercy of someone else's cache state. (4) The empty state had no diagnostic surface: when the picker showed "No spools available" there was no way to tell why — was the fetch empty? Were spools archived? All assigned elsewhere? A small counter `X fetched · Y archived · Z assigned to other slots` now renders in the empty state so future reports of this kind are immediately answerable from a screenshot rather than requiring devtools digging.
(5) The browser holding stale JS forever: `index.html` was being served without `Cache-Control` headers, so Chromium's heuristic-cache freshness window kept the OLD HTML "fresh" for days across browser restarts. The OLD HTML referenced an OLD content-hashed bundle, which was also still in disk cache, so the kiosk kept running pre-deploy JS no matter how many times its Chromium was restarted or cache-cleared — the persistent profile would re-seed the cache from disk on next start. The backend now sends `Cache-Control: no-cache, must-revalidate` on both `/` and the SPA catch-all that serves `index.html`; the service worker `CACHE_NAME` was bumped from `bambuddy-v25` to `bambuddy-v26` so any client that does eventually re-fetch `sw.js` invalidates its CacheStorage; and `spoolbuddy/install/install.sh` now generates the kiosk launcher with `--user-data-dir=/tmp/spoolbuddy-kiosk-userdata` plus a pre-launch `rm -rf` so every kiosk restart starts from a clean slate (the kiosk has no per-user state worth persisting — the auth token is in the URL query, not a stored cookie). 6 net-new tests across `AssignSpoolModal.test.tsx` (toggle escape-hatch behavior) and `tests/integration/test_static_html_cache_headers.py` (Cache-Control directive on root + SPA catch-all routes, no leak onto API routes). Reproduced end-to-end on an H2D + dual AMS + SpoolBuddy display: unassign Bambu PLA Metal Iridium Gold Metallic from slot B4 → reopen picker → spool now visible without browser intervention. - Plate-clear button stayed visible after the API cleared
`awaiting_plate_clear` outside the printer-card click path (#1128) — `awaiting_plate_clear` is a Bambuddy-side flag, not a printer-side one, so toggling it does not produce an MQTT push from the printer. Commit 4e86e8c added the flag to the `printer_status` payload so MQTT-driven broadcasts (e.g. when a print finishes and `on_print_complete` sets the flag to True alongside a state transition to FINISH) carry it correctly. The reverse transition didn't get the same treatment: `POST /printers/{id}/clear-plate` mutated `PrinterManager._awaiting_plate_clear` and persisted to the DB, but emitted no `printer_status` WebSocket update — and the in-`main.py` status-change broadcaster's `status_key` deduplication intentionally excludes Bambuddy-side flags, so even a coincidentally-arriving MQTT push wouldn't reflect the change. The "Mark plate as cleared" button on the printer card disappeared "immediately" after a click only because the React Query cache was being optimistically updated client-side; clearing the flag through any other route (an admin script, a second tab, an automation hitting the endpoint directly, the scheduler at `print_scheduler.py:1844` when dispatching the next queued print) silently left every UI subscriber but the originating tab stale until a coincidental status refresh. Centralised the broadcast in `PrinterManager.set_awaiting_plate_clear` itself rather than at each call site, so every current AND future caller is covered without remembering to wire it up: a new `_broadcast_status_change(printer_id)` private coroutine is scheduled alongside the existing `_persist_awaiting_plate_clear` whenever the flag flips under a running event loop.
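The centralised-broadcast pattern (setter schedules the push itself, short-circuits without a loop) can be modelled in a few lines — all names here are illustrative stand-ins, not the real `PrinterManager`; `ws_send` stands in for `ws_manager.send_printer_status`:

```python
import asyncio

class PrinterManagerSketch:
    """Minimal model of the setter-owns-the-broadcast pattern (illustrative)."""
    def __init__(self, ws_send=None):
        self._awaiting: dict[int, bool] = {}
        self._ws_send = ws_send

    def set_awaiting_plate_clear(self, printer_id: int, value: bool) -> None:
        self._awaiting[printer_id] = value  # the persistence step
        try:
            loop = asyncio.get_running_loop()
        except RuntimeError:
            return  # sync unit-test path: no loop, persistence only
        # every caller -- route handler, scheduler, future code -- is covered
        loop.create_task(self._broadcast_status_change(printer_id))

    async def _broadcast_status_change(self, printer_id: int) -> None:
        if self._ws_send is None:
            return  # mirrors the get_status()-returns-None short-circuit
        try:
            await self._ws_send({"printer_id": printer_id,
                                 "awaiting_plate_clear": self._awaiting[printer_id]})
        except Exception:
            pass  # a WS hiccup must never break the persistence path
```

Putting the broadcast inside the setter is the design point: call-site-by-call-site wiring is exactly what the original bug showed to be unmaintainable.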
The broadcast lazy-imports `ws_manager` to keep `printer_manager.py` clean of application-layer infra at module-import time, short-circuits when `get_status` returns `None` (printer disconnected — the next reconnect produces a fresh push anyway), and swallows `ws_manager.send_printer_status` failures so the persistence path can complete even if the WS layer is temporarily unavailable. The same hook is now in place for any other Bambuddy-side flag that gets added to `printer_state_to_dict` later — they'll all need to broadcast their own changes for the same reason. 8 new regression tests in `test_printer_manager_status_broadcast.py`: schedules-on-True/False/loop-running/no-loop/loop-stopped contracts, `_broadcast_status_change` happy path with payload assertion, skip-when-no-state, swallow-WS-errors, and an end-to-end live-loop test that fires `set_awaiting_plate_clear(False)` and asserts a broadcast lands with `awaiting_plate_clear: false` in the payload. The existing 24 tests in `test_scheduler_clear_plate.py` continue to pass unchanged because they instantiate `PrinterManager()` without attaching a loop (sync unit-test path) — the new `_schedule_async` call short-circuits on the same loop check the existing persistence call already used. Thanks to EdwardChamberlain for the precise root-cause analysis (down to the exact line and the suggested `ws_manager.send_printer_status()` fix). - Uvicorn HTTP access log was missing from
`bambuddy.log`, leaving rogue server-state changes untraceable — When an HTTP endpoint that mutates server state fires unexpectedly (the canonical example: a print spontaneously stopping mid-job because something hit `POST /printers/{id}/print/stop`), the only on-disk trail was Bambuddy's own application log — which by design only records the outbound MQTT publish (`Sent stop print command`), not the inbound HTTP call that triggered it. The result was an unsolvable mystery on 2026-04-26: prints stopping with no preceding Bambuddy-side log line, no way to identify the caller, and the rotated container stdout already gone by the time the support pack was generated. Root cause: uvicorn ships its `access` logger with `propagate=False` by default, so the existing `RotatingFileHandler` attached to root never received those records. `main.py` now attaches the same file handler directly to `logging.getLogger("uvicorn.access")` and applies a new `WriteRequestsOnlyFilter` (`backend/app/core/logging_filters.py`) that keeps `POST`/`PUT`/`PATCH`/`DELETE` and drops `GET`/`HEAD`/`OPTIONS`. Status polls, camera streams, snapshot fetches, websocket upgrades, and CORS preflights account for the bulk of access traffic on a running install and none of them can change server state on their own — dropping them keeps `bambuddy.log` focused on lines that matter for incident triage without churning the 5 MB rotation window faster than it's useful. The filter anchors on the `"` + verb + space pattern uvicorn's format string guarantees, so a literal `"POST"` substring inside a URL (e.g. `GET /api/posts/POST_123`) cannot false-match. The filter lives in its own module so the test suite can import it without pulling in `main.py`'s entire startup graph. 13 new tests in `test_logging_filters.py` cover all four write verbs being kept, GET/HEAD/OPTIONS being dropped, two URL-contains-verb-substring false-match guards, and empty / unrelated-line / idempotency edge cases.
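The quote-anchored matching described above can be sketched as below — an illustrative reconstruction of the idea, not the contents of `backend/app/core/logging_filters.py`:

```python
import logging

WRITE_VERBS = ("POST", "PUT", "PATCH", "DELETE")

class WriteRequestsOnlyFilter(logging.Filter):
    """Keep state-changing requests, drop reads (sketch). Anchoring on the
    opening quote uvicorn's access format guarantees ('"POST /path HTTP/1.1"')
    means a verb substring inside a URL path can never false-match."""
    def filter(self, record: logging.LogRecord) -> bool:
        message = record.getMessage()
        return any(f'"{verb} ' in message for verb in WRITE_VERBS)
```

The anchor is doing the real work: matching `"POST ` (quote, verb, space) instead of the bare substring `POST` is what keeps `GET /api/posts/POST_123` out of the log.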
Output now looks like `2026-04-26 09:23:14,690 INFO [uvicorn.access] 192.168.1.42:54812 - "POST /api/v1/printers/1/print/stop HTTP/1.1" 200` — one `grep "POST.*stop"` away from "who triggered this". - Spool auto-assign hit
`IntegrityError` on Postgres when AMS pushes arrived in quick succession — Bambu MQTT can deliver two `ams_data` push frames for the same printer ~30 ms apart (observed on H2D + dual AMS at K-profile-load / RFID-read boundaries). Each frame triggers `on_ams_change` in `backend/app/main.py`, whose auto-assign block reads `(printer_id, ams_id, tray_id)`, decides "no existing assignment", and INSERTs via `auto_assign_spool` — and the two callbacks raced in their respective sessions, both deciding to insert, with the second commit losing on `spool_assignment_printer_id_ams_id_tray_id_key`. SQLite's WAL serial-write semantics had been silently swallowing the race for ~7 weeks since the spool-assignment feature shipped (latent in `ec82092b`); when optional Postgres support landed in `610431d6` and asyncpg started allowing true concurrent transactions, it surfaced as `WARNING [main] RFID spool auto-assign failed: ... duplicate key value violates unique constraint ...; DETAIL: Key (printer_id, ams_id, tray_id)=(1, 0, 0) already exists`. Added a per-printer `asyncio.Lock` (`_ams_assignment_locks` keyed by `printer_id`) wrapping the auto-assign critical section so two callbacks for the same printer serialise — by the time the second one's session runs `select(SpoolAssignment).where(...)`, the first's commit is visible and the early-return "existing assignment" branch fires instead of a duplicate INSERT. The Spoolman sync block further down in the same callback intentionally stays OUTSIDE the lock — it's network-bound and idempotent, so serialising it would block subsequent AMS callbacks for the duration of a remote roundtrip. Per-printer scope keeps unrelated printers fully parallel: one printer's slow assignment never blocks another's. The auto-unlink block above the assign block isn't wrapped because its DELETE/UPDATE operations don't have the same constraint surface; the assign-block lock is sufficient because the second callback's `select` will see the first's committed state.
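The per-printer lock pattern reduces to a dict of lazily-created locks plus a check-then-insert critical section. The sketch below is a simplification: the in-memory `assigned` set stands in for the `SpoolAssignment` table, and `auto_assign` for the real callback body:

```python
import asyncio
from collections import defaultdict

# One lock per printer: same-printer callbacks serialise,
# unrelated printers stay fully parallel.
_ams_assignment_locks: dict[int, asyncio.Lock] = defaultdict(asyncio.Lock)

assigned: set[tuple[int, int, int]] = set()  # stand-in for the DB table

async def auto_assign(printer_id: int, ams_id: int, tray_id: int) -> bool:
    async with _ams_assignment_locks[printer_id]:
        key = (printer_id, ams_id, tray_id)
        if key in assigned:      # second callback sees the first's "commit"
            return False         # early-return "existing assignment" branch
        await asyncio.sleep(0)   # yield point where the unlocked race lived
        assigned.add(key)        # the INSERT
        return True
```

With the lock, two near-simultaneous callbacks for the same `(printer_id, ams_id, tray_id)` produce exactly one insert; without it, both can pass the membership check before either inserts.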
5 new regression tests in `test_ams_assignment_lock.py` cover same-printer-same-lock identity, different-printers-different-lock isolation, second acquirer waits for first inside the lock (proves serialisation), different printers run truly in parallel under a held lock (proves per-printer scope), and an auto-cleanup fixture that resets the module-level dict between tests so cross-test loop affinity bugs can't surface. - Camera TLS proxy logged "Unhandled exception in client_connected_cb" when ffmpeg dropped its half of the connection mid-stream under uvloop — The bidirectional forwarders inside
`services/camera.py::create_tls_proxy._handle` (the OpenSSL TLS shim added in #661 so Bambu's RTSPS handshake works around Debian GnuTLS hardening) caught `(ConnectionError, OSError, asyncio.CancelledError)` on writes, but uvloop's `UVStream.write` raises a plain `RuntimeError` from `UVHandle._ensure_alive` when the underlying handle is already closed. asyncio's default selector loop reports the same situation as `ConnectionResetError`, so the bug only surfaced on uvloop deployments — and only at the moment the client (typically ffmpeg or a snapshot-capture subprocess) tore down its socket while the proxy was mid-flush. The `RuntimeError` slipped past the except tuple, escaped the forwarder coroutine, and asyncio's `client_connected_cb` task-exception handler logged a noisy multi-line traceback ending in `RuntimeError: unable to perform operation on <TCPTransport closed=True ...>; the handler is closed`. Added `RuntimeError` to the except tuple in both `_fwd_to_server` and `_fwd_to_client` (the latter being the actual frame in the bug report — server→client is where buffered TLS chunks land after the client has gone). The forwarders are intentionally fire-and-forget on tear-down; once either peer drops, both halves of the proxy should exit quietly, and the existing `dst.close()` in the `finally` block already handles cleanup. No functional regression possible — the connection is already dead by the time the exception fires; this only changes whether asyncio logs an "Unhandled exception" trace for it. 2 new regression contract tests in `test_camera_tls_proxy.py` use `inspect.getsource` to assert both forwarder closures' except clauses include `RuntimeError`, since the closures are nested inside `_handle` and extracting them just for testability would require a pure-cosmetic refactor of the proxy. - Background-dispatch reported "Print started successfully" when the printer never actually transitioned (#1134, follow-up to #1042) — The int32
`task_id` modulo fix that was the original root cause of #1042 is verified working in the reporter's most recent support pack (the published `task_id` values are well below 2^31-1 and match the `int(time.time() * 1000) % 2_147_483_647` formula exactly). The remaining residual — "the UI reports despatch success which is slightly misleading" — was a real second bug class: the post-dispatch watchdog `_verify_print_response` in `services/background_dispatch.py` was fire-and-forget. It would correctly detect that the printer never transitioned (e.g. a P1S sitting in `gcode_state: FAILED` with HMS `0300_400C` "task was canceled", a half-broken MQTT session, an SD card error, or any other pre-print blocker), log a `did not respond to print command within 15s` warning, force-reconnect the MQTT session — and then return without touching the dispatch job state. The dispatch job had already been marked successful on the optimistic MQTT-publish-acknowledged path, so the UI carried on showing "Print started successfully" while the printer sat idle. The watchdog now returns a `bool` and is awaited inline by both call sites (`_run_reprint_archive` at line 687, `_run_print_library_file` at line 860); on `False` (timeout) the call sites raise a `RuntimeError` carrying a user-actionable message ("Printer did not acknowledge print command — state still {pre_state}. Check the printer for a pending error (HMS code, plate-clear prompt, SD card) and try again."), which routes through the existing `_mark_job_finished(failed=True, …)` path so the dispatch UI shows a real failure toast and the library-file flow's freshly-created archive is `db.rollback()`'d (no orphan rows for prints that never started). The watchdog now also accepts `subtask_id` advancing past the captured `pre_subtask_id` as a definitive "command landed" signal — same as the queue-side watchdog at `print_scheduler.py:1992` (#1078) — so slow H2D `FINISH→PREPARE` transitions (~50 s observed) don't false-fail when the printer has clearly accepted the `project_file` but is still in FINISH.
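The watchdog's polling contract described above can be sketched as follows. This is an illustrative simplification, not the real `_verify_print_response` signature: `get_status`, `pre_state`, and `pre_subtask_id` are stand-in names, and the real code adds the force-reconnect on timeout:

```python
import asyncio
import time

# Poll until the printer leaves its pre-dispatch state OR its subtask_id
# advances past the one captured before dispatch (the "command landed while
# still in FINISH" case). A transient get_status() -> None (brief MQTT gap)
# keeps polling instead of failing immediately.
async def verify_print_response(get_status, pre_state, pre_subtask_id,
                                timeout=90.0, interval=0.5) -> bool:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status is not None:  # None = transient disconnect: keep waiting
            if status["gcode_state"] != pre_state:
                return True     # state transition: command landed
            if (pre_subtask_id is not None
                    and status.get("subtask_id") not in (None, pre_subtask_id)):
                return True     # subtask advanced, state still pre_state
        await asyncio.sleep(interval)
    return False  # caller raises RuntimeError and marks the job failed
```

The two independent success signals are the key point: either one alone is enough, and a post-dispatch `subtask_id` of `None` deliberately does not count as a change.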
Default timeout raised from 15 s to 90 s to match the queue-side watchdog (#967 / #1078) and give the same headroom on both dispatch paths. Brief mid-window MQTT disconnects (`get_status()` is `None` for one tick) now keep polling instead of immediately failing — matches what the queue watchdog already does and avoids false-failing on transient telemetry gaps. The existing `force_reconnect_stale_session` recovery is preserved on the timeout path. 8 new regression tests in `test_background_dispatch_watchdog.py` cover state-change pickup, subtask_id-change pickup with state still FINISH (the H2D case), neither-signal-changed timeout + force-reconnect, `pre_subtask_id=None` backwards-compat, post-dispatch `subtask_id=None` not counting as a change (avoids false-pass on transient reconnect), brief disconnect not short-circuiting the window, persistent disconnect for the full window returning False, and a contract test that the default timeout is 90 s. Thanks to EdwardChamberlain for the detailed retest with logs that pinpointed the watchdog's no-propagation gap. - Bambu RFID auto-match created duplicate inventory rows for Quick-Add and non-Bambu-branded spools (#918) —
`find_matching_untagged_spool` is supposed to attach a Bambu RFID UID to a pre-existing manually-logged spool of the same material/color so users who log inventory before scanning don't end up with a duplicate row on first AMS read. Two bugs in the matcher meant it almost never worked for the actual reporting workflow: (1) the subtype filter was strict — when the AMS tray reports `tray_sub_brands="PLA Basic"` the matcher required `Spool.subtype = 'Basic'` exactly, so any Quick-Add row (Quick-Add only requires `material`, leaving `subtype=NULL`) was excluded and duplicated on first AMS read. (2) the docstring claimed it filtered on brand but the WHERE clause didn't, so a same-color Polymaker untagged spool would silently acquire a Bambu Lab tray UUID, leaving the user with `brand="Polymaker"` but a Bambu UUID — silent data corruption. Both bugs are addressed in the same query: subtype now prefers an exact match but accepts a NULL-subtype row as fallback (with a `CASE` in `ORDER BY` so an exact match still wins when both exist), and brand is now restricted to "contains 'bambu' (case-insensitive)" or NULL — matching `'Bambu'` (the form's `DEFAULT_BRANDS` value), `'Bambu Lab'` (the catalog value), `'BambuLab'`, `'bambu lab'`, etc., while rejecting any explicitly-named third-party brand. 6 new regression tests in `test_spool_tag_matcher.py` cover the NULL-subtype fallback, exact-subtype-wins-over-NULL ordering, non-Bambu brand rejection, NULL brand acceptance, all four Bambu brand spelling variants, and the full Quick-Add scenario (`brand=NULL` + `subtype=NULL`). The broader UI proposals in #918 (manual override / merge / disambiguation prompt) are intentionally out of scope — once the matcher works, the duplicate-on-RFID complaint that motivated those proposals goes away. Thanks to ViridityCorn for the report and pointing at the right function, and to Arn0uDz for confirming with a 20-spool repro. - Swagger UI link in Settings → API Keys rendered a blank page — the global CSP applied by
`security_headers_middleware` sets `script-src 'self'` and `style-src 'self' 'unsafe-inline' https://fonts.googleapis.com`, which blocked both the inline `<script>` that boots Swagger and the `cdn.jsdelivr.net` URL that ships `swagger-ui-bundle.js`/`swagger-ui.css`. FastAPI's `/docs` page therefore loaded a 1 KB shell with no JS executed, leaving an empty white page. The middleware now emits a docs-scoped CSP for `/docs`, `/redoc`, and `/docs/oauth2-redirect` that allows `https://cdn.jsdelivr.net` for scripts + styles, the FastAPI/Redoc favicon hosts for images, and `'unsafe-inline'` for the Swagger boot script — every other route keeps the unchanged stricter SPA policy. - Camera stream second viewer fails / kicks the first off (#1089) — Most Bambu Lab printers only allow one concurrent camera connection (RTSP socket on X1/H2/P2, port-6000 chamber-image socket on A1/P1), but
`GET /printers/{id}/camera/stream` opened a fresh upstream per viewer keyed on a per-request `stream_id`. Two browser tabs / two dashboard cards → the second viewer either failed silently or kicked the first one off. New `services/camera_fanout.py::MjpegBroadcaster` owns a single upstream per printer and fans pre-formatted MJPEG chunks out to N subscriber queues; new viewers tap the existing connection. When the last subscriber leaves, the upstream stays alive for a 5 s grace window so a tab refresh or "open in new tab" doesn't pay an ffmpeg/RTSP reconnect, then tears down cleanly. Per-subscriber queues are bounded (depth 4) so a slow viewer drops frames for itself rather than blocking the broadcaster — live video, old frames have no value. Stop endpoint and app-shutdown both call into the broadcaster's force-shutdown path so subscribers wake up via an upstream-gone sentinel instead of hanging on `queue.get()`. External-camera path is unchanged (user-supplied MJPEG/RTSP servers handle multi-viewer themselves). The upstream uses a deterministic `{printer_id}-fanout` stream id so every existing prefix-match in `cleanup_orphaned_streams`, `camera_status`, the snapshot fall-through in `main.py`, and the `stop` endpoint continues to find it without changes. Two follow-up correctness fixes from the audit pass: (1) `_stream_start_times[printer_id]` is now set with `setdefault()` so `/camera/status` reports the SHARED upstream's age — previously each new viewer overwrote it, making `stream_uptime` jump backward whenever a second viewer attached; (2) the route now retries `subscribe()` once on `RuntimeError` to close a tiny race where the grace teardown can flip the broadcaster to `stopped` between the registry lookup and the subscribe call (the retry forces the registry to mint a fresh broadcaster). Detach log line shows the post-unsubscribe count returned atomically by `unsubscribe()` — no more two viewers leaving simultaneously both reporting `subscribers=0`.
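The bounded-queue fan-out at the heart of the broadcaster can be sketched as below. This is a deliberately stripped-down illustration (no upstream pump, grace window, or sentinel), with `Broadcaster` as a hypothetical stand-in for `MjpegBroadcaster`:

```python
import asyncio

# One producer, N bounded subscriber queues. A full queue means that
# subscriber is slow: drop the frame for it rather than blocking the
# broadcaster (live video; stale frames have no value).
class Broadcaster:
    def __init__(self, depth: int = 4):
        self._depth = depth
        self._queues: set[asyncio.Queue] = set()

    def subscribe(self) -> asyncio.Queue:
        q = asyncio.Queue(maxsize=self._depth)
        self._queues.add(q)
        return q

    def unsubscribe(self, q: asyncio.Queue) -> int:
        self._queues.discard(q)   # discard() makes double-unsubscribe a no-op
        return len(self._queues)  # post-removal count, returned atomically

    def publish(self, chunk: bytes) -> None:
        for q in self._queues:
            try:
                q.put_nowait(chunk)
            except asyncio.QueueFull:  # slow viewer: drop, don't block
                pass
```

Returning the post-removal count from `unsubscribe()` is what lets the detach log line be race-free: the count is computed in the same call that removes the queue.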
Permission gates unchanged: `/camera/stream` still requires the existing token (minted by `POST /camera/stream-token` with `CAMERA_VIEW`); `/camera/stop` still requires `CAMERA_VIEW`; the broadcaster is internal infra with no FastAPI surface. 13 unit tests for the broadcaster (single subscriber, multi-subscriber-shares-one-pump, slow-subscriber-doesn't-block-fast, grace-window teardown, grace-cancelled-on-rejoin, force-shutdown sentinel, `iter_subscriber` exits on upstream-gone and on client-disconnect, registry replaces stopped broadcasters, `subscribe()` raises on stopped broadcaster, `unsubscribe()` returns post-removal count atomically across concurrent leavers, double-unsubscribe is idempotent, and the route's force-shutdown-then-fresh-subscribe retry path) plus 2 new integration tests on the stop endpoint covering the deterministic fan-out stream id and the `shutdown_broadcaster` wiring. Thanks to swheettaos for the diagnosis and broadcaster sketch. - Uploads to writable external folders silently landed in internal storage (#1112) —
`LibraryFolder` has an `external_readonly` flag, so the model already distinguishes writable from read-only external mounts, but `POST /library/files` rejected only the read-only branch and then unconditionally wrote to `get_library_files_dir()` with a UUID-scoped filename. The resulting `LibraryFile` row linked back to the external folder via `folder_id`, so the file showed up in the Bambuddy UI and could be printed, but the bytes physically lived in `archive/library/files/` and never touched the mount — invisible from any other machine accessing the same NAS/SMB share. New `_resolve_upload_destination()` helper detects writable external targets and writes through to `<external_path>/<filename>` (keeping the original filename so the file is recognisable on the mount), with guards for missing/inaccessible path (400), non-writable mount (400), pre-existing filename on the mount (409 — no silent overwrite; the user is expected to rename and retry, matching how scan treats external files as externally-owned bytes), and a `resolve` + `relative_to` path-traversal guard on the joined destination. DB row now matches what scan produces: `is_external=True`, `file_path=<absolute external path>`, so the existing download / delete / dedupe paths work unchanged (`to_absolute_path` already fast-paths `is_absolute()` inputs, and external-file deletion already bypasses trash and only drops the DB row + internal thumbnail). `POST /library/files/extract-zip` is now rejected against any external folder (not just read-only) with a clear "extract the ZIP on the external mount and run Scan" message — the nested-subfolder creation path would need to `mkdir` on the mount and create matching `is_external=True` `LibraryFolder` rows, which is a separate design round, and the Scan flow already handles that shape.
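The `resolve` + `relative_to` traversal guard mentioned above is a standard `pathlib` idiom; a minimal sketch (the helper name and error handling here are illustrative, not the actual `_resolve_upload_destination()` signature):

```python
from pathlib import Path

# Join the user-supplied filename onto the external mount, resolve any
# symlinks/".." segments, then require the result to still sit under the
# mount. Path.relative_to() raises ValueError if the path escaped.
def resolve_upload_destination(external_path: str, filename: str) -> Path:
    root = Path(external_path).resolve()
    dest = (root / filename).resolve()
    dest.relative_to(root)           # raises ValueError on traversal
    if dest.exists():
        raise FileExistsError(dest)  # maps to 409: no silent overwrite
    return dest
```

Resolving `root` itself before the comparison matters: if the mount path contains symlinks, comparing a resolved `dest` against an unresolved `root` would reject legitimate uploads.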
7 new integration tests cover: bytes land on the mount; DB row has `is_external=True` + absolute `file_path`; filename collision → 409 with prior bytes preserved; vanished external path → 400; path-traversal filename never escapes the external dir; extract-zip into writable external rejected with the Scan hint; root uploads unchanged. - Queue item stuck at "printing" when print failed before reaching RUNNING (#1111) — Dispatching a file sliced for the wrong nozzle size (or any other pre-print error: AMS fault, wrong plate, nozzle not installed, etc.) left the queue item stuck at
`status="printing"` forever, blocking every subsequent pending item for that printer (`check_queue` seeds `busy_printers` from any row in `'printing'` state and skips further dispatches for those printer IDs). Completion detection in `BambuMQTTClient._process_message` required the print to have reached `RUNNING` — either via `_previous_gcode_state == "RUNNING"` or the `_was_running` fallback — but a nozzle-mismatch failure transitions the printer `IDLE → PREPARE → FAILED` without ever entering `RUNNING`, so neither branch matched and `on_print_complete` never fired. The diagnostic log line at `bambu_mqtt.py:2690` ("State is FAILED but completion NOT triggered: prev=PREPARE, was_running=False") confirmed the path. Completion now also fires on `FAILED` from a pre-print state (`PREPARE` or `SLICING`) — restricted to those two so a stale `FAILED` on first connection (`prev=None`) still can't accidentally advance an unrelated queue item. Additionally, when a queue item transitions to `failed` the handler in `main.py` now populates `error_message` from the printer's current HMS error list, rendered via the existing `backend/app/services/hms_errors.py` lookup table (e.g. `[0500_4038] The nozzle diameter in sliced file is not consistent with the current nozzle setting. This file can't be printed.`) — previously `error_message` was left `NULL`, so users saw "failed" with no hint at the cause. 5 new unit tests in `TestPrePrintFailureCompletion` cover PREPARE→FAILED and SLICING→FAILED firing, IDLE→FAILED and initial-FAILED not firing (boot-time safety), and HMS errors being passed through in the callback payload; 6 new tests in `test_hms_error_summary.py` cover the error-message formatter (known-code lookup, unknown-code fallback, multi-error join, malformed-entry tolerance, all-malformed → None, empty → None). Thanks to MartinNYHC for the report. - Tailscale cert-renewal restart silently failed mid-way (follow-up to #1070) — The daily renewal path creates an
`asyncio.Task` to restart VP services with the new cert. Inside that task, `stop_server()`/`stop_proxy()` call `_cancel_restart_task()`, which cancelled+awaited the currently-running task (itself). The self-await raised `RuntimeError`, got caught by the broad exception handler, but the cancel flag was still set — so the next `await` in `stop_server` raised `CancelledError` and aborted the restart partway through. The VP kept running the
Changelog truncated — see the full CHANGELOG.md for the complete list.