Daily Beta Build v0.2.5b1-daily.20260513


Note

This is a daily beta build (2026-05-13). It contains the latest fixes and improvements but may have undiscovered issues.

Docker users: Update by pulling the new image:

docker pull ghcr.io/maziggy/bambuddy:daily

or

docker pull maziggy/bambuddy:daily


**Tip:** Use [Watchtower](https://containrrr.dev/watchtower/) to automatically update when new daily builds are pushed.

Changed

  • Support bundle audited for new features — adds OIDC, 2FA, API keys, library/inventory/queue/maintenance totals, slicer-API reachability, GitHub backup status, per-printer Obico flag; also redacts two settings that were leaking and fixes a reachability-check architecture bug — The support-info.json block in support bundles auto-includes the settings table (with sensitive-key redaction), so settings-stored features like LDAP, Obico globals, integrated slicing URLs, Tailscale, and queue-drying already flowed through. What was missing was anything stored in dedicated tables, which had grown substantially without the bundle being updated. Triaging the recent OIDC / 2FA / group bugs (#1292, #1297) and the X1C slicer investigation involved repeatedly asking reporters for information that should have been in the bundle. New blocks added to _collect_support_info in backend/app/api/routes/support.py: auth — OIDC providers (cleartext name, is_enabled, scopes, email_claim, require_email_verified, auto_create_users, auto_link_existing_accounts, has_default_group, has_icon, linked_user_count; client_id/client_secret/issuer_url stay out of the bundle), 2FA counts (users_with_totp, email_otp_codes_pending), API key counts (total / enabled / expired), long-lived token counts (total / active), group counts (system / custom). library — library_files_total, library_files_in_trash, library_folders_total, external_folders_total, external_links_total, makerworld_imports_total. inventory — spools_internal, k_profiles_internal, k_profiles_spoolman. queue — pending_total, manual_start_pending, oldest_pending_age_seconds (catches items stuck because their target printer is offline or filament doesn't match). maintenance — items_total, items_enabled. integrations.github_backup — configs_total, providers_used dict (github/gitea/forgejo/gitlab), schedule_enabled_count, last_failure_count. integrations.slicer_api — enabled, preferred, bambu_studio_url_set, orcaslicer_url_set, plus an actual 2-second HTTP reachability ping (bambu_studio_reachable, orcaslicer_reachable) to differentiate "URL empty" from "URL misconfigured" from "service down". Per-printer obico_enabled flag added to each entry in printers[], parsed from the obico_enabled_printers setting via a new _parse_obico_enabled_printers helper that tolerates legacy comma-separated formats. Plus three smaller but important fixes caught while testing the bundle against a real instance: (1) mqtt_broker value was leaking — the keyword-substring redaction filter at support.py:850 had no entry that matched the mqtt_broker setting name, so the broker IP (e.g. 192.168.255.16) was appearing in cleartext. Added broker to sensitive_keys. (2) virtual_printer_tailscale_auth_key was leaking — same reason, no keyword in the filter matched _auth_key. Added auth_key to the keyword set, AND added a value-prefix safety net (tskey-) so any FUTURE Tailscale setting with an unexpected name still auto-redacts when its value starts with the Tailscale auth-key prefix. (3) Slicer-API reachability check was always returning null / false even when the slicer was up — two root causes stacked. First, the old code passed info["settings"] (already redacted) into _collect_slicer_api_info, so when bambu_studio_api_url had been redacted to "[REDACTED]", the httpx call hit that literal string and crashed; when the setting was empty, the URL came through as "" and the function returned None.
Second — caught on the next round of testing — even after switching to read directly from Settings.value, the check only looked at the DB row, but the real slicer routes (archives.py:3174-3180, library.py) resolve the URL with a three-level precedence: DB setting → app_settings.bambu_studio_api_url (which reads the BAMBU_STUDIO_API_URL env var) → built-in default http://localhost:3001. Most installations run the sidecar on the default port or via env var, so the DB-only check returned null even when the slicer was up and reachable. The collector now mirrors the route's exact resolution path. The block now also reports bambu_studio_url_set_in_db: bool and bambu_studio_url_source: "db" | "env_or_default" | "unset" so triage can see WHICH layer supplied the URL — separates "user explicitly configured it" from "they're using the default port" without leaking the URL itself. Two regression tests pin both layers: test_reachability_uses_unredacted_url (no "[REDACTED]" ever reaches _check_url_reachable) and test_env_var_fallback_url_pinged_when_db_setting_empty (DB empty + env-var-set URL is actually pinged and reported reachable). All new collectors are wrapped in try/except so a single failure on one block can't blank the rest of the bundle. OIDC provider names are passed in cleartext deliberately — they're login-button labels (PocketID, Authentik, Google, etc.), not secrets, and provider-specific behavior (Azure handles claims differently from Authentik) is exactly the kind of detail that makes SSO bugs triagable in one round-trip instead of three. 13 new unit tests in backend/tests/unit/test_support_helpers.py cover the obico-parser edge cases, slicer-API reachability with mocked httpx (including the "404 = reachable" decision, the un-redacted-URL regression, AND the env-var-fallback regression), auth-info OIDC-cleartext-but-no-secrets contract, the GitHub-backup provider/failure aggregation, and the new mqtt_broker / virtual_printer_tailscale_auth_key / value-prefix-based redactions.
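In sketch form, the redaction shape those last tests pin — the key-substring set shown here is illustrative, not the actual sensitive_keys list in support.py:

```python
# Sketch only: key-substring redaction plus the new value-prefix safety net.
# The substrings below are examples; the real list in support.py is longer.
SENSITIVE_KEY_SUBSTRINGS = ("password", "secret", "token", "api_key", "broker", "auth_key")
SENSITIVE_VALUE_PREFIXES = ("tskey-",)  # Tailscale auth keys, regardless of setting name

def redact_setting(key: str, value):
    key_lower = key.lower()
    if any(fragment in key_lower for fragment in SENSITIVE_KEY_SUBSTRINGS):
        return "[REDACTED]"
    if isinstance(value, str) and value.startswith(SENSITIVE_VALUE_PREFIXES):
        return "[REDACTED]"
    return value
```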
  • Page headers unified across the app: consistent icon size, placement, and subtitle styling (PR #1272 by EdwardChamberlain, continuation of #1060 / #1203) — Nine pages (Archives, FileManager, Inventory, Maintenance, MakerWorld, Profiles, Projects, Settings, Stats) now share one header pattern: w-7 h-7 bambu-green icon next to a text-2xl font-bold title with a text-bambu-gray mt-1 subtitle underneath, matching the look that landed earlier on Print Queue and Printers. FileManager and Projects dropped their rounded bg-bambu-green/10 rounded-xl p-2.5 icon tile in favor of the plain icon to match the rest. The sidebar's "Queue" nav item is renamed to "Print Queue" (and its icon switched from Calendar to ListOrdered) to match the page header it leads to. The Stats page title is renamed Dashboard → Statistics to match the sidebar nav label that's been pointing at it (the page never was the printer dashboard — Printers is — and the mismatch confused new users; closes a small but recurring source of "where's the dashboard?" support questions). All renames flow through every locale: en/de/fr/it/ja/pt-BR/zh-CN/zh-TW updated for nav.queue, stats.title, plus a new inventory.subtitle key ("Manage your spools" + translations) used by the inventory header. Bonus on top of the stated scope: inventory.toolbar.{filters, view, actions} were untranslated English strings in fr/it/ja/pt-BR/zh-CN/zh-TW — Edward translated them properly in the same pass. StatsPage.test.tsx updated to assert the new "Statistics" title. Build clean, all 35 page tests still pass, i18n parity holds at 4753 leaves across all 8 locales. Maintenance page subtitle keeps its red / amber / green severity color on the "X items due · Y warnings · all up to date" line — the colors carry actual at-a-glance status information, not just visual weight.
  • Bambuddy now identifies honestly as itself on every outbound request to Bambu Lab / MakerWorld / Bambu Wiki — proactive alignment with Bambu Lab's 2026-05-12 statement on cloud access, which draws a clear line between modifying AGPL code (allowed) and "impersonating official clients in communication with our cloud infrastructure" (not allowed). Bambuddy was already on the right side of that line on the main authenticated cloud path (User-Agent: Bambuddy/1.0 in bambu_cloud.py:_get_headers), but three secondary call sites were sending browser User-Agents — originally added under the assumption Cloudflare's WAF would block non-browser identification. Tested on 2026-05-12 with curl -H "User-Agent: Bambuddy/1.0" against all three: https://bambulab.com/api/sign-in/tfa returned HTTP 400 with the expected application-level {"code":5,"error":"Login failed"} JSON (no Cloudflare interstitial), https://api.bambulab.com/v1/iot-service/api/slicer/setting returned HTTP 200 with the full 576 KB settings response, https://makerworld.com/api/v1/design-service/* returned the same response shape as a Firefox UA, and https://wiki.bambulab.com/* served identical HTML to a Chrome UA. The browser-impersonation was unnecessary. All four call sites now send Bambuddy/1.0 (+https://github.com/maziggy/bambuddy) consistently — the URL in parens makes the source unambiguous so Bambu can distinguish our traffic from impersonators if they ever audit it. Files: bambu_cloud.py (TOTP/TFA path no longer spoofs Chrome UA + Origin + Referer + Accept-Language headers — Origin/Referer were spoofing bambulab.com origin, which the new comment block specifically calls out as removed), makerworld.py (Firefox UA replaced; the Referer header is kept because MakerWorld's CSRF / origin-check middleware uses it on some endpoints, which is functional, not identity-faking), firmware_check.py (Chrome UA on the public wiki scraper replaced — wiki has no special handling for our UA). Separately: the /v1/iot-service/api/slicer/setting endpoint requires a version query parameter in Bambu Studio's XX.YY.ZZ.WW format (the API returns HTTP 400 "field 'version' is not set" without it, and HTTP 422 "Invalid input parameters" for non-matching formats like bambuddy-1.0), but Bambu's server accepts ANY value within that format — verified the same 576 KB response with version=99.99.99.99. The previous default "02.04.00.70" is an actual Bambu Studio release version (2.4.0.70). The default is now "1.0.0.0" (held in a new _SLICER_API_VERSION module constant in bambu_cloud.py and re-exported into routes/cloud.py so the two route defaults stay in sync), which satisfies the format requirement without claiming to be a specific Bambu Studio build. Unchanged on purpose: version="2.0.0.0" parameters in create_setting / update_setting payloads are the preset's format version (extracted from current.get("version", "2.0.0.0") for updates, line 443) — they describe the preset schema, not the client, and stay as-is. Two regression tests rewritten to lock in the new behavior: test_verify_totp_uses_honest_bambuddy_user_agent (was test_verify_totp_includes_browser_headers — asserts UA starts with Bambuddy/, asserts Mozilla/Chrome/Origin/Referer are not present) and test_sends_honest_bambuddy_user_agent (was test_sends_browser_like_headers — same shape, plus continues to assert the deprecated x-bbl-* Bambu-app identification headers are still gone). All 4598 backend tests pass.
  • Spoolman weight tracking now uses per-print grams for all spools, matching the internal Filament Inventory (#1119, reported by Moskito99) — Spoolman previously had two mutually-exclusive weight paths: AMS remain%×tray_weight auto-sync (default; only worked for Bambu Lab spools with valid RFID tray_weight) and per-print 3MF-grams tracking (only enabled when "Disable AMS Weight Sync" was toggled on). Non-BL spools without RFID fell through both paths — AMS auto-sync had no tray_weight to multiply, and the inventory_remaining fallback was wiped because activating Spoolman deletes the internal spool_assignment table — so Spoolman never saw a weight update for them. The internal Filament Inventory has no such gap: it always uses per-print 3MF grams as the primary path with AMS-remain% delta as fallback, and it works for every spool type. Spoolman now does the same: per-print tracking runs whenever Spoolman is enabled and is the only writer of remaining_weight. AMS auto-sync continues to maintain spool metadata and slot assignments but no longer touches weight (eliminating the double-count that would otherwise occur for BL spools with both paths active). store_print_data (spoolman_tracking.py:159) had its disable_weight_sync early-return removed; the three sync_ams_tray callsites (main.py:1450 auto-sync, spoolman.py:318 per-printer manual, spoolman.py:517 sync-all) now hard-code disable_weight_sync=True. The spoolman_disable_weight_sync setting is now deprecated and a no-op — kept in the DB/UI for backwards compat. Behavioral consequence for existing users on the default flag (False): live AMS-based remaining_weight updates between prints stop happening; weight updates now arrive once per print completion with 3MF gram precision. Regression test in test_spoolman_tracking.py::test_stores_tracking_when_disable_weight_sync_is_false proves the early-return is gone.

Fixed

  • Deleting an SSO user left orphan OIDC/MFA/camera-token rows on SQLite — blocked re-login and leaked auth state (#1285, PR #1295 by netscout2001) — On SQLite (default deployment) the delete_user route left orphan rows in user_oidc_links, user_totp, user_otp_codes, and long_lived_tokens because the project intentionally runs with PRAGMA foreign_keys=OFF, so the ON DELETE CASCADE declared on those tables never fired. Reported symptom: an admin deleted an OIDC-provisioned user, the user tried to re-login via SSO, the OIDC callback found the orphan UserOIDCLink pointing at the (now missing) user, failed to resolve it, and redirected to account_inactive instead of triggering auto_create_users. The same root cause was leaking MFA secrets (user_totp), pending email OTP codes (user_otp_codes), and per-user camera-stream tokens (long_lived_tokens — verify() would happily match by lookup_prefix even after the owning user was gone). PostgreSQL deployments were unaffected — cascade was firing there. Fix: mirrors the existing APIKey cleanup pattern in delete_user (introduced in PR #1182). backend/app/api/routes/users.py:delete_user now explicitly deletes UserOIDCLink, UserTOTP, UserOTPCode, and LongLivedToken rows owned by the user; also folds in PrintBatch.created_by_id cleanup (same ondelete=SET NULL SQLite-FK-off root cause, the SET NULL block at users.py:393-407 was missing it). backend/app/core/database.py:run_migrations gains an idempotent startup orphan-cleanup that sweeps the four auth tables (DELETE FROM <table> WHERE user_id NOT IN (SELECT id FROM users)), wrapped in begin_nested(), logged at INFO only when rows actually drop — so installations carrying orphans from before the fix are healed automatically without manual DB intervention. No-op on Postgres (cascade already fired) and idempotent on SQLite (second run finds nothing). backend/app/api/routes/mfa.py:list_oidc_links returns "<deleted>" for provider_name when link.provider is null instead of raising AttributeError — covers the symmetric edge case where a UserOIDCLink could reference an orphaned provider. Tests: 14 new/extended. test_users_auth_cleanup.py (new): 5 tests verify delete_user removes OIDC/TOTP/OTP/long-lived-token rows individually + combined-cleanup atomically. test_oidc_relogin.py (new): full end-to-end test reproducing the #1285 symptom — mocked IdP, first OIDC login, admin delete, second OIDC login proves auto_create_users fires again (and pinned the regression boundary by confirming the test fails without the fix). test_orphan_auth_cleanup_migration.py (new): 7 tests for per-table cleanup across all four auth tables, idempotency, no-op on fresh install, and survival of rows belonging to real users. test_mfa_api.py adds TestListOidcLinksDefensiveProviderNull for the null-check. test_auth_api.py::test_delete_user extended to assert all five auth-table side effects (UserOIDCLink, UserTOTP, UserOTPCode, APIKey, LongLivedToken). All 13 PR-added tests + 194 tests in extended files pass; ruff clean. Reported and patched by netscout2001.
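A sketch of the startup sweep described above, assuming an async SQLAlchemy session; the table names are from the entry, the function name and transaction handling are simplified:

```python
import logging
from sqlalchemy import text

logger = logging.getLogger(__name__)

# The four auth tables named above; the DELETE shape mirrors the entry's description.
ORPHAN_AUTH_TABLES = ("user_oidc_links", "user_totp", "user_otp_codes", "long_lived_tokens")

async def cleanup_orphaned_auth_rows(session) -> None:
    """Idempotent: a second run finds nothing left to delete."""
    for table in ORPHAN_AUTH_TABLES:
        result = await session.execute(
            text(f"DELETE FROM {table} WHERE user_id NOT IN (SELECT id FROM users)")
        )
        if result.rowcount:
            logger.info("Removed %d orphaned rows from %s", result.rowcount, table)
    await session.commit()
```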
  • Slicer bundle import 400/502/503 errors now land in the log so support bundles tell us why (#1312, reported by hasmar04) — Reporter hit 400 Bad Request from POST /api/v1/slicer/bundles when uploading a Bambu Studio Printer Preset Bundle (.bbscfg); a second contributor had reported the same shape the day before. Same bundle file uploaded fine on Martin's dev machine, which strongly points at sidecar-side differences (image version, write permissions on DATA_PATH/bundles, TrueNAS Docker volume perms, etc.) — but triage was blocked because the sidecar's actual reject reason only made it as far as the FE toast. Bambuddy logged just the uvicorn-access line (POST /api/v1/slicer/bundles HTTP/1.1 400), with no detail in the support bundle. The route at backend/app/api/routes/slicer_presets.py:import_slicer_bundle now emits a logger.warning for each of the three failure shapes: 400 (SlicerInputError) — sidecar's reject string is logged alongside the filename and byte count, so we can see "bundle rejected because manifest.json is missing" in the next support bundle without asking the reporter to copy the toast text. 503 (SlicerApiUnavailableError) — logs the configured sidecar URL plus the exception detail (separates "URL wrong" from "sidecar offline"). 502 (SlicerApiError) — logs filename + byte count + error string, useful when the sidecar's DATA_PATH/bundles write fails (the typical 5xx cause on this path). The 400 case is WARNING rather than INFO deliberately — it's an unexpected end-user-visible failure, not a routine event. Existing test_import_bundle_sidecar_400_passes_through now also asserts the reject reason AND the filename appear in caplog, so the support-bundle-includes-the-diagnostic contract is pinned. Doesn't fix #1312's actual root cause (sidecar-side, still under investigation with reporter) — but the next reporter we get on this code path will produce a bundle that contains the answer.
  • Restarting Bambuddy mid-print triggered plate-check pause + duplicate archive (#1304, reported by kleinwareio) — When a P1S print was in progress and the user updated the Bambuddy container (latest → daily in the report, but the same path fires on any restart), Bambuddy paused the live print with an "Object detected on build plate" warning AND re-archived the in-progress file as a duplicate. Root cause: the print-start detector at backend/app/services/bambu_mqtt.py:2780 gated on self._previous_gcode_state != "RUNNING", which is true whether we just saw IDLE→RUNNING (a real print start) OR we just constructed a fresh BambuMQTTClient and _previous_gcode_state is still its initial None (catch-up push from a printer already running). The fresh-client case fired on_print_start, which downstream ran the plate-detection-and-pause flow at main.py AND the FTP-download-and-archive flow — exactly the two symptoms in the bug report. Fix: added self._previous_gcode_state is not None to the is_new_print guard, so the first push from the printer in a new process lifetime never counts as a state transition into RUNNING. _was_running still flips to True via the unconditional "Track RUNNING state" block at bambu_mqtt.py:2795, so print-completion detection keeps working — only the start callback is suppressed. Three existing tests that asserted on the old (buggy) behavior were updated to seed _previous_gcode_state = "IDLE" first, matching the realistic lifecycle of a print actually starting (Bambuddy has been observing IDLE/FINISH before RUNNING); they now exercise the correct path. New regression test test_first_running_push_after_bambuddy_restart_does_not_fire_print_start pins the contract for the reporter's exact scenario — and asserts that _was_running still becomes True so completion still fires when the print ends. The is_file_change branch was unaffected (it already required _previous_gcode_file is not None, so restart-catch-up never reached it anyway).
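The tightened guard, reduced to a standalone predicate (a sketch — the real check lives inline in bambu_mqtt.py with more surrounding state):

```python
def is_new_print(previous_gcode_state, gcode_state: str) -> bool:
    """A print start is only a real IDLE/FINISH -> RUNNING transition.

    previous_gcode_state is None right after a Bambuddy restart, so the printer's
    first catch-up push must not fire on_print_start (no plate-check pause, no
    duplicate archive); completion tracking is handled separately via _was_running.
    """
    return (
        gcode_state == "RUNNING"
        and previous_gcode_state != "RUNNING"
        and previous_gcode_state is not None
    )
```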
  • Create User form rejected weak passwords with an opaque "HTTP 422" toast (#1303, reported by TrickShotMLG02) — Three independent UX gaps stacked on top of each other. (1) Discoverability: the Create User and Edit User modals showed no hint about the backend's password complexity requirements (min 8 chars + uppercase + lowercase + digit + special character; enforced in backend/app/schemas/auth.py:_validate_password_complexity). Reporter typed an 8-character all-digits password and had no way to know why it failed. (2) Validation mismatch: the frontend's pre-submit check at SettingsPage.tsx was only password.length < 6, accepting passwords the backend would reject — every weak password got bounced after the round-trip instead of getting blocked locally. (3) Error display fragility: when the backend returned a 422 with a Pydantic detail array, the API client's error parser at frontend/src/api/client.ts:107 could fall through to the bare HTTP ${status} fallback if the mapped/filtered detail array ended up empty after stripping the "Value error, " prefix — masking the real reason as just "HTTP 422". Fixes: (1) added a passwordRequirements helper line under both password inputs in Create User / Edit User; (2) extracted checkPasswordComplexity into frontend/src/utils/password.ts, called from handleCreateUser and handleUpdateUser before the API request — it returns the same FIRST failing rule the backend's validator would have flagged (uppercase before lowercase before digit before special, matching _validate_password_complexity's order — fixing one rule shouldn't immediately trip a different message), and the submit button is disabled until all rules pass; (3) the API client now falls back to JSON.stringify(detail) when the mapped array is empty, so a malformed but non-empty 422 detail surfaces SOMETHING informative instead of a bare status code. New translation keys settings.passwordRequirements, settings.toast.passwordNeeds{Uppercase, Lowercase, Digit, Special}, plus the existing passwordTooShort text updated from "6 characters" to "8 characters". English + German fully translated (German reporter's locale); FR/IT/PT-BR translated using straightforward equivalents; JA/ZH-CN/ZH-TW seeded with English for the new complexity messages (existing project flow for new strings). 7 new unit tests in frontend/src/__tests__/utils/password.test.ts pin the validator's contract, including the reporter's exact "12345678" input which now produces a local "Password must contain at least one uppercase letter" toast instead of a 422 round-trip.
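A sketch of the shared complexity contract (rule order matters — the first failing rule is the one surfaced, matching _validate_password_complexity; the exact special-character class here is an assumption):

```python
import re

PASSWORD_RULES = (
    (lambda p: len(p) >= 8, "at least 8 characters"),
    (lambda p: re.search(r"[A-Z]", p), "at least one uppercase letter"),
    (lambda p: re.search(r"[a-z]", p), "at least one lowercase letter"),
    (lambda p: re.search(r"\d", p), "at least one digit"),
    (lambda p: re.search(r"[^A-Za-z0-9]", p), "at least one special character"),
)

def first_failing_rule(password: str):
    """Return the first unmet requirement, or None when the password passes."""
    for check, requirement in PASSWORD_RULES:
        if not check(password):
            return requirement
    return None

# The reporter's input now fails locally instead of round-tripping to a 422:
assert first_failing_rule("12345678") == "at least one uppercase letter"
```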
  • External NAS scan hung forever and never committed subdirectories (#1299, reported by joeferrante) — Linking an external mount with ~1200 subdirectories caused the "Link External Folder" modal to spin until the FE gave up, after which the mount appeared in the sidebar but with no subdirectories, and subsequent scans had no effect either. The reporter's support bundle pinpointed two compounding problems. (1) TypeError: unsupported operand type(s) for /: 'str' and 'str' on every STL — 1,606 instances in the log. generate_stl_thumbnail at stl_thumbnail.py:119 does thumbnails_dir / thumb_filename, which requires a Path, but the external-scan call site at library.py:1256 passed both arguments as str (generate_stl_thumbnail(str(filepath), str(thumb_dir))). Every STL crashed inside the try/except and got logged at WARNING level — visible spam but more importantly wasted work (trimesh.load() and matplotlib setup ran before the failing division). Fix: defensive Path() coerce at the top of generate_stl_thumbnail so the function works regardless of how callers pass args. Regression test test_string_arguments_accepted_without_typeerror pins the contract. (2) Scan ran STL thumbnail generation synchronously inside the HTTP request — even after fix (1), trimesh.load() + matplotlib render is 1–5 seconds per STL; on a NAS with thousands of STLs that's hours of work blocking the modal. Frontend would time out, user would refresh, the HTTP request would be cancelled, db.commit() at library.py:1331 would never run, and no folder/file rows would be committed — which is exactly why "subsequent scans have no effect" (each retry started from scratch and hit the same wall). Fix: scan now defers STL thumbnails to a background task. After db.commit(), the route spawns asyncio.create_task(_backfill_external_stl_thumbnails(folder_ids)) with the full set of folder IDs from folder_cache.values() (covers both pre-existing subfolders AND the ones created during this scan — all_folder_ids is snapshotted before the walk and would have missed the new ones), then returns immediately. The background task opens its own async_session, walks every STL file with thumbnail_path IS NULL in the linked folder tree, generates each thumbnail, and commits per-file so a server restart mid-run only loses the in-flight thumbnail. Survives FE refresh because the task lives in the FastAPI event loop, not the request scope. The reporter's smaller mount (/mnt/NAS_3d_files/3mf_Files, 4 subdirectories) used to work because it completed inside the FE timeout window — with this fix, the 1200-subdir parent mount completes equally fast and thumbnails fill in over the following minutes. Auto-scan after create unchanged: FileManagerPage.tsx:1147-1151 still calls scanExternalFolder immediately after createExternalFolder, which is correct UX — what changed is that the scan response now arrives in seconds instead of timing out.
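The deferred-thumbnail shape in outline (a sketch: async_session and generate_stl_thumbnail are the names used above; the query helper and THUMBNAILS_DIR are placeholders):

```python
import asyncio
from pathlib import Path

def schedule_stl_thumbnail_backfill(folder_ids: list[int]) -> None:
    # Called by the scan route right after db.commit(); the HTTP response returns immediately.
    asyncio.create_task(_backfill_external_stl_thumbnails(folder_ids))

async def _backfill_external_stl_thumbnails(folder_ids: list[int]) -> None:
    async with async_session() as db:  # task-owned session, independent of the request scope
        files = await _stl_files_missing_thumbnails(db, folder_ids)  # placeholder query helper
        for file in files:
            file.thumbnail_path = generate_stl_thumbnail(Path(file.path), THUMBNAILS_DIR)
            await db.commit()  # per-file commit: a restart only loses the in-flight thumbnail
```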
  • MakerWorld "Open Cloud settings" link landed on the wrong page (#1300) — On the MakerWorld page, the "Open Cloud settings" hyperlink shown in the sign-in-required banner (when no Bambu Cloud token is stored) pointed at /settings?tab=cloud. The Settings page has no cloud tab (its tabs are general/plugs/notifications/queue/filament/network/apikeys/virtual-printer/spoolbuddy/failure-detection/users/backup), so the URL-param check at SettingsPage.tsx:179 (validTabs.includes(tabParam) ? tabParam : 'general') silently fell back to the General tab. The Bambu Cloud login UI actually lives on the Profiles page (/profiles), which already defaults its sub-tab to cloud — the same destination the existing backup.cloudLoginRequired i18n string ("Sign in under Profiles → Cloud Profiles…") documents. One-line fix in MakerworldPage.tsx:438: to="/settings?tab=cloud" → to="/profiles". The Profiles page's useState<ProfileTab>('cloud') (line 2822) means no query param is needed — landing on /profiles opens the Cloud sub-tab directly.
  • External-spool prints no longer credit usage to AMS slot 0's Spoolman spool (#1276, reported and diagnosed by ojimpo — regression of #853) — On a single-filament external-spool print (TPU loaded in vir_slot id=254 on the reporter's H2S + AMS 2 Pro), _resolve_global_tray_id in spoolman_tracking.py was crediting the usage to whatever Spoolman spool happened to be linked to AMS slot 0 — a completely unrelated material in the reporter's case. ~48.94 g of TPU was credited to a PLA spool across 4 prints before they noticed. Root cause: BambuStudio encodes virtual tray IDs (254/255) as -1 in the flat ams_mapping array it sends to the printer (a convention already documented in bambu_mqtt.py:start_print()), but the spoolman tracking helper was treating -1 as "unmapped → use position-based default" and the default mapped slot_id=1 → global_tray_id=0. When slot_to_tray[slot_id-1] == -1 and ams_trays contains an external slot (254 or 255), the helper now returns the external tray ID directly, matching the convention start_print() uses on the other side of the pipeline. Prefers 254 over 255 (consistent with single-nozzle tray_now reporting and the vir_slot id=255→254 remap in bambu_mqtt.py:864). Legacy behavior preserved when ams_trays is empty or contains no external slot (callers that don't pass ams_trays keep the position-based fallback). Two regression tests cover the reporter's exact scenario (ams_trays={0,1,2,3,254}, slot_to_tray=[-1] → 254) plus the H2D-deputy case and the fall-through-when-no-external case. Root cause investigation and patch by ojimpo.
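The corrected mapping, reduced to a sketch (the real _resolve_global_tray_id takes more context; argument names here are illustrative):

```python
def resolve_global_tray_id(slot_id: int, slot_to_tray: list[int], ams_trays: set[int]) -> int:
    """BambuStudio encodes virtual trays (254/255) as -1 in ams_mapping, so -1 plus a
    present external slot means 'the external spool', not the position-based default."""
    mapped = slot_to_tray[slot_id - 1] if 0 < slot_id <= len(slot_to_tray) else -1
    if mapped == -1 and ams_trays:
        if 254 in ams_trays:      # prefer 254, matching single-nozzle tray_now reporting
            return 254
        if 255 in ams_trays:
            return 255
    if mapped >= 0:
        return mapped
    return slot_id - 1            # legacy position-based fallback (slot_id=1 -> tray 0)
```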
  • Virtual-printer queue mode now honors workflow default print options (#1235, reported by jc21, root cause and patch by jc21 in #1277) — Prints sent from Bambu Studio (or any slicer) to a VP in print_queue mode arrived in the queue with bed_levelling, flow_cali, vibration_cali, layer_inspect, and timelapse set to the SQLAlchemy column-level defaults, never the user's workflow preferences. The reporter happened to have every workflow default set to the opposite of the column defaults, so prints appeared to have all five options inverted; every queue item required hand-editing before dispatch. The manual POST /print-queue/ endpoint reads these fields off the request body (the frontend pulls them from settings before submitting), but the VP-FTP-receive path at backend/app/services/virtual_printer/manager.py:_add_to_print_queue constructed PrintQueueItem without touching them at all — SQLAlchemy then filled in bed_levelling=True, flow_cali=False, vibration_cali=True, layer_inspect=False, timelapse=False regardless of what was in the DB. Fix reads default_bed_levelling / default_flow_cali / default_vibration_cali / default_layer_inspect / default_timelapse via the existing get_setting() helper (same pattern already used in the function for virtual_printer_archive_name_source) and passes them explicitly to PrintQueueItem. A small _bool_setting() helper maps None → AppSettings schema default, so a fresh install with no workflow page customization behaves identically to before. Regression tests: test_add_to_print_queue_uses_workflow_defaults_from_settings (verifies all five settings flow through with values opposite to the column defaults, matching the reporter's exact scenario) and test_add_to_print_queue_falls_back_to_schema_defaults_when_unset (verifies the no-DB-row path).
  • Linking a Spoolman spool to an AMS-HT slot no longer fails with a CHECK constraint error (#1274, reported by guillaume.houba) — On H2C / H2D, AMS-HT units report ams_id 128+ (one ams_id per unit, single tray). The spoolman_slot_assignments table's ck_ams_id_range constraint only allowed 0-7 (standard AMS) or 255 (external), so the upsert on POST /spoolman/inventory/slot-assignments blew up with IntegrityError: CHECK constraint failed: ck_ams_id_range and the user had no way to link any spool to an AMS-HT slot. Widened the constraint formula to (ams_id >= 0 AND ams_id <= 7) OR (ams_id >= 128 AND ams_id <= 191) OR ams_id = 255 — matches the value range the internal spool_assignment table already accepts and leaves room for up to 64 AMS-HT units (the existing bambu_mqtt/usage-tracker code uses the same 128-based addressing). Updated in the ORM model (models/spoolman_slot_assignment.py) and both the SQLite/Postgres CREATE TABLE DDL in core/database.py. New idempotent migration _migrate_widen_spoolman_slot_ams_id_range: Postgres path runs DROP CONSTRAINT IF EXISTS + ADD CONSTRAINT (no data risk — the new formula is strictly wider than the old); SQLite path detects the stale formula in sqlite_master, table-rebuilds via the standard _v2 rename pattern used elsewhere in this file (_migrate_update_auto_link_constraint at database.py:418), and leaves pre-constraint legacy tables untouched. Tests: test_ams_id_check_admits_ams_ht_range (ORM + DDL formula) and test_assign_accepts_ams_ht_id (end-to-end POST /slot-assignments with ams_id=128).
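The widened formula as it would read as a SQLAlchemy table constraint (the formula is quoted from the entry; the surrounding model declaration is omitted):

```python
from sqlalchemy import CheckConstraint

ck_ams_id_range = CheckConstraint(
    "(ams_id >= 0 AND ams_id <= 7) "          # standard AMS units
    "OR (ams_id >= 128 AND ams_id <= 191) "   # AMS-HT units (one ams_id per unit)
    "OR ams_id = 255",                        # external spool
    name="ck_ams_id_range",
)
```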
  • X2D live camera stream no longer cut by Obico polling / snapshot capture (#1271, reported by clabeuhtegrite) — The MJPEG fan-out broadcaster from #1089 lets multiple browser viewers share one upstream RTSP socket per printer, but internal callers (Obico AI polling at the user's configured obico_poll_interval, and the manual /camera/snapshot endpoint) still opened their own fresh RTSP connections. X1C / H2D / P2S firmware tolerates brief concurrent camera sockets so the gap was invisible there. X2D firmware 01.01.00.00 (and likely future firmwares) enforces strict single-camera-connection more aggressively: every Obico poll (default every 5 s) kicked the live stream, the broadcaster paid the multi-second RTSP handshake to reconnect, and the user saw the stream cut "all the time." New helper try_get_active_buffered_frame(printer_id) at api/routes/camera.py:74 returns the broadcaster's last buffered frame (always <1 s old while any viewer is connected) and None when no viewer is active. Obico's _capture_frame and the /camera/snapshot endpoint check it first and only fall through to a fresh socket when no stream is running — preserving today's behavior when nobody is watching. plate_detection and layer_timelapse deliberately not converted: plate-detection needs guaranteed-fresh frames post-print (false-positive risk if the user already grabbed the print in the same second), and layer-timelapse is for external cameras only. Regression tests: test_camera_snapshot_reuses_buffered_frame_when_stream_active and two TestCaptureFrameSharesBroadcasterUpstream Obico tests.
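How the snapshot path now prefers the broadcaster, in sketch form (the fresh-connection fallback helper name is hypothetical):

```python
async def capture_snapshot(printer_id: int) -> bytes:
    # Reuse the broadcaster's buffered frame (<1 s old while any viewer is connected)
    # so internal callers never open a second RTSP connection alongside a live stream.
    frame = try_get_active_buffered_frame(printer_id)
    if frame is not None:
        return frame
    # No viewer connected: today's behavior, a fresh one-shot camera connection.
    return await open_fresh_camera_snapshot(printer_id)  # hypothetical fallback helper
```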
  • Usage tracker: spool swaps in UNUSED slots mid-print no longer charge the old spool (#1269, reported by maugsburger) — Path 2 of the usage tracker (AMS remain% delta fallback) iterated every AMS tray that had a remain% delta, even slots the print never touched. When a user swapped spools in an unrelated slot during a print, the new spool reports remain=0 (no RFID tag yet) while the snapshot from print-start was 100%, so the fallback charged the originally-assigned spool the full 1000 g. Reporter's case: single-filament print on AMS0-T3 (ams_mapping=[3]), swapped a spool in T1 and another in T2 to refill while the print continued — wound up with Spool 27 consumed 1000.0g (100%) on printer 1 AMS0-T1 and Spool 24 consumed 170.0g (17%) on printer 1 AMS0-T2, neither of which were ever in the print. Fix: the fallback now builds print_used_keys from session.ams_mapping, state.tray_change_log, and session.tray_now_at_start (the three runtime signals telling us which trays were actually part of the print), converts each global tray ID to (ams_id, tray_id) using the standard convention (254/255 → external, ≥128 → AMS-HT, otherwise id // 4, id % 4), and skips fallback for trays whose key is not in that set. When all three signals are empty (legacy edge case: no slicer push, no MQTT tray-change events, no tray_now at start) the legacy "scan every tray" behavior is preserved so we don't regress prints with no metadata. Regression test in test_usage_tracker.py::test_skips_fallback_for_trays_outside_print_mapping reproduces the reporter's exact scenario.
  • Printer card: smart-plug live wattage now rounded to whole watts (#1266, reported by Carter3DP) — The printer card's smart-plug status badge rendered plugStatus.energy.power raw, so plugs that report fractional watts (Kauf PLF12 via ESPHome / Home Assistant in the reporter's case, but any MQTT plug pushing a float can hit this) showed values like 14.123456789012 W and overflowed the card width. SmartPlugCard and SwitchbarPopover already wrapped the same field in Math.round(); only the printer-card badge was missing the round. Single-line fix at frontend/src/pages/PrintersPage.tsx:4569.

Added

  • Build-plate icon on archive cards + uniform printer/model line (#1253, reported by tonygauderman) — Archive cards now show an OrcaSlicer-style bed icon in the printer/model row indicating which build plate the print was sliced for (Cool / Cool SuperTack / Engineering / High Temp / Textured PEI / Smooth PEI), with the full plate name in the hover tooltip. Closes the gap where users had to remember which plate matched a re-print or open the source 3MF in a slicer just to read the bed setting. Card row also unified: archives with a real Bambuddy-printer association used to render as H2D-1 GCODE … while slicer-only uploads rendered as Sliced for X1C GCODE … — same line, two different shapes. Dropped the Sliced for prefix so both render as a uniform <name-or-model> [bed-icon] GCODE <hash> row, which scans the same regardless of provenance. Backend: new bed_type column on print_archives (idempotent ALTER TABLE migration; SQLite + Postgres safe), populated from curr_bed_type in Metadata/slice_info.config (per-plate metadata, the authoritative source — that's the bed type that actually got sent to the printer for the exported plate) with a fallback to Metadata/project_settings.config's top-level curr_bed_type for older 3MF shapes. Wired through both code paths that produce archive responses: archive_to_response() (the hand-rolled dict converter at archives.py:97 — easy to miss, the schema-only change is silently dropped by Pydantic since the route bypasses from_attributes) and the /rescan endpoint, so old archives can be re-parsed by the user via the existing per-archive Rescan button. Newly-ingested archives get the value automatically. Backfill script: scripts/backfill_archive_bed_type.py (with --dry-run) re-opens every NULL archive's 3MF on disk and populates the column — opt-in for users who want their entire history covered without waiting for natural turnover. Auto-loads .env from project root before importing backend modules (since core/config.py:52 reads DATABASE_URL from os.environ at import time, not from pydantic-settings at Settings() time), prints the resolved DB URL with credentials redacted on every run so operators can confirm they're hitting the intended database (Postgres / SQLite — Bambuddy supports both per #1219's DATABASE_URL pathway), and calls init_db() itself before querying so the migration applies even if the script is run against a database the backend hasn't touched yet. Frontend: 6 OrcaSlicer-style PNGs ship in frontend/public/img/bed/ (under /img/ because that path was already statically mounted at main.py:5244; a top-level /bed-icons/ path was attempted first but hit the SPA catch-all and returned index.html as text/html, so the browser rendered nothing). New utils/bedType.ts maps slicer strings (case-insensitive) to icon + human-readable label; covers Bambu Studio and OrcaSlicer's diverging spellings for the same physical plate (e.g. Cool Plate / PC Plate, Cool Plate (SuperTack) / Supertack Plate / Bambu Cool Plate SuperTack). Renders on both card-grid view and list view in ArchivesPage.tsx. Unmapped or NULL bed_type simply omits the icon, so cards stay clean for archives created before this change. Note on icon mapping: bed_pei.png → Textured PEI, bed_pei_cool.png → Smooth PEI is a best-guess from the OrcaSlicer asset names — swap the two paths in bedType.ts if a future user reports the icons reversed for their plate.
  • Spool labels: new 40×30 mm template, hex colour code, bolder brand line (#809 follow-up, requested by oliboehm) — Three small enhancements to the spool-label printer rolled into one change. (1) New box_40x30 template — 40×30 mm single label, common DK/Brother roll size. Added to _SINGLE_LABEL_SIZES_MM in backend/app/services/label_renderer.py and to the request body's Literal[...] enum in backend/app/api/routes/labels.py; height is ≥ 20 mm so it routes through the existing roomy layout (swatch + QR + full text column). (2) Colour hex code on every label — new _hex_code_label() helper formats data.rgba as #RRGGBB (alpha-stripped, uppercased to match the inventory UI's colour-picker convention) and returns "" for missing/malformed input so the caller skips drawing instead of throwing. Rendered as a small line under the material/subtype line in the roomy layout, and as a third line above the spool ID in the tight (AMS) layout — useful when several near-identical material/colour spools sit next to each other in the AMS or on a shelf. (3) Brand line bigger + bold — the brand on every label now renders in Helvetica-Bold instead of Helvetica regular, with size bumped 5.5pt → 6.5pt on the tight layout and 7pt → 8pt on the roomy layout, so it's the most legible non-ID field at arm's length. Wiring: SpoolLabelTemplate union in frontend/src/api/client.ts extended with 'box_40x30'; LabelTemplatePickerModal gets a new TEMPLATE_OPTIONS entry for it; inventory.labels.templates.box40x30.{label,hint} keys added across all 8 locales (en + de fully translated, fr/it/ja/pt-BR/zh-CN/zh-TW translated to native, with the existing per-key fallback in the modal as a safety net). The 5-template grid still wraps to 2 columns on small viewports per #1230's fix; modal regression test was widened from 4 to 5 template buttons. Tests: ALL_TEMPLATES parametrize tuple in test_label_renderer.py extended with box_40x30 so all 7 generic invariants (PDF header, empty-input, multi-colour, missing-fields, malformed-rgba, long strings, sheet pagination) cover the new template; new test_hex_color_code_rendered_when_rgba_set (asserts #F5E6D3 appears in the uncompressed PDF for both 40×30 and 62×29), test_hex_color_code_skipped_when_rgba_invalid (regex pin: no #RRGGBB shape on the label when rgba is malformed, except the spool ID's #42), and test_brand_rendered_in_bold_per_809_followup (asserts Helvetica-Bold font reference is in the PDF — caught a regression if the brand line ever reverts to regular weight). All 33 backend tests + 15 frontend modal tests pass; ruff clean.
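A minimal sketch of the hex-code helper's contract (the real renderer also handles placement on the label; only the formatting rule is shown):

```python
def _hex_code_label(rgba: str | None) -> str:
    """'#RRGGBB' (alpha stripped, uppercased) for a valid RGBA string, '' otherwise,
    so the caller skips drawing the line instead of raising."""
    if not rgba:
        return ""
    value = rgba.lstrip("#").strip()
    if len(value) not in (6, 8):
        return ""
    try:
        int(value[:6], 16)          # reject non-hex input like "not-a-color"
    except ValueError:
        return ""
    return f"#{value[:6].upper()}"
```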
  • Copy spool — duplicate any spool's settings into a fresh inventory row in two clicks (#1234, PR #1246 by MiguelAngelLV) — Adds a copy button (Copy icon) next to the existing edit button on every spool in the inventory page across all three views (table row, card, grouped table inner row). Clicking it opens the existing SpoolFormModal pre-filled with every field from the source spool — material, brand, color, slicer preset, label/core/cost, K-profiles, all of it — except weight_used which is reset to 0 (since the new spool starts full) and the RFID identity fields (tag_uid, tray_uuid, tag_type, data_origin) which aren't part of the form payload anyway, so the new spool is its own physical roll. Save calls api.createSpool (or api.createSpoolmanInventorySpool in Spoolman mode — both inherit the dispatch routing for free). Closes the long-running gap where users with many near-identical spools (e.g. five 1 kg PETG-CF rolls bought in a single order) had to re-enter every field from scratch on each one. Implementation shape: SpoolFormModalProps.mode: 'create' | 'edit' | 'copy' (exported as SpoolFormMode) replaces the previous isEditing = !!spool heuristic — every existing call site in InventoryPage.tsx was updated to pass the explicit mode, and the modal's title / submit-button label / weight-reset gate / submit-route branching all key on mode directly. The onCopy callback is optional on SpoolCard, SpoolTableRow, and SpoolTableGroup (matches the existing onPrintLabel? pattern), so the button is conditionally rendered and other consumers of those subcomponents don't get a copy affordance forced on them. Card-view and table-row buttons stop click propagation so clicking copy doesn't also fire the parent row's edit handler. Quick Add interaction: the Quick Add toggle is gated mode === 'create' (was !isEditing), so it stays out of copy mode — otherwise a user could enable Quick Add and bump quantity to N under the singular "Copy Spool" title and silently bulk-create N copies via bulkCreateMutation. i18n: new inventory.copySpool key across all 8 locales (en + de translated, fr/it/ja/pt-BR/zh-CN/zh-TW seeded with English fallback per project flow). Tests: 3 new in SpoolFormModal.test.tsx (SpoolFormModal copy mode describe block — title shows "Copy Spool", save calls createSpool not updateSpool, weight_used reset to 0 in the create payload when copying a spool with non-zero usage), 2 new in InventoryPageCopyButton.test.tsx (table-row copy button click → "Copy Spool" heading, cards-view copy button click → same heading after switching view modes) — guards against the three call sites drifting apart. Existing SpoolFormBulk.test.tsx and SpoolFormModal.test.tsx renders that omitted the mode prop were updated with the explicit mode="create" so the tightened Quick Add gate doesn't hide the toggle from them. Both InventoryPageCopyButton.test.tsx and InventoryPageDeepLink.test.tsx gained MSW handlers for the modal's open-time fetches (/api/v1/cloud/status, /api/v1/cloud/local-presets, /api/v1/cloud/builtin-filaments, /api/v1/inventory/color-catalog, /api/v1/inventory/spool-catalog, /api/v1/printers/) — without them MSW passes through to the real network, ECONNREFUSEs, and the rejected fetch resolves after the test environment is torn down, surfacing as a flaky "window is not defined" unhandled rejection in the modal's setLoadingCloudPresets(false) finally block (pre-existing flake hit ~1 in 3 full-suite runs at PR head).

Fixed

  • .bbscfg Printer Preset Bundle import was broken for every user since launch — sidecar compose file pointed at the wrong branch (#1312, reported by hasmar04, confirmed by netscout2001) — slicer-api/docker-compose.yml's build.context pointed at https://github.com/maziggy/orca-slicer-api.git#bambuddy/profile-resolver, but the POST /profiles/bundle endpoint plus the uploadBundle multer middleware were only ever committed to a sibling branch bambuddy/bundle-import (commit a3172c5, 2026-05-06). Every user who ran the documented docker compose up -d got a sidecar without the bundle endpoint — their POST /profiles/bundle fell through to the generic POST /profiles/:category handler, which either rejected with "Name cannot be empty" (no name form field sent) or "Invalid file type. Only JSON files are allowed." (the JSON multer filter rejecting the .bbscfg). Fix: bambuddy/bundle-import fast-forward-merged into bambuddy/profile-resolver in the orca-slicer-api repo and pushed, so the compose file's existing branch ref now points at the right commit. No Bambuddy code change. Existing users rebuild with cd slicer-api/ && docker compose --profile bambu build --no-cache --pull && docker compose --profile bambu up -d — --pull is the key flag because BuildKit caches the git fetch context separately from layer caches, so --no-cache alone silently reuses the old branch checkout. New users on 0.2.5+ are unaffected. Lesson on diagnosis flow: the wrong root cause was reported twice during triage before the actual branch mismatch was caught — first as "build a week ago, before the bundle endpoint existed" (correct claim for the wrong branch), then as "rebuild with --pull" (still hit the same bug because the compose file pointed at the branch that never got the work). The reporter's third round of logs — the multer "Only JSON files are allowed" error string from upload.js:17, which only matches uploadJson not uploadBundle — was the smoking gun that no amount of rebuilding would help because the wired-up branch genuinely lacked the endpoint.

Changed

  • Support bundle records slicer-API CLI versions; wiki sidecar-update docs hardened (#1312 follow-up) — Triage scaffolding added during investigation of the bundle-import bug above. Useful independent of that fix: the next time a user reports a sidecar-related failure, the support bundle will identify which slicer CLI version is actually running without needing a manual curl /health. Backend: new _fetch_slicer_health(url) helper in backend/app/api/routes/support.py does a 2-second GET on <sidecar>/health, parses the JSON, and walks every non-dataPath key under checks looking for a version field — needed because the wrapper labels both bambu-studio-api and orca-slicer-api as checks.orcaslicer regardless of which CLI is actually bundled (cosmetic wrapper bug, not Bambuddy's). _collect_slicer_api_info now calls it instead of the bare reachability ping and adds two new fields per side to the integrations block: bambu_studio_version, orcaslicer_version. Captures "unknown" verbatim when the wrapper's --help regex didn't match (which is itself diagnostic). Behavior preserved on error paths: empty URL returns None, connection failure returns {reachable: False, version: None}, malformed/non-200 returns {reachable: True, version: None} so the reviewer can separate network failure from misconfiguration. Trailing-slash in the configured URL is stripped before appending /health. Tests: 9 new in TestFetchSlicerHealth; existing TestCollectSlicerApiInfo tests updated to patch _fetch_slicer_health and assert the new _version fields. All 62 helper tests pass; ruff clean. Docs: bambuddy-wiki/docs/features/slicer-api.md got four additions. (1) Quick Start gains a warning callout that the Compose file builds from a branch tip and a plain docker compose up -d will keep using the originally-built image. (2) The Updating section now recommends docker compose --profile bambu build --no-cache --pull (both flags) and explains why both matter. (3) New troubleshooting entry for the "Name cannot be empty" / "Only JSON files are allowed" .bbscfg import error. (4) New troubleshooting entry for the orphan-container conflict (container name "/bambu-studio-api" is already in use) that hits users whose existing containers were built from an older compose file with un-prefixed image tags. The pre-existing /health version: "unknown" entry also got a note clarifying that the wrapper mislabels the checks field as orcaslicer for both sidecars — both are cosmetic, not stale-image indicators.
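A sketch of the health probe described above (error-path return shapes are taken from the entry; the JSON walk assumes the wrapper's checks layout):

```python
import httpx

async def _fetch_slicer_health(url: str):
    if not url:
        return None
    health_url = url.rstrip("/") + "/health"
    try:
        async with httpx.AsyncClient(timeout=2.0) as client:
            response = await client.get(health_url)
    except httpx.HTTPError:
        return {"reachable": False, "version": None}   # network failure
    if response.status_code != 200:
        return {"reachable": True, "version": None}    # reachable but misbehaving
    try:
        payload = response.json()
    except ValueError:
        return {"reachable": True, "version": None}
    version = None
    for name, check in (payload.get("checks") or {}).items():
        if name != "dataPath" and isinstance(check, dict) and check.get("version"):
            version = check["version"]                  # may be the literal "unknown"
            break
    return {"reachable": True, "version": version}
```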

Fixed

  • LDAP settings: "Advanced" collapsible section header was always rendering in English regardless of UI language (#1297, reported by Fuechslein) — LDAPSettings.tsx:352 calls t('settings.ldap.advanced') || 'Advanced', but the translation key was never defined in any locale file. The || 'Advanced' fallback kicked in and the header rendered as English in every language. Added settings.ldap.advanced to all 8 locales: Advanced (en), Erweitert (de), Avancé (fr), Avanzate (it), 詳細設定 (ja), Avançado (pt-BR), 高级 (zh-CN), 進階 (zh-TW). No component change needed — the fallback now never triggers because the key resolves properly. i18n parity check holds at 4754 leaves across all locales.
  • Clear Plate button required granting Settings > Read Settings, leaking the entire Settings UI to non-admin users (#1293, reported by Tivonfeng) — On the Printers page, the "Clear Plate" button is gated on the global require_plate_clear setting being true. The page reads that value from GET /api/v1/settings, which requires Permission.SETTINGS_READ. A user with printers:clear_plate but no settings:read got a 403 on the settings fetch, the frontend's settings query stayed undefined, requirePlateClear evaluated to false, and the button never rendered. The reporter's workaround — also grant settings:read — works but also adds the Settings nav item to the sidebar and grants visibility of SMTP/LDAP/MQTT credentials and every other setting in the DB, which is exactly the leak they were trying to avoid. Fix: new GET /api/v1/settings/ui-preferences endpoint that returns a curated dict of UI rendering fields without requiring SETTINGS_READ — matches the existing GET /settings/default-sidebar-order precedent (intentionally unauthenticated for the same reason — UI rendering needs values that aren't admin-gated). Exposed fields are explicitly opt-in via a _UI_PREFERENCE_FIELDS tuple in routes/settings.py: require_plate_clear, check_printer_firmware, camera_view_mode, time_format, date_format, drying_presets, ams_humidity_good, ams_humidity_fair, ams_temp_good, ams_temp_fair, bed_cooled_threshold. Anything not on that list — including every sensitive field — is never returned, no matter what's in the DB. PrintersPage now fetches from /settings/ui-preferences via a new api.getUiPreferences() client method; the cache key changed from ['settings'] to ['ui-preferences'] so it doesn't collide with the admin-gated full settings query other admin pages still use. As a side-effect, the page's 4 other settings-driven UI features (drying presets, camera view mode, time format display, firmware-check banner) also stop silently degrading for non-admin users — they all live on the same fetch. Regression tests in backend/tests/integration/test_settings_ui_preferences.py pin: endpoint returns 200 without SETTINGS_READ, response includes require_plate_clear as a bool, field set exactly matches _UI_PREFERENCE_FIELDS (so accidentally adding a sensitive field there fails the test), and a "secret canary" test that seeds 23 sensitive keys with recognizable values and asserts none of them appear in either the response keys or the response body. Frontend types in client.ts tighten camera_view_mode and time_format to the same literal unions as AppSettings so the new endpoint slots into PrinterCard's prop types without casts.
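The allow-list shape in sketch form (field names are from the entry; the settings loader and router wiring are simplified placeholders):

```python
from fastapi import APIRouter, Depends

router = APIRouter()

_UI_PREFERENCE_FIELDS = (
    "require_plate_clear", "check_printer_firmware", "camera_view_mode",
    "time_format", "date_format", "drying_presets",
    "ams_humidity_good", "ams_humidity_fair", "ams_temp_good", "ams_temp_fair",
    "bed_cooled_threshold",
)

@router.get("/settings/ui-preferences")
async def get_ui_preferences(db=Depends(get_db)) -> dict:
    # No SETTINGS_READ dependency by design: only allow-listed rendering fields
    # are ever serialized, no matter what else is in the settings table.
    settings = await load_settings_as_dict(db)       # placeholder loader
    return {key: settings.get(key) for key in _UI_PREFERENCE_FIELDS}
```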
  • LDAP user logins wiped manually-assigned BamBuddy groups (#1292, reported by Fuechslein) — When an admin assigned an LDAP-authenticated user a BamBuddy group that wasn't mapped from LDAP (e.g. "Administrators" while the LDAP mapping only covered "Users"), the assignment vanished on the user's next login. The reporter's observation matched the code exactly: assigning a group while the user was logged in held until the next login because user.groups was just mutated in memory; on next login, _sync_ldap_user in backend/app/api/routes/auth.py:1187 rebuilt user.groups from LDAP state alone and blew away the manual assignment. The design intent (LDAP truth must propagate, including revocation) was correct, but the implementation was over-broad — every BamBuddy group got wiped, not just LDAP-mapped ones. Fix: _sync_ldap_user now computes the set of "LDAP-managed" BamBuddy group names = values of ldap_group_mapping ∪ {ldap_default_group}. Groups inside that set are still rebuilt from LDAP truth on each login (so revocation works). Groups outside that set are treated as manual admin assignments and preserved. The partition happens via a list comprehension over user.groups; no schema or DDL change. Edge case explicitly tested: a manual assignment to a group that IS in the LDAP mapping is still overridden by LDAP state — once an assignment is in the user_groups table you can't tell manual-but-mapped from LDAP-derived, so LDAP wins for any group it has authority over. Regression tests in backend/tests/integration/test_ldap_group_sync.py cover: manual group survives login (the reporter's exact scenario), revocation still propagates for LDAP-managed groups, default_group persists across empty-LDAP logins, manual assignment to a managed group is overridden, and the realistic mixed case where a user has multiple manual + multiple LDAP groups at once.
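The partition, written as a pure function for clarity (a sketch — the real _sync_ldap_user mutates user.groups in place):

```python
def merge_groups_on_ldap_login(
    current_group_names: list[str],
    ldap_derived_group_names: list[str],
    ldap_group_mapping: dict[str, str],
    ldap_default_group: str | None,
) -> list[str]:
    # Groups LDAP has authority over: every mapped target plus the default group.
    ldap_managed = set(ldap_group_mapping.values())
    if ldap_default_group:
        ldap_managed.add(ldap_default_group)
    # Manual admin assignments (outside LDAP's authority) are preserved.
    manual = [name for name in current_group_names if name not in ldap_managed]
    # LDAP-managed groups are rebuilt from LDAP truth, so revocation still propagates.
    from_ldap = [name for name in ldap_derived_group_names if name in ldap_managed]
    return manual + from_ldap
```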
  • Internal inventory: storage_location field was silently dropped on save and never shown in the table (#1291, reported by needo37) — The storage_location column existed on the Spool ORM model (backend/app/models/spool.py:57) but was missing from the Pydantic schemas in backend/app/schemas/spool.py (SpoolBase, SpoolUpdate, and by extension SpoolResponse). Pydantic silently strips unknown fields, so PATCH writes to /inventory/spools/{id} reached the update route's model_dump(exclude_unset=True) already missing the field, the setattr loop never touched the DB column, and GET responses left it out — the inventory table always showed "—" in the Storage Location column even when the user had typed and saved a value. Only the internal inventory was affected; Spoolman mode worked because it goes through a separate proxy backend with its own schema. Fix is two added fields in schemas/spool.py: one on SpoolBase (covers SpoolCreate + SpoolResponse via inheritance) and one on SpoolUpdate (standalone). Both constrained to max_length=255 to match the DB column's String(255). No route changes needed — the update handler at inventory.py:961 already uses the generic dump-then-setattr pattern that picks up any new schema field automatically. Note on UX intent: storage_location is the user-defined free-text label ("Drybox #1", "Top shelf"), distinct from location which is the AMS slot assignment ("AMS-A slot 3") — keeping both is the right call. Regression tests in test_spool_schemas_storage_location.py lock in: create/update accept the field, the response surfaces it, explicit-null clears via exclude_unset round-trip, omitted-on-PATCH is left untouched (doesn't accidentally clear), and max_length=255 is enforced (so the API returns a clean 422 instead of a SQLAlchemy column-length error).
  • Archives page didn't auto-refresh when a slicer sent a print to a Virtual Printer — the new card only appeared after switching tabs (#1282, reported by kleinwareio) — Real-printer prints broadcast archive_created over the WebSocket from main.py's MQTT print_start handler, and the Archives page listens for that event in frontend/src/hooks/useWebSocket.ts:241 to invalidate its react-query cache. The VP file-receive paths in backend/app/services/virtual_printer/manager.py (_archive_file for immediate mode and _add_to_print_queue for queue mode) created the archive and committed it to the DB but never broadcast the event — so the page stayed stale until the user clicked another tab and back, which triggered a refetch on focus. Fix: factored a small _broadcast_archive_created(archive) helper onto VirtualPrinterInstance that imports ws_manager lazily (matches the file's existing late-import convention for archive/queue imports) and emits the same {id, printer_id, filename, print_name, status} payload shape main.py uses. Called from both VP paths immediately after the archive is logged (_archive_file) and after the queue item is committed (_add_to_print_queue). Broadcast failures are swallowed at debug level so a transient WebSocket issue can't break the file-receive flow. The review mode path (_queue_file) is intentionally untouched — it creates a PendingUpload, not a PrintArchive, and renders on a different page. Tests: test_archive_file_broadcasts_archive_created and test_add_to_print_queue_broadcasts_archive_created patch ws_manager.send_archive_created and assert it's called once with the right payload shape. Affects: every Bambuddy install using a VP in immediate or print_queue mode; review mode and proxy mode are unaffected.
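The helper in sketch form (payload shape and method name from the entry; the import path and logger are illustrative):

```python
import logging

logger = logging.getLogger(__name__)

async def _broadcast_archive_created(self, archive) -> None:
    try:
        from app.main import ws_manager  # lazy import; actual module path may differ
        await ws_manager.send_archive_created({
            "id": archive.id,
            "printer_id": archive.printer_id,
            "filename": archive.filename,
            "print_name": archive.print_name,
            "status": archive.status,
        })
    except Exception:
        # A transient WebSocket problem must never break the file-receive flow.
        logger.debug("archive_created broadcast failed", exc_info=True)
```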
  • Virtual Printer wedged the slicer at "Downloading...(0%)" when a user clicked Print (instead of Send) against a non-proxy-mode VP, and blocked the next dispatch with "The printer is busy with another print job" (#1280, reported by kleinwareio) — Bambuddy's VP supports two distinct dispatch flows from the slicer: Send (file upload only — the path queue / immediate / review modes are designed for) and Print (file upload + start-print, intended for proxy mode where there's a real printer behind the VP). The reporter's setup was queue mode but they clicked Print, which is unsupported there. The user-facing symptom was wedging instead of a clean error: the FTP upload completed, the file landed in Bambuddy's queue, but Orca's UI froze at Downloading...(0%) and the next attempt was blocked. Cause: the VP's simulated state machine, in backend/app/services/virtual_printer/manager.py::on_file_received, jumped PREPARE → IDLE directly after the FTP upload completed. The Send flow doesn't watch the post-upload state, so Send users never noticed. The Print flow watches the gcode_state cycle expecting PREPARE → RUNNING → FINISH and only releases its in-flight-job lock when it sees FINISH (or FAILED). Going PREPARE → IDLE looks to the Print-flow slicer like "printer abandoned my job without confirming completion" → UI keeps the prior job pinned → next dispatch is blocked. gcode_file_prepare_percent also stayed at "0" for the whole upload window, which is why Orca's "Downloading X%" progress bar never advanced. Fix: on_file_received now transitions PREPARE → FINISH with prepare_percent="100" and the just-completed filename. The VP's 1-Hz periodic status push (mqtt_server.py:363) broadcasts the new state to every connected slicer within a second, so Orca clears its lock and the next dispatch goes through. The transition is gated to .3mf uploads only — auxiliary uploads (printer-side .gcode blobs etc.) leave the visible state alone. The VP now treats Print and Send identically in non-proxy modes — Print is silently handled as "file received, treat as completed" instead of wedging the slicer, while for Send the change is effectively a no-op because Send doesn't watch the post-upload state anyway. Tests: 2 new tests in backend/tests/unit/services/test_virtual_printer.py pin (1) the FINISH transition with the correct filename + prepare_percent="100", and (2) the non-3MF guard. Affects every VP mode that isn't proxy (immediate, print_queue, review) on every slicer using the Print flow (BambuStudio + OrcaSlicer in LAN-mode).
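A simplified sketch of the corrected post-upload transition, assuming the state field names quoted above; it is not the actual on_file_received implementation:

```python
# Sketch of the gated PREPARE -> FINISH transition described in this entry.
class VirtualPrinterState:
    def __init__(self) -> None:
        self.gcode_state = "IDLE"
        self.gcode_file = ""
        self.gcode_file_prepare_percent = "0"

    def on_file_received(self, filename: str) -> None:
        # Only a .3mf upload represents a dispatched job; auxiliary uploads
        # (printer-side .gcode blobs etc.) leave the visible state alone.
        if not filename.lower().endswith(".3mf"):
            return
        # Report the job as fully prepared and finished so a Print-flow slicer
        # releases its in-flight-job lock on the next 1-Hz status push.
        self.gcode_file = filename
        self.gcode_file_prepare_percent = "100"
        self.gcode_state = "FINISH"


state = VirtualPrinterState()
state.on_file_received("benchy.gcode.3mf")
print(state.gcode_state, state.gcode_file_prepare_percent)  # FINISH 100
```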
  • External-spool filament selection silently rolled back: every "Generic PLA" / preset change for the external slot looked applied in the UI but failed on the printer, and the next print threw "no mapping" (#1279, reported by kleinwareio) — Repro: P1S, no AMS, vt_tray active. User picks any filament for the external slot via Bambuddy. The UI looked normal, but the printer's MQTT response was {"command":"ams_filament_setting", "result":"fail", "reason":"error string"}. The companion extrusion_cali_sel command succeeded, so the K-profile stuck but the filament identity didn't — and the next print therefore had nothing to map to. Cause: backend/app/services/bambu_mqtt.py::ams_set_filament_setting encoded the single-external-spool case as {ams_id: 255, tray_id: 0, slot_id: 0}. The "LOCAL tray_id = 0" comment in the code was a misread of the printer's response shape (the printer echoes tray_id: 0 as the slot-within-virtual-unit, not the slot index used in the request). Verification: captured BambuStudio → X1C ams_filament_setting publish via mosquitto-compatible paho-mqtt subscriber on the same broker, BambuStudio set the external slot to a PLA preset, the published REQ was {ams_id: 255, tray_id: 254, slot_id: 0, tray_info_idx: "P4d64437", tray_color: "F72323FF", tray_type: "PLA", ...} and the printer's REP returned result: "success". The on-wire convention for ams_filament_setting on the external spool is therefore the global tray index (tray_id: 254), not a local slot number (tray_id: 0). Fix: mqtt_tray_id = 254 for the single-external branch in both ams_set_filament_setting and reset_ams_slot (which shares the convention). The dual-external branch (H2D, len(vt_tray) > 1) was not in the captured exchange and is left at mqtt_tray_id = 0 until a Studio → H2D capture confirms the correct value — a regression test pins the current dual-external encoding so any future change to that branch surfaces immediately. Affected printers: every printer whose MQTT push reports vt_tray as a single-element list — i.e. one external slot. That covers all single-nozzle Bambu printers (P1P, P1S, A1, A1 mini, X1C, X1E) plus dual-nozzle models that use a single external feed (X2D). Not affected by this change: H2D / H2C / H2S, which expose two external slots and go through a separate len(vt_tray) > 1 branch. That branch is preserved at its existing mqtt_tray_id = 0 encoding because the captured exchange did not cover it; if the same misencoding turns out to affect dual-external too, a Studio → H2D capture will surface the right values and a follow-up patch will land. Known asymmetry not touched in this PR: the inline ams_filament_setting built by _probe_developer_mode (bambu_mqtt.py:2971-2985) still hardcodes tray_id=0. The probe is robust to this — its detection logic only matches reason: "verify failed" so it correctly identifies dev-mode regardless of whether the command itself succeeds — but the two builders should be unified in a follow-up. Tests: 5 new tests in backend/tests/unit/services/test_bambu_mqtt.py::TestAmsFilamentSettingExternalSpoolEncoding pin the X1C/P1S/A1 single-external fix, reset_ams_slot symmetry, regular AMS slot encoding unchanged, AMS-HT slot encoding unchanged, and the explicitly-unverified dual-external encoding (so any future change to the dual branch surfaces in diff review).
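A minimal sketch of the corrected request encoding, using only the fields quoted from the captured exchange above; the real ams_set_filament_setting builds a fuller payload inside the MQTT envelope:

```python
# Sketch of the encoding difference this entry fixes: external spool uses the
# global tray index 254, a regular AMS slot keeps its local slot index.
def build_ams_filament_setting(*, external: bool, slot: int = 0) -> dict:
    if external:
        ams_id, tray_id = 255, 254   # single external spool: global tray index, not local slot 0
    else:
        ams_id, tray_id = 0, slot    # regular AMS: unit 0, slot index within the unit
    return {
        "command": "ams_filament_setting",
        "ams_id": ams_id,
        "tray_id": tray_id,
        "slot_id": slot,
        "tray_info_idx": "P4d64437",  # example preset id from the captured exchange
        "tray_color": "F72323FF",
        "tray_type": "PLA",
    }

assert build_ams_filament_setting(external=True)["tray_id"] == 254
```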
  • Scan For Timelapse matched the wrong video when an older print's filename happened to land near a later archive's completion (#1278, reported by 1000Delta) — Repro: P2S in LAN-Only mode (no NTP, so printer clock is drifted +8h from UTC), two prints on the same day. Archive 1 correctly attached video_2026-05-08_09-41-29.mp4. Archive 2 (started at 16:39:09 UTC, expected video_2026-05-09_00-42-42.mp4) reused Archive 1's video with a misleading diff: 0:02:19. Cause: scan_timelapse's Strategy 2 matcher in backend/app/api/routes/archives.py had two compounding flaws. (1) It compared the filename timestamp against both archive.started_at and archive.completed_at with a 48 h tolerance — but the filename always represents the print's START time, never its end, so the end-time branch was a semantic mistake whose only effect was creating false positives. For Archive 2, the stale filename 09:41:29 shifted by hypothesis offset -8h → 17:41:29, which happened to fall ~2 minutes before Archive 2's completion → "diff" 2m19s won. (2) The matcher tried seven hypothesised offsets [0, ±1, ±7, ±8], which densely covers a wide span of the day. Even with the end-time branch removed, the wrong video at offset -7 lands at 16:41:29 → 2m20s from Archive 2's start, beating the correct video's 3m33s at offset +8. Fix: extracted Strategy 2 into a pure _match_timelapse_by_timestamp(video_files, archive_start) helper that (a) only compares against print start time (end-time evidence is handled separately by Strategy 3 via file mtime, which actually does reflect when writing finished), and (b) requires the best (video, offset) pair to beat the next-best pair from a different video by at least 15 minutes. When the top two candidates from different videos are too close to call, the helper returns None so the route surfaces the existing available_files list and the frontend's manual-selection dialog kicks in — which is the fallback the reporter explicitly asked for ("at a minimum, we should support that can fall back to letting the user manually select"). Wide offset support is preserved so EU / JST / AEST users (offsets +1, +7, +9, +10, etc.) still get auto-match when there's no ambiguity. Tests: 17 new tests in backend/tests/unit/test_timelapse_match.py pin the bug case (test_issue_1278_archive2_refuses_to_auto_pick_ambiguous, test_issue_1278_archive1_still_matches_unambiguously), the resolution path once the stale video is cleaned up (test_archive2_resolves_when_stale_video_removed), each of the 7 supported offsets via parametrize, and the supporting invariants (no started_at → None, non-timestamp filenames are skipped, same-video different-offset is not ambiguous, well-separated different videos still auto-pick). Known UX gap not in this PR: if the matcher auto-picks a wrong match, the user must delete the attached timelapse first before re-scanning — scan_timelapse short-circuits with status: "exists" when timelapse_path is already set. Adding a force-rescan or "wrong match, pick from candidates" affordance is a separate change.
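A pure-function sketch of the Strategy 2 matcher as described above (start-time only, 15-minute ambiguity margin); the offset list and helper intent follow the entry, while the filename-parsing details are assumptions:

```python
# Sketch of _match_timelapse_by_timestamp: try each hypothesised clock offset,
# refuse to auto-pick when the two best candidates from different videos are
# closer than the ambiguity margin.
import re
from datetime import datetime, timedelta

OFFSETS_H = [0, 1, -1, 7, -7, 8, -8]        # hypothesised printer-clock offsets
AMBIGUITY_MARGIN = timedelta(minutes=15)
_TS = re.compile(r"(\d{4}-\d{2}-\d{2})_(\d{2})-(\d{2})-(\d{2})")

def match_timelapse_by_timestamp(video_files: list[str], archive_start: datetime) -> str | None:
    candidates = []                          # (diff, video, offset)
    for name in video_files:
        m = _TS.search(name)
        if not m:
            continue                         # non-timestamp filenames are skipped
        ts = datetime.strptime(f"{m[1]} {m[2]}:{m[3]}:{m[4]}", "%Y-%m-%d %H:%M:%S")
        for off in OFFSETS_H:
            diff = abs((ts + timedelta(hours=off)) - archive_start)
            candidates.append((diff, name, off))
    if not candidates:
        return None
    candidates.sort(key=lambda c: c[0])
    best = candidates[0]
    # Best must beat the next-best candidate from a *different* video by the margin;
    # otherwise return None so the manual-selection dialog takes over.
    for diff, name, _ in candidates[1:]:
        if name != best[1]:
            if diff - best[0] < AMBIGUITY_MARGIN:
                return None
            break
    return best[1]
```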
  • Docker image: pip upgraded to >=26.1 to close CVE-2026-6357 (medium) — The python:3.13-slim-trixie base image ships pip 26.0.1, which runs its self-update check after installing wheels. A hostile wheel that included a module named like a deferred stdlib import (urllib, ssl, …) could therefore hijack imports inside the just-finished install step. The exploit path is theoretical for Bambuddy itself — we don't install user-supplied wheels at runtime — but the vulnerable pip version still ships inside the image, GitHub code-scanning flagged it (alert #778), and any downstream user who pip installs into the running container inherits the issue. Fix: Dockerfile now runs pip install --upgrade 'pip>=26.1' immediately before pip install -r requirements.txt, so the requirements install itself happens under the patched pip and the resulting pip-*.dist-info/METADATA Trivy reads from the layer is the fixed version. No requirements.txt change — the floor is enforced at the image-build layer where the vulnerable copy lived. (libexpat1 alert #795 also flagged by code-scanning is a DoS-only XML attribute-collision CVE with no patched Debian trixie package yet — left open as a tracking signal; next base-image rebuild after trixie ships libexpat 2.8.1 will close it automatically.)
  • Gitea backups silently failed after the first run; Forgejo v15 token-scope quirk broke "Test Connection"; many failure paths surfaced cryptic one-word errors (#1224 reported by rtadams89, #1239 + PR #1255 by BurntOutHylian) — Two intertwined problem clusters on the Git-backup path, fixed as one PR. (1) Gitea backups quietly stopped after run #1. The Git backup service used GitHub's Git Data API (POST /git/blobs → /trees → /commits → PATCH /refs) for every push. Gitea does not implement these write endpoints on modern versions, so every blob POST returned 404; the loop's continue-on-non-201 pattern left the change list empty and the route returned {"status": "skipped"} instead of committing — no toast, no log row, just "no changes" forever. The first run only worked because the empty-repo path already used the Contents API. Fix: GiteaBackend.push_files is overridden to use POST /repos/{owner}/{repo}/contents with a files array — every changed file is sent as operation: "update" (with its current blob SHA) or operation: "create", the whole batch commits in a single round-trip, no partial-commit failure mode possible. _create_branch_and_push switched from the unimplemented POST /git/refs to POST /branches with {new_branch_name, old_ref_name}. (2) Forgejo v15+ returns 404 (not 403) for private repos when the token lacks repository scope, indistinguishable on the wire from "repo not found / token typo" — Test Connection's existing 404 branch said "Repository not found", which sent users chasing the wrong cause. Fix: new ForgejoBackend (inherits GiteaBackend) overrides test_connection to GET /user first; 401 = bad token, 403 = zero-scope token ("read:user scope missing"), 404 on the subsequent /repos/ call surfaces the v15-specific "private repo with scope mismatch" hint instead of the generic message. Hardening pass on the broader backup stack (B18–B26 review round): every response.json()[...] indexing in github.py (9 sites: ref/commit/blob/tree/commit/ref across push_files + _create_branch_and_push + _create_initial_commit) now routes through a new base.py::_read_sha(response, *path) helper that returns (sha, error_reason) — a malformed body no longer bubbles KeyError('object') through the catch-all to surface as the cryptic one-word string "'object'" in last_backup_message. Tree-fetch failures (GitHub side, mirroring the Gitea side) now return failed with status code + truncated body instead of letting existing_files silently stay empty (which forced every file to re-upload and produced a downstream 422 with no hint at the real cause). GitHub's _create_branch_and_push failure message includes the HTTP status code (an empty-body 422 now produces a diagnostic message instead of "Failed to create branch: "). Both backends detect truncated: true on the tree-listing response (GitHub's tree API truncates at >7MB / >100k entries) and fail loudly asking the operator to rotate the backup repo — previously a truncated listing made the SHA-equality dedup miss and silently re-uploaded every file each run. test_connection failure messages now include str(e)[:200] alongside the exception class name, so the UI surfaces "Connection failed: ConnectError: certificate verify failed: hostname mismatch" instead of just "ConnectError". Gitea's 409-on-/contents message was softened from "stale blob SHAs" (one possible cause) to "the branch likely advanced concurrently (web-UI edit, another backup run, or path-vs-tree collision)".
Every status-code branch in github.py and gitea.py mid-push now emits a logger.warning with owner/repo context (previously only the outer except logged, so a 403/404/422 left a DB row with no application-log entry). Recursive push_files re-entry after branch create now logs "Re-entering push_files after branch create owner/repo -> branch" at info level so replication-lag second-pass failures are debuggable. Tests: +17 new unit tests in test_git_providers.py covering the GitHub robustness paths (tree-fetch failure, truncated tree, malformed JSON for ref/commit/blob, 403/422 on _create_branch_and_push), the Gitea round-2 hardening (truncated tree, status code in get_current_commit / extract_tree_SHA / get_repo_info failures, log marker emission), and the Forgejo connection-failure detail. Existing 86 → 103 tests, all pass; full backend suite + integration backup tests green; ruff clean. Tested by BurntOutHylian against Gitea 1.24.7 / 1.25.4 / 1.26.1 and Forgejo v11 / v15 LTS. Companion wiki update at maziggy/bambuddy-wiki#28.
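A hedged sketch of the single-round-trip Contents-API push described for GiteaBackend.push_files above, assuming Gitea's batch endpoint POST /repos/{owner}/{repo}/contents; payload field names follow that API as documented, not Bambuddy's actual backend class:

```python
# Sketch: send every changed file in one batch request instead of the GitHub
# Git Data API sequence Gitea doesn't implement.
import base64
import httpx

def push_files_via_contents_api(base_url: str, token: str, owner: str, repo: str,
                                branch: str, changes: dict[str, bytes],
                                existing_shas: dict[str, str]) -> None:
    files = []
    for path, data in changes.items():
        entry = {"path": path, "content": base64.b64encode(data).decode("ascii")}
        if path in existing_shas:
            entry["operation"] = "update"      # updates must carry the current blob SHA
            entry["sha"] = existing_shas[path]
        else:
            entry["operation"] = "create"
        files.append(entry)

    resp = httpx.post(
        f"{base_url}/api/v1/repos/{owner}/{repo}/contents",
        headers={"Authorization": f"token {token}"},
        json={"branch": branch, "message": "Bambuddy backup", "files": files},
        timeout=30,
    )
    resp.raise_for_status()  # the whole batch commits in one round-trip or fails outright
```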
  • Printer card's "Show on Printer Card" smart-plug button toggled power without confirmation (#1260, reported by thkl) — Smart plugs with the "Show on Printer Card" option enabled appear as a clickable chip in the printer card's HA-entities row (below the main Smart Plug controls). One click cut power to the printer instantly — including mid-print — even though the main Off button next to it already routes through a ConfirmModal and shows an additional running-print warning. Fix: the HA-row click handler in frontend/src/pages/PrintersPage.tsx now branches on entity type — script.* entities keep firing instantly (a script is a fire-once trigger, not a power switch, and the existing semantic of "Run" matches user expectation), but switch/light/anything-else entities now open a new ConfirmModal first. The modal reuses the same variant="danger" + running-print warning shape as the existing power-off confirmation: when status?.state === 'RUNNING' it shows the "WARNING: is currently printing! Toggling may cut power and interrupt the print" copy, and renders the default-variant "Toggle the Home Assistant entity ?" message otherwise. The entity name comes from ha_entity_id (with name fallback) so the modal disambiguates which of multiple plugs the click was on. i18n: new printers.confirm.{haToggleTitle, haToggleMessage, haToggleWarning, haToggleButton} keys added across all 8 locales (en + de + fr + it + ja + pt-BR + zh-CN + zh-TW translated to native, no English-fallback seeding). Full PrintersPage frontend suite (49 tests) still passes; build clean.
  • X2D / H2D dual-nozzle without AMS: filament mapping reported "Required filament type not found in printer" even when the spools were physically loaded (#1257) — Repro: X2D with 0 AMS units, two external spools (Ext-L feeding left extruder, Ext-R feeding right), print job specifies nozzle_id per filament. The Schedule Print modal showed the orange "Filament Mapping (Type not found)" header and a forced manual slot picker, even though the matching PETG was sitting right there in the external spool holder. Cause: frontend/src/hooks/useFilamentMapping.ts:18-19 derived dual-nozzle status solely from printerStatus.ams_extruder_map being non-empty. That map is populated from AMS units' info bits, so a dual-nozzle printer with zero AMS units gets an empty map → hasDualNozzle = false → external spools' extruderId falls through to undefined (line 64 ternary fallback). The downstream nozzle-aware filter at lines 117 / 377 (available.filter((f) => f.extruderId === req.nozzle_id)) then rejected every loaded filament because undefined !== 0/1 for any non-null nozzle_id. The PETG was loaded, just incorrectly stripped from the candidate set during matching. Fix: widen the dual-nozzle inference to three independent signals OR'd together: (1) nozzles[1].nozzle_diameter populated — the most direct signal, set by bambu_mqtt.py:2619-2621 only when the printer reports a right_nozzle_diameter MQTT field, so a populated value always implies real second-nozzle hardware; (2) ams_extruder_map non-empty — preserved as fallback for the dual-nozzle-with-AMS case the original code already handled; (3) vt_tray.length > 1 — single-nozzle printers (P1S / A1 / X1C) only have one external feed, so multiple external trays only exist on dual-nozzle hardware. A simpler nozzles.length check would not be sufficient because the backend state.nozzles defaults to a 2-entry list with empty NozzleInfo() stubs (bambu_mqtt.py:160) on every printer, single-nozzle included — nozzles.length would always be 2 on the wire and would have regressed every single-nozzle install. Affects all dual-nozzle printers running without AMS: X2D, H2D, X2 Pro. Tests: two new regressions in src/__tests__/hooks/useFilamentMapping.test.ts. "matches external spools per-extruder on dual-nozzle without AMS" pins the bug fix — asserts each external spool gets the correct extruderId (1 for Ext-L id=254, 0 for Ext-R id=255) and computeAmsMapping picks Ext-L for a left-nozzle requirement. "does not fabricate extruderId for single-nozzle with stub nozzles[1]" is the matching guard — asserts that a P1S / A1 / X1C-shape PrinterStatus (with the default-stub second nozzle entry the backend always emits) does NOT trip the dual-nozzle inference, so single-nozzle external spools keep extruderId=undefined exactly as they did pre-fix. Together they pin both directions: a future change that re-breaks the X2D path fails CI, and one that mistakenly turns single-nozzle printers into dual-nozzle also fails CI. Full frontend suite (1891 tests across 138 files) green.
  • GCode Viewer had no in-app way to navigate back — the only exit was the browser's back button — Opening the GCode Viewer from a File Manager card or an Archive card calls navigate('/gcode-viewer?archive=…' | '?library_file=…'), which mounts GCodeViewerPage as a full-height iframe inside the Layout shell. The page rendered nothing but the iframe, so once the third-party viewer's UI took over the content area there was no in-app affordance to return to the originating list — only the browser's back button. Reported by maziggy. Fix: added a thin back bar above the iframe in frontend/src/pages/GCodeViewerPage.tsx with an ArrowLeft icon button. The button label adapts to the entry point — Back to Print Archives when the URL carries ?archive=, Back to File Manager when it carries ?library_file=, generic Back otherwise (covers the rare deep-link / shared-URL case). Click prefers navigate(-1) so the user lands back in their original list with scroll position and filters preserved; falls back to /archives or /files when the page was opened in a fresh tab and there's no SPA history to return to. Iframe height is now flex: 1 inside a flex column under the bar instead of a hard-coded calc(100vh - 3.5rem) — the layout's existing fixed-header offset is unchanged, only the back bar (~36 px) is subtracted from the viewer's vertical real estate. i18n: new gcodeViewer.{back,backToArchives,backToFiles} namespace added to all 8 locales (en + de fully translated, fr/it/ja/pt-BR/zh-CN/zh-TW translated to native using each locale's existing page-title vocabulary — Druckarchiv/Dateimanager, Archives d'impression/Gestionnaire de fichiers, Archivi di stampa/Gestore file, 印刷アーカイブ/ファイル管理, Arquivos de impressão/Gerenciador de arquivos, 打印归档/文件管理器, 列印歸檔/檔案管理器).
  • Archives card's "Reprint" / "Schedule" / "Slice" button labels truncated to "Re..." / "Sc..." on narrow browser windows (#1249) — The action row on each archive card has six buttons: two labelled (Reprint + Schedule, or Slice when the file isn't sliced yet) plus four icon-only utilities (open in slicer, external link, globe, download, trash). The labelled buttons used flex-1 to share whatever space remained after the four fixed-width icon buttons, with the label rendered as <span className="hidden sm:inline truncate">...</span> — i.e. visible at any viewport ≥ 640px, with truncate ellipsizing when there isn't room. The Tailwind breakpoint keys off the viewport width, not the card width. The page's grid grows column count alongside viewport (md:grid-cols-2 lg:grid-cols-3 xl:grid-cols-4), so cards stay roughly 320–380 px wide across breakpoints and the leftover ~30 px in each labelled button isn't enough for "Reprint", which lands on screen as "Re..." — repro'd from a small browser window in the reporter's case. Fix: breakpoint bumped from hidden sm:inline → hidden xl:inline on all three labelled buttons (Reprint at line 1106, Schedule at line 1117, Slice at line 1153 of frontend/src/pages/ArchivesPage.tsx). Labels now appear only at viewport ≥ 1280px where the cards (3-4 columns of ~320 px) actually have headroom for them; on narrow windows the buttons render icon-only with their existing title= tooltip kept intact for hover and assistive-tech disclosure. Trade-off accepted: a wide-viewport-with-wide-sidebar setup that compresses the card to under ~320px will still see the truncation, but that's a corner case — the common "small browser window" path is fixed without restructuring the row.
  • Spool form's "Slicer Preset" dropdown silently dropped Local Profiles when Bambu Cloud was connected, and collapsed per-printer/per-nozzle variants of cloud and local presets into a single entry (#1248, reported by andretietz) — Two distinct defects in the same code path. Defect 1 (the reported bug): buildFilamentOptions in frontend/src/components/spool-form/utils.ts was precedence-based — if (cloudPresets.length > 0) returned the cloud list and never reached the local-presets branch, so any Local Profile imported via Profiles → Local Profiles was silently invisible whenever the user was logged into Bambu Cloud (the same profile rendered fine with a green Local badge in the AMS Slot configuration modal). The wiki documents the dropdown as "merged and deduplicated" across cloud + local + built-in. Defect 2 (surfaced during fix verification): the spool form was collapsing all Bambu Lab P1S 0.4 nozzle / Bambu Lab X1C 0.4 nozzle / Bambu Lab A1 0.4 nozzle variants of "Bambu PLA Basic" into a single dropdown entry by stripping the printer suffix and dedup'ing by base name (one Map.set per family for cloud defaults, one per family for local presets). The AMS Slot modal lists each variant individually and filters by the active printer model, so the user observed strictly more entries in the AMS Slot than in the Add Spool modal even after the merge fix. The right semantic for the spool form — printer-agnostic by design, since a spool isn't bound to a printer — is to show every variant as its own row, exactly as if you'd summed the AMS Slot's per-printer-filtered output across all printers. Fix: rewrote buildFilamentOptions to (a) actually merge all three sources, dropping the precedence early-return, and (b) push each cloud setting_id and each LocalPreset row as its own FilamentOption instead of collapsing by name.replace(/.*$/, ''). displayName now keeps the full printer + nozzle suffix (e.g. Bambu Lab P1S 0.4 nozzle) so users can pick the right variant. Built-in dedup against cloud setting_id is preserved (mirrors ConfigureAmsSlotModal.tsx:498 exactly). Wired api.getBuiltinFilaments() into both callers — SpoolFormModal and SpoolBuddyWriteTagPage. Persistence safety: the saved slicer_filament shape is unchanged — cloud picks still persist their setting_id, local picks still persist preset.filament_type || String(preset.id) (consumed by backend/app/utils/filament_ids.py::normalize_slicer_filament which expects GFL05/GFSL05 shapes; persisting the bare LocalPreset row id would break slicing). Local-preset allCodes now carries both the filament_type form and the String(preset.id) form so findPresetOption resolves both old (pre-fix) and new picks. React-key collision: with collapse removed, two LocalPreset rows can share the same code if they share filament_type; the dropdown key in FilamentSection.tsx is now composed ${option.code}::${option.name} to stay unique. Tests: new frontend/src/__tests__/components/spool-form/buildFilamentOptions.test.ts with 9 cases — the #1248 regression case, "one entry per cloud setting_id, no printer collapse", "list each local preset individually", "printer suffix preserved in displayName", local allCodes carrying both shapes, the GFA00 → GFSA00 built-in dedup, the all-empty fallback, and the alphabetical sort. The two existing vi.mock('../../api/client') blocks in SpoolFormModal.test.tsx and SpoolFormBulk.test.tsx were updated with the new getBuiltinFilaments stub.
  • SpoolBuddy install.sh re-run failed with Permission denied on root-owned files in update mode — download_spoolbuddy() ran git fetch + git checkout + git reset --hard before the post-install chown at the end of the function. If a previous install left stray root-owned files in the tree (e.g. static/assets/* written by an earlier sudo run, or a frontend build that wrote as root), the git reset --hard step aborted with EACCES on the unlink/replace step before reaching the chown. The script then exited and the kiosk's underlying ownership problem persisted, so the next attempt would fail the same way. Fix: pre-emptively chown -R spoolbuddy:spoolbuddy "$INSTALL_PATH" in the update branch before any git operation runs. The script already runs as root (enforced by check_root), so the chown is always safe. The existing post-install chown at the end stays — it now mostly catches new files created during this run that need their ownership normalised. Same root cause showed up on the kiosk's runtime SSH update path (Bambuddy → kiosk: git checkout dev && git reset --hard origin/dev running as the spoolbuddy user) but that path can't chown without sudoers expansion — the install.sh fix is the immediate recovery, and re-running the install script restores a clean ownership baseline that the runtime updater can keep healthy thereafter.
  • SpoolBuddy SSH update aborted with TypeError: startswith first arg must be bytes or a tuple of bytes, not str after the host-key store succeeded — perform_ssh_update calls asyncssh.import_known_hosts(...) to materialise an SSHKnownHosts object for _run_ssh_command's known_hosts= keyword arg. Both call sites (the stored-key path at line 221 and the just-stored TOFU re-parse at line 272) passed f"{ip} {key}\n".encode() — i.e. bytes. asyncssh's parser does line-based string operations (line.startswith('#') with a str literal), so any bytes input crashes inside its loader with TypeError. The two try/except clauses caught only (ValueError, asyncssh.Error), missing TypeError, so the crash bubbled up and aborted the whole update right after the schema fix successfully persisted the host key. Fix: drop the .encode() at both call sites — pass the str directly. Widened both except clauses to (ValueError, TypeError, asyncssh.Error) so any future asyncssh API surprise degrades to the existing fallback (TOFU mode without host-key verification, with a logger.warning) instead of crashing the update. Existing SSH tests all mocked asyncssh.import_known_hosts itself so they never reached the parser — added test_perform_ssh_update_passes_str_not_bytes_to_import_known_hosts to capture both call sites' arguments and assert isinstance(arg, str) so re-introducing .encode() fails CI immediately.
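A minimal sketch of the corrected call shape and the widened exception handling; the surrounding update flow is simplified and the fallback behaviour is as described in the entry above:

```python
# Sketch: asyncssh parses known_hosts data as text, so pass a str (never .encode()).
import logging
import asyncssh

logger = logging.getLogger(__name__)

def load_known_hosts(ip: str, key: str):
    """Build an SSHKnownHosts object for a single stored host key."""
    try:
        return asyncssh.import_known_hosts(f"{ip} {key}\n")
    except (ValueError, TypeError, asyncssh.Error) as exc:
        # Degrade to TOFU mode without host-key verification instead of
        # aborting the whole update on a parser surprise.
        logger.warning("Could not parse stored host key for %s: %s", ip, exc)
        return None
```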
  • SpoolBuddy SSH update crashed on Postgres with value too long for type character varying(500) when storing the device's RSA host key — spoolbuddy_devices.ssh_host_key was declared as String(500), which is fine for SQLite (ignores VARCHAR length) and for ed25519 host keys (~120 chars), but RSA host keys in OpenSSH format are typically 370 chars (2048-bit) → 544 chars (3072-bit) → ~720 chars (4096-bit). Postgres enforces the limit strictly, so any kiosk reporting an RSA-3072 or larger host key on the first SSH update aborted at the UPDATE spoolbuddy_devices SET ssh_host_key=... flush — the git fetch + pip install + systemctl restart may have run successfully but the persistence of the TOFU host key failed and the device's update_status was never written. Fix: widened ssh_host_key from String(500) → Text on the model, plus an idempotent ALTER TABLE spoolbuddy_devices ALTER COLUMN ssh_host_key TYPE TEXT migration gated on not is_sqlite() (Postgres-only; SQLite is a no-op since it doesn't enforce VARCHAR length). Existing rows are preserved — TYPE TEXT is a metadata-only change on Postgres for VARCHAR(N) → TEXT so it's a fast migration even on populated tables. The column was originally introduced in the H1 SSH-host-key TOFU security fix; the 500-char limit was a guess based on ed25519 sizes that the RSA case immediately blew past.
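A sketch of the model change and the guarded migration, assuming a plain SQLAlchemy setup; is_sqlite and the migration-runner shape are stand-ins for whatever Bambuddy actually uses:

```python
# Sketch of the column widening described above.
from sqlalchemy import text
from sqlalchemy.engine import Connection

# Model side (illustrative): ssh_host_key = Column(Text, nullable=True)
# replaces the old Column(String(500)), removing the ceiling RSA keys blew past.

def widen_ssh_host_key(connection: Connection, is_sqlite: bool) -> None:
    if is_sqlite:
        return  # SQLite ignores VARCHAR length, so there is nothing to migrate
    # VARCHAR(N) -> TEXT is a metadata-only change on Postgres, fast even on
    # populated tables, and safe to re-run.
    connection.execute(text(
        "ALTER TABLE spoolbuddy_devices ALTER COLUMN ssh_host_key TYPE TEXT"
    ))
```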
  • SpoolBuddy kiosk Settings → Update button returned "API keys cannot be used for administrative operations" — Same root cause as the four QuickMenu System buttons fixed in 0.2.4b3 (Restart Daemon / Restart Browser / Reboot / Shutdown), missed in that audit. The POST /spoolbuddy/devices/{id}/update route (kiosk's own Settings → Update Daemon button → SSH update on the kiosk device) was gated on Permission.SETTINGS_UPDATE, but SETTINGS_UPDATE is on the API-key deny-list (_APIKEY_DENIED_PERMISSIONS in backend/app/core/auth.py, introduced in PR #1241). Every kiosk-side request to update the daemon — regardless of the API key's scope set (Read / Print Queue / Control / Legacy) — tripped the deny-list and returned a hard 403 with that message. The 0.2.4b3 fix explicitly carved /update out with the reasoning "replaces the daemon binary, different threat surface" — but that reasoning was wrong: restart_daemon already replaces the running daemon process, so daemon-replacement is not a step up in blast radius. The SSH update is also strictly scoped to the single device the operator physically controls (git fetch + pip install + systemctl restart on that one host) — same threat profile as the system commands already running on INVENTORY_UPDATE. Fix: lower /spoolbuddy/devices/{id}/update from Permission.SETTINGS_UPDATE → Permission.INVENTORY_UPDATE, matching the rest of the kiosk-scoped routes (calibration/tare, display, cancel-write, system/command, system/command-result, update-status). The main Bambuddy in-app updater at POST /api/v1/updates/apply keeps SETTINGS_UPDATE — that one operates on the Bambuddy host and is correctly fenced behind the deny-list. Tests: test_trigger_update_requires_settings_update (which pinned the broken behavior — 403 on inventory-only key) is renamed to test_trigger_update_accepts_inventory_update and now asserts the inventory-only key reaches the device-state check (409 offline) instead of 403, so a future re-tightening of the gate surfaces immediately. Class-level docstring in test_settings_api_key_scrubbing.py updated to reflect the corrected threat-model reasoning.
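A heavily simplified sketch of the permission gate after the change; Permission, require_permission, and the deny-list set are stand-ins shaped after this entry, not Bambuddy's real backend/app/core/auth.py wiring:

```python
# Sketch: the kiosk update route is gated on INVENTORY_UPDATE, which is not on
# the API-key deny-list, while SETTINGS_UPDATE stays denied for API-key callers.
from enum import Enum, auto
from fastapi import APIRouter, Depends, HTTPException

class Permission(Enum):
    SETTINGS_UPDATE = auto()    # stays deny-listed (Bambuddy-host updater)
    INVENTORY_UPDATE = auto()   # kiosk-scoped routes, allowed for API keys

APIKEY_DENIED = {Permission.SETTINGS_UPDATE}

def require_permission(perm: Permission, *, via_api_key: bool = True):
    async def checker() -> None:
        # The real implementation resolves the caller's auth context; this only
        # illustrates the deny-list behaviour the entry describes.
        if via_api_key and perm in APIKEY_DENIED:
            raise HTTPException(status_code=403,
                                detail="API keys cannot be used for administrative operations")
    return checker

router = APIRouter()

@router.post("/spoolbuddy/devices/{device_id}/update",
             dependencies=[Depends(require_permission(Permission.INVENTORY_UPDATE))])
async def trigger_update(device_id: int) -> dict:
    return {"device_id": device_id, "status": "update started"}
```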
  • Printer file download 500'd on non-ASCII filenames; same crash latent in three sibling endpoints (#1245, reported by 1000Delta) — GET /api/v1/printers/{id}/files/download?path=... raised UnicodeEncodeError: 'latin-1' codec can't encode characters in position … for any path whose filename carried non-ASCII characters (Chinese, Japanese, Arabic, accented Latin), reproducible against P2S firmware on macOS but not target-specific. Cause: the route shoved filename straight into Content-Disposition: attachment; filename="{filename}" — Starlette/uvicorn encodes response headers as latin-1, so anything outside U+0000..U+00FF crashed at write-time. Same pattern existed in three sibling endpoints reachable with user-controlled non-ASCII input: GET /archives/{id}/qr (uses archive.print_name from 3MF metadata, often non-ASCII), GET /projects/{id}/export (uses project.name — the existing sanitiser at projects.py:1648 uses c.isalnum() which passes non-ASCII Unicode through, so the crash propagated), and _stream_pdf in labels.py (latent — current callers pass ASCII-only template names, but the same shape would crash if a future caller passed user input). Fix: new helper backend/app/utils/http.py::build_content_disposition(filename, disposition="attachment") returns an RFC 6266-compliant header with both an ASCII-stripped legacy filename="..." fallback and an RFC 5987 filename*=UTF-8''<percent-encoded> parameter — every modern browser (Chrome / Firefox / Safari / Edge) prefers the *= form when present, so the original filename round-trips intact through Save-As; the ASCII fallback covers IE10-era clients. Helper wired in at all four call sites in one PR (per project rule: no deferred follow-ups). Tests: 20 unit tests in test_http_utils.py pinning ASCII-fallback rules across plain ASCII / Chinese / Japanese / Arabic / French diacritics / .gcode.3mf double-extension / quote-injection / backslash-injection / empty-string and ___.zip edge cases, asserting the helper's output round-trips through latin-1 (the crash condition) for every test input. 6 new integration tests in test_printers_api.py::TestPrintersAPI::test_download_printer_file_non_ascii_filename parametrized over the same character classes (the original 龙泡泡石墩子_p2s_ok.gcode.3mf case from #1245 is included) — each asserts the route returns 200 with an unmangled body, the ASCII fallback in the header matches expectations, and unquote(filename*=) round-trips back to the original Unicode filename. Thanks to 1000Delta for the diagnosis and the proof-of-concept patch on printers.py — the broader audit (three sibling endpoints, helper extraction, latin-1 round-trip assertions) was done on top of that.
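An illustrative version of the RFC 6266 / RFC 5987 header builder described above; the project's build_content_disposition may differ in its exact ASCII-fallback rules:

```python
# Sketch: emit both a latin-1-safe legacy filename and a UTF-8 filename* parameter.
from urllib.parse import quote

def build_content_disposition(filename: str, disposition: str = "attachment") -> str:
    # Legacy parameter: map non-ASCII characters (and any literal '?') to '_' and
    # neutralise quote/backslash injection so the header stays latin-1 encodable.
    fallback = filename.encode("ascii", "replace").decode("ascii").replace("?", "_")
    fallback = fallback.replace("\\", "_").replace('"', "_") or "download"
    # RFC 5987 parameter: percent-encoded UTF-8; modern browsers prefer this form,
    # so the original filename round-trips intact through Save-As.
    encoded = quote(filename, safe="")
    return f"{disposition}; filename=\"{fallback}\"; filename*=UTF-8''{encoded}"

header = build_content_disposition("龙泡泡石墩子_p2s_ok.gcode.3mf")
header.encode("latin-1")  # must not raise; this was the original crash condition
print(header)
```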
