Daily Beta Build v0.2.5b1-daily.20260515

Pre-release

Note

This is a daily beta build (2026-05-15). It contains the latest fixes and improvements but may have undiscovered issues.

Docker users: Update by pulling the new image:

```
docker pull ghcr.io/maziggy/bambuddy:daily
```

or

```
docker pull maziggy/bambuddy:daily
```


**Tip:** Use [Watchtower](https://containrrr.dev/watchtower/) to automatically update when new daily builds are pushed.

Changed

  • Support bundle audited for new features — adds OIDC, 2FA, API keys, library/inventory/queue/maintenance totals, slicer-API reachability, GitHub backup status, per-printer Obico flag; also redacts two settings that were leaking and fixes a reachability-check architecture bug — The support-info.json block in support bundles auto-includes the settings table (with sensitive-key redaction), so settings-stored features like LDAP, Obico globals, integrated slicing URLs, Tailscale, and queue-drying already flowed through. What was missing was anything stored in dedicated tables, which had grown substantially without the bundle being updated. Triaging the recent OIDC / 2FA / group bugs (#1292, #1297) and the X1C slicer investigation involved repeatedly asking reporters for information that should have been in the bundle. New blocks added to _collect_support_info in backend/app/api/routes/support.py: auth — OIDC providers (cleartext name, is_enabled, scopes, email_claim, require_email_verified, auto_create_users, auto_link_existing_accounts, has_default_group, has_icon, linked_user_count; client_id/client_secret/issuer_url stay out of the bundle), 2FA counts (users_with_totp, email_otp_codes_pending), API key counts (total / enabled / expired), long-lived token counts (total / active), group counts (system / custom). library — library_files_total, library_files_in_trash, library_folders_total, external_folders_total, external_links_total, makerworld_imports_total. inventory — spools_internal, k_profiles_internal, k_profiles_spoolman. queue — pending_total, manual_start_pending, oldest_pending_age_seconds (catches items stuck because their target printer is offline or filament doesn't match). maintenance — items_total, items_enabled. integrations.github_backup — configs_total, providers_used dict (github/gitea/forgejo/gitlab), schedule_enabled_count, last_failure_count. integrations.slicer_api — enabled, preferred, bambu_studio_url_set, orcaslicer_url_set, plus an actual 2-second HTTP reachability ping (bambu_studio_reachable, orcaslicer_reachable) to differentiate "URL empty" from "URL misconfigured" from "service down". Per-printer obico_enabled flag added to each entry in printers[], parsed from obico_enabled_printers setting via a new _parse_obico_enabled_printers helper that tolerates legacy comma-separated formats. Plus three smaller but important fixes caught while testing the bundle against a real instance: (1) mqtt_broker value was leaking — the keyword-substring redaction filter at support.py:850 had no entry that matched the mqtt_broker setting name, so the broker IP (e.g. 192.168.255.16) was appearing in cleartext. Added broker to sensitive_keys. (2) virtual_printer_tailscale_auth_key was leaking — same reason, no keyword in the filter matched _auth_key. Added auth_key to the keyword set, AND added a value-prefix safety net (tskey-) so any FUTURE Tailscale setting with an unexpected name still auto-redacts when its value starts with the Tailscale auth-key prefix. (3) Slicer-API reachability check was always returning null / false even when the slicer was up — two root causes stacked. First, the old code passed info["settings"] (already redacted) into _collect_slicer_api_info, so when bambu_studio_api_url had been redacted to "[REDACTED]", the httpx call hit that literal string and crashed; when the setting was empty, the URL came through as "" and the function returned None. 
Second — caught on the next round of testing — even after switching to read directly from Settings.value, the check only looked at the DB row, but the real slicer routes (archives.py:3174-3180, library.py) resolve the URL with a three-level precedence: DB setting → app_settings.bambu_studio_api_url (which reads the BAMBU_STUDIO_API_URL env var) → built-in default http://localhost:3001. Most installations run the sidecar on the default port or via env var, so the DB-only check returned null even when the slicer was up and reachable. The collector now mirrors the route's exact resolution path. The block now also reports bambu_studio_url_set_in_db: bool and bambu_studio_url_source: "db" | "env_or_default" | "unset" so triage can see WHICH layer supplied the URL — separates "user explicitly configured it" from "they're using the default port" without leaking the URL itself. Two regression tests pin both layers: test_reachability_uses_unredacted_url (no "[REDACTED]" ever reaches _check_url_reachable) and test_env_var_fallback_url_pinged_when_db_setting_empty (DB empty + env-var-set URL is actually pinged and reported reachable). All new collectors are wrapped in try/except so a single failure on one block can't blank the rest of the bundle. OIDC provider names are passed in cleartext deliberately — they're login-button labels (PocketID, Authentik, Google, etc.), not secrets, and provider-specific behavior (Azure handles claims differently from Authentik) is exactly the kind of detail that makes SSO bugs triagable in one round-trip instead of three. 13 new unit tests in backend/tests/unit/test_support_helpers.py cover the obico-parser edge cases, slicer-API reachability with mocked httpx (including the "404 = reachable" decision, the un-redacted-URL regression, AND the env-var-fallback regression), auth-info OIDC-cleartext-but-no-secrets contract, the GitHub-backup provider/failure aggregation, and the new mqtt_broker / virtual_printer_tailscale_auth_key / value-prefix-based redactions.
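
For illustration, a minimal sketch of the two-layer redaction described above: keyword-substring matching on the setting name plus the tskey- value-prefix safety net. The keyword set and function name here are simplified assumptions; the real filter lives in backend/app/api/routes/support.py:

```python
# Hypothetical, simplified version of the support-bundle redaction filter.
# The exact keyword set and function name are illustrative assumptions.
SENSITIVE_KEYWORDS = {"password", "secret", "token", "api_key", "broker", "auth_key"}
SENSITIVE_VALUE_PREFIXES = ("tskey-",)  # Tailscale auth keys, regardless of setting name


def redact_settings(settings: dict[str, str]) -> dict[str, str]:
    """Return a copy of the settings table safe to embed in a support bundle."""
    redacted: dict[str, str] = {}
    for key, value in settings.items():
        # Layer 1: keyword-substring match on the setting name.
        name_is_sensitive = any(kw in key.lower() for kw in SENSITIVE_KEYWORDS)
        # Layer 2: value-prefix safety net, so a future setting with an
        # unexpected name still auto-redacts when its value looks like a secret.
        value_is_sensitive = value.startswith(SENSITIVE_VALUE_PREFIXES)
        redacted[key] = "[REDACTED]" if (name_is_sensitive or value_is_sensitive) else value
    return redacted


print(redact_settings({
    "mqtt_broker": "192.168.255.16",                       # caught by "broker"
    "virtual_printer_tailscale_auth_key": "tskey-abc123",  # caught by both layers
    "theme": "dark",                                       # passes through
}))
```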
  • Page headers unified across the app: consistent icon size, placement, and subtitle styling (PR #1272 by EdwardChamberlain, continuation of #1060 / #1203) — Nine pages (Archives, FileManager, Inventory, Maintenance, MakerWorld, Profiles, Projects, Settings, Stats) now share one header pattern: w-7 h-7 bambu-green icon next to a text-2xl font-bold title with a text-bambu-gray mt-1 subtitle underneath, matching the look that landed earlier on Print Queue and Printers. FileManager and Projects dropped their rounded bg-bambu-green/10 rounded-xl p-2.5 icon tile in favor of the plain icon to match the rest. The sidebar's "Queue" nav item is renamed to "Print Queue" (and its icon switched from Calendar to ListOrdered) to match the page header it leads to. The Stats page title is renamed Dashboard → Statistics to match the sidebar nav label that's been pointing at it (the page never was the printer dashboard — Printers is — and the mismatch confused new users; closes a small but recurring source of "where's the dashboard?" support questions). All renames flow through every locale: en/de/fr/it/ja/pt-BR/zh-CN/zh-TW updated for nav.queue, stats.title, plus a new inventory.subtitle key ("Manage your spools" + translations) used by the inventory header. Bonus on top of the stated scope: inventory.toolbar.{filters, view, actions} were untranslated English strings in fr/it/ja/pt-BR/zh-CN/zh-TW — Edward translated them properly in the same pass. StatsPage.test.tsx updated to assert the new "Statistics" title. Build clean, all 35 page tests still pass, i18n parity holds at 4753 leaves across all 8 locales. Maintenance page subtitle keeps its red / amber / green severity color on the "X items due · Y warnings · all up to date" line — the colors carry actual at-a-glance status information, not just visual weight.
  • Bambuddy now identifies honestly as itself on every outbound request to Bambu Lab / MakerWorld / Bambu Wiki — proactive alignment with Bambu Lab's 2026-05-12 statement on cloud access, which draws a clear line between modifying AGPL code (allowed) and "impersonating official clients in communication with our cloud infrastructure" (not allowed). Bambuddy was already on the right side of that line on the main authenticated cloud path (User-Agent: Bambuddy/1.0 in bambu_cloud.py:_get_headers), but three secondary call sites were sending browser User-Agents — originally added under the assumption Cloudflare's WAF would block non-browser identification. Tested on 2026-05-12 with curl -H "User-Agent: Bambuddy/1.0" against all of them: https://bambulab.com/api/sign-in/tfa returned HTTP 400 with the expected application-level {"code":5,"error":"Login failed"} JSON (no Cloudflare interstitial), https://api.bambulab.com/v1/iot-service/api/slicer/setting returned HTTP 200 with the full 576 KB settings response, https://makerworld.com/api/v1/design-service/* returned the same response shape as a Firefox UA, and https://wiki.bambulab.com/* served identical HTML to a Chrome UA. The browser-impersonation was unnecessary. All four call sites now send Bambuddy/1.0 (+https://github.com/maziggy/bambuddy) consistently — the URL in parens makes the source unambiguous so Bambu can distinguish our traffic from impersonators if they ever audit it. Files: bambu_cloud.py (TOTP/TFA path no longer spoofs Chrome UA + Origin + Referer + Accept-Language headers — Origin/Referer were spoofing bambulab.com origin, which the new comment block specifically calls out as removed), makerworld.py (Firefox UA replaced; the Referer header is kept because MakerWorld's CSRF / origin-check middleware uses it on some endpoints, which is functional, not identity-faking), firmware_check.py (Chrome UA on the public wiki scraper replaced — wiki has no special handling for our UA). Separately: the /v1/iot-service/api/slicer/setting endpoint requires a version query parameter in Bambu Studio's XX.YY.ZZ.WW format (the API returns HTTP 400 "field 'version' is not set" without it, and HTTP 422 "Invalid input parameters" for non-matching formats like bambuddy-1.0), but Bambu's server accepts ANY value within that format — verified the same 576 KB response with version=99.99.99.99. The previous default "02.04.00.70" is an actual Bambu Studio release version (2.4.0.70). The default is now "1.0.0.0" (held in a new _SLICER_API_VERSION module constant in bambu_cloud.py and re-exported into routes/cloud.py so the two route defaults stay in sync), which satisfies the format requirement without claiming to be a specific Bambu Studio build. Unchanged on purpose: version="2.0.0.0" parameters in create_setting / update_setting payloads are the preset's format version (extracted from current.get("version", "2.0.0.0") for updates, line 443) — they describe the preset schema, not the client, and stay as-is. Two regression tests rewritten to lock in the new behavior: test_verify_totp_uses_honest_bambuddy_user_agent (was test_verify_totp_includes_browser_headers — asserts UA starts with Bambuddy/, asserts Mozilla/Chrome/Origin/Referer are not present) and test_sends_honest_bambuddy_user_agent (was test_sends_browser_like_headers — same shape, plus continues to assert the deprecated x-bbl-* Bambu-app identification headers are still gone). All 4598 backend tests pass.
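
A sketch of what the honest identification looks like on the slicer-settings call, assuming httpx; the header value and the _SLICER_API_VERSION constant come from the entry above, while the function itself is illustrative:

```python
import httpx

# Any value in Bambu Studio's XX.YY.ZZ.WW format satisfies the API's version
# check without claiming to be a specific Bambu Studio build (see above).
_SLICER_API_VERSION = "1.0.0.0"

# Honest identification on every outbound request: no browser UA, no spoofed
# Origin/Referer. The URL in parens makes the traffic source unambiguous.
HEADERS = {"User-Agent": "Bambuddy/1.0 (+https://github.com/maziggy/bambuddy)"}


def fetch_slicer_settings(token: str) -> dict:
    """Illustrative fetch of the slicer-settings endpoint (simplified)."""
    resp = httpx.get(
        "https://api.bambulab.com/v1/iot-service/api/slicer/setting",
        params={"version": _SLICER_API_VERSION},  # required; format-checked only
        headers={**HEADERS, "Authorization": f"Bearer {token}"},
        timeout=10.0,
    )
    resp.raise_for_status()
    return resp.json()
```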
  • Spoolman weight tracking now uses per-print grams for all spools, matching the internal Filament Inventory (#1119, reported by Moskito99) — Spoolman previously had two mutually-exclusive weight paths: AMS remain%×tray_weight auto-sync (default; only worked for Bambu Lab spools with valid RFID tray_weight) and per-print 3MF-grams tracking (only enabled when "Disable AMS Weight Sync" was toggled on). Non-BL spools without RFID fell through both paths — AMS auto-sync had no tray_weight to multiply, and the inventory_remaining fallback was wiped because activating Spoolman deletes the internal spool_assignment table — so Spoolman never saw a weight update for them. The internal Filament Inventory has no such gap: it always uses per-print 3MF grams as the primary path with AMS-remain% delta as fallback, and it works for every spool type. Spoolman now does the same: per-print tracking runs whenever Spoolman is enabled and is the only writer of remaining_weight. AMS auto-sync continues to maintain spool metadata and slot assignments but no longer touches weight (eliminating the double-count that would otherwise occur for BL spools with both paths active). store_print_data (spoolman_tracking.py:159) had its disable_weight_sync early-return removed; the three sync_ams_tray callsites (main.py:1450 auto-sync, spoolman.py:318 per-printer manual, spoolman.py:517 sync-all) now hard-code disable_weight_sync=True. The spoolman_disable_weight_sync setting is now deprecated and a no-op — kept in the DB/UI for backwards compat. Behavioral consequence for existing users on the default flag (False): live AMS-based remaining_weight updates between prints stop happening; weight updates now arrive once per print completion with 3MF gram precision. Regression test in test_spoolman_tracking.py::test_stores_tracking_when_disable_weight_sync_is_false proves the early-return is gone.
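
A minimal sketch of the single-writer rule, assuming simplified function signatures and field names (the real code is store_print_data in spoolman_tracking.py and the three sync_ams_tray callsites):

```python
# Illustrative sketch only: signatures and fields are assumptions; the real
# logic lives in spoolman_tracking.py and the sync_ams_tray callsites.

def store_print_data(spool: dict, grams_used: float) -> None:
    """Per-print 3MF grams are now the ONLY writer of remaining_weight, and
    this runs whenever Spoolman is enabled (the disable_weight_sync
    early-return is gone)."""
    spool["remaining_weight"] = max(0.0, spool["remaining_weight"] - grams_used)


def sync_ams_tray(spool: dict, tray: dict) -> None:
    """AMS auto-sync still maintains metadata and slot assignment, but is now
    hard-coded to skip weight (disable_weight_sync=True at all callsites),
    eliminating the double-count for BL spools with both paths active."""
    spool["material"] = tray["tray_type"]
    spool["slot"] = tray["slot"]
    # NOTE: no remaining_weight update here anymore.
```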

Added

  • Manual LDAP user provisioning from the UI (#1298, reported by Fuechslein) — Until now the only way to onboard an LDAP user was to leave Auto-provision on and have them log in once, because the create-user form had no LDAP awareness — admins who wanted to disable auto-provision had to hand-edit the database to create the row. The user-create modal now grows a Local / LDAP tab toggle (visible only when LDAP is enabled in settings, so non-LDAP installs see no UI change). The LDAP tab is a directory search: type ≥2 characters and the new GET /auth/ldap/search endpoint uses the service-account bind to query the directory with a fixed OR filter across sAMAccountName, uid, mail, displayName, and cn (covering both Active Directory and OpenLDAP layouts; user input is RFC-4515 escaped so a typed * doesn't enumerate the whole tree). Each result is annotated with already_provisioned so usernames that already exist as BamBuddy users render dimmed and disabled. Picking a result and clicking Provision user hits POST /auth/ldap/provision, which re-resolves the username via the service bind (rather than trusting the client payload) and calls the same _provision_ldap_user helper the auto-provision login path uses — so group mapping, default-group fallback, and email sync behave identically regardless of which path created the user. Distinct error responses cover the failure modes (400 LDAP disabled / query too short, 404 directory miss, 409 username already exists locally vs. already-provisioned LDAP user, 503 directory unreachable with the underlying ldap3 exception class + message in the detail field so the operator can diagnose without reading backend logs). Backend refactor extracts _open_service_connection + _extract_user_info helpers in backend/app/services/ldap_service.py so the new lookup_ldap_user and existing authenticate_ldap_user share the bind + attribute-extraction paths (POSIX memberUid + primary gidNumber + case-insensitive DN dedupe stay in one place). Two ldap3 schema-check workarounds for OpenLDAP installs (caught in user testing against an OpenLDAP directory): (1) the directory-search connection is opened with check_names=False because ldap3's client-side filter validation rejects the AD-only sAMAccountName/displayName names in the cross-schema OR filter before any packet is sent; (2) the search requests attributes=["*"] (all user attributes) rather than the explicit AD-flavoured name list, because ldap3's build_attribute_selection validates each named attribute against the server schema independently of check_names and only the * wildcard is in its hard-coded ATTRIBUTES_EXCLUDED_FROM_CHECK exclusion list — so a list like ["sAMAccountName", "uid", ...] still throws LDAPAttributeError on OpenLDAP. The login/lookup paths (authenticate_ldap_user, lookup_ldap_user) keep check_names=True so typos in the configured user_filter setting still fail loudly. New shared frontend component <LdapUserPicker> in frontend/src/components/LdapUserPicker.tsx handles the debounced search (300 ms, min 2 chars), result list, selection, and provision mutation; it's rendered from all four create-user modal paths — basic + advanced-auth in UsersPage.tsx, basic + advanced-auth in SettingsPage.tsx (the latter being the "Add User" inside Settings → Authentication, which uses a separate modal flow from the dedicated Users page) — and the shared CreateUserAdvancedAuthModal gains a ldapEnabled + onLdapProvisioned prop pair so both pages drive the same component. 
i18n: 14 new keys under users.modal.ldap* + users.modal.{localTab,ldapTab,tabsAriaLabel} + 1 toast key in frontend/src/i18n/locales/en.ts (other 7 locales fall back to English per project convention). The wiki at features/authentication.md was also corrected — the prior "When disabled, an admin must pre-create the user in BamBuddy" line was misleading (no UI path existed) and now describes the new search-and-provision flow. Regression tests: 14 unit tests in backend/tests/unit/services/test_ldap_service.py cover the filter shape, wildcard escaping, username-canonical fallbacks (sAMAccountName → uid → cn), bind-failure propagation, the no-password-bind contract of lookup_ldap_user, and pin both ldap3 schema-check workarounds (check_names=False on the search connection + attributes=["*"] so OpenLDAP doesn't reject the request). 12 integration tests in backend/tests/integration/test_ldap_provision.py cover auth gating, short-query rejection, LDAP-disabled rejection, already_provisioned annotation, the 4xx/5xx error matrix, and a happy-path provision that verifies auth_source=ldap, password_hash=None, and group-mapping inheritance from the auto-provision path. 5 frontend tests in LdapUserPicker.test.tsx cover the debounce, the search → select → provision flow, already-provisioned rows rendering disabled, and surfaced provision errors. 65 LDAP-related backend tests + 5 picker tests pass; full backend ruff clean; frontend build clean.
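
A runnable sketch of the directory search described above, using ldap3 with a service-account simple bind; the function name and parameters are illustrative, but the RFC-4515 escaping, the fixed OR filter, check_names=False, and attributes=["*"] mirror the entry:

```python
from ldap3 import ALL, Connection, Server, SUBTREE
from ldap3.utils.conv import escape_filter_chars


def search_directory(host: str, bind_dn: str, bind_pw: str, base_dn: str, query: str):
    """Cross-schema directory search (illustrative sketch)."""
    # RFC-4515 escaping: a typed "*" becomes "\2a" and can't enumerate the tree.
    q = escape_filter_chars(query)
    # Fixed OR filter covering both Active Directory and OpenLDAP layouts.
    ldap_filter = (
        f"(|(sAMAccountName=*{q}*)(uid=*{q}*)(mail=*{q}*)"
        f"(displayName=*{q}*)(cn=*{q}*))"
    )
    server = Server(host, get_info=ALL)
    # check_names=False: ldap3's client-side validation would otherwise reject
    # the AD-only attribute names against an OpenLDAP schema before any packet
    # is sent. attributes=["*"] for the related reason: named attributes are
    # schema-checked individually, and only the "*" wildcard is exempt.
    conn = Connection(server, user=bind_dn, password=bind_pw,
                      auto_bind=True, check_names=False)
    conn.search(base_dn, ldap_filter, search_scope=SUBTREE, attributes=["*"])
    return conn.entries
```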
  • Slice modal: pick the build plate (#1337, reported by digitalskies) — Slicing a plain STL through the integrated slicer always defaulted to whatever curr_bed_type lived in the chosen process preset (typically Cool Plate), which the slicer CLI then rejected for high-temp filaments with Plate 1: Cool Plate does not support filament 1. The user had no way to switch plates short of cloning the process preset in BambuStudio, which defeats the point of the in-app slicer. The Slice modal now exposes a Build plate dropdown with the six canonical BambuStudio / OrcaSlicer plates (Cool Plate, Cool Plate SuperTack, Engineering Plate, High Temp Plate, Textured PEI Plate, Smooth PEI Plate) plus an explicit Auto (use process preset) option that preserves the previous behavior. The dropdown sits between Process profile and Filament rows so it stays visible regardless of how many filament slots the picked plate uses (a long filament list would otherwise push it off the modal's max-h-[85vh] scroll viewport) and is always enabled — including when the user picks a Printer Preset Bundle from the top BundlePicker. When the user picks a specific plate, the new bed_type field on SliceRequest (backend/app/schemas/slicer.py) flows through the dispatcher via two paths: (1) resolved-preset path — the route helper _patch_process_bed_type in backend/app/api/routes/library.py overwrites curr_bed_type on the resolved process JSON before forwarding to the sidecar (no preset cloning required); (2) bundle dispatch path — slice_with_bundle in backend/app/services/slicer_api.py adds a bedType form field to the sidecar multipart so the sidecar can pass --curr_bed_type through to the CLI, which lets the override take effect even though Bambuddy can't patch the bundle's process JSON locally (the sidecar materialises it from the stored .bbscfg). Sidecar versions that don't recognise the field silently no-op — the slice still runs, just with the bundle's default plate; the slicer-API fork at maziggy/orca-slicer-api will need the matching change for the bundle path to take full effect. i18n parity: 8 new keys (slice.bedType.{label,auto,coolPlate,coolPlateSuperTack,engineering,highTemp,texturedPEI,smoothPEI}) added to all 8 locales — full German translation, English fallbacks elsewhere per project convention. Regression tests: 4 in test_slice_request_bed_type.py (bed_type defaults to None, accepts the six canonical strings, rejects overlong input via the schema's max_length=64; _patch_process_bed_type overwrites an existing value, adds the field when missing, and returns the input unchanged for malformed JSON or non-dict roots), 4 in test_library_slice_api.py (resolved-preset path: with bed_type set, the sidecar receives "curr_bed_type": "Textured PEI Plate" in the presetProfile multipart part; without it, curr_bed_type stays out of the body entirely. bundle dispatch path: bedType form field carries the override through to the sidecar; omitting bed_type keeps the form field out of the request so the bundle's own curr_bed_type is preserved), 2 in SliceModal.test.tsx (dropdown selection puts bed_type on the request; leaving it on Auto omits the field). 59 backend slice tests + 34 SliceModal tests pass; build and i18n parity script clean.
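
A sketch of the resolved-preset patch helper; the name _patch_process_bed_type matches the text above, and the body is a minimal reconstruction of the documented behavior (overwrite an existing value, add the field when missing, pass malformed JSON or non-dict roots through unchanged):

```python
import json


def _patch_process_bed_type(process_json: str, bed_type: str | None) -> str:
    """Overwrite curr_bed_type on the resolved process preset JSON before it
    is forwarded to the sidecar (illustrative reconstruction)."""
    if bed_type is None:
        return process_json  # Auto: keep whatever the preset already says
    try:
        data = json.loads(process_json)
    except json.JSONDecodeError:
        return process_json  # malformed JSON passes through unchanged
    if not isinstance(data, dict):
        return process_json  # non-dict root passes through unchanged
    data["curr_bed_type"] = bed_type  # overwrite or add
    return json.dumps(data)


print(_patch_process_bed_type('{"curr_bed_type": "Cool Plate"}', "Textured PEI Plate"))
```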

Fixed

  • Plate-detection calibration captured the wrong camera when an external camera was configured (#1359, reported by Andlar94) — On the reporter's A1 with an external RTSP / go2rtc camera enabled, every print start raised "Build plate not empty" no matter how perfectly they calibrated. Root cause: the runtime auto-check at print start in backend/app/main.py:1819 called check_plate_empty(..., use_external=printer.external_camera_enabled, ...) — honouring the external camera setting. The manual UI check + calibration routes in backend/app/api/routes/camera.py declared use_external: bool = False, and the frontend client at frontend/src/api/client.ts always sent use_external=false explicitly (the UI call sites in PrintersPage.tsx never passed useExternal). So calibration captured a frame from the built-in chamber camera and saved it as the reference; the runtime auto-check captured a frame from the external camera and diffed it against that built-in reference — a permanent difference well above any sane threshold, hence "not empty" on every print. Fix: the two routes now use use_external: bool | None = None, and after the printer row is loaded they derive the default as bool(printer.external_camera_enabled and printer.external_camera_url and printer.external_camera_type) — identical to the runtime path's logic and the service-layer gate at plate_detection.py:605. Centralising the default on the backend means any current or future caller automatically gets the right camera without having to remember the flag. The frontend client now only forwards use_external when the caller explicitly sets it (default omitted → backend decides), so the existing UI buttons immediately benefit. Power-user override path stays open: passing ?use_external=false on a printer with an external camera still wins, so anyone who deliberately wants a built-in-camera reference can still get one. Regression tests in backend/tests/integration/test_camera_api.py: test_check_plate_defaults_use_external_when_external_camera_enabled and test_calibrate_plate_defaults_use_external_when_external_camera_enabled pin the new default for a printer with external camera + URL + type set; test_check_plate_defaults_use_external_false_when_external_camera_disabled pins the built-in default for the no-external-camera case (the common path stays untouched); test_calibrate_plate_explicit_use_external_false_overrides_default pins the explicit-override escape hatch. All 11 plate-tagged camera integration tests pass; ruff clean; frontend build clean.
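
A minimal FastAPI sketch of the backend-derived default, assuming a simplified printer model and route path (the real routes live in backend/app/api/routes/camera.py):

```python
from dataclasses import dataclass

from fastapi import FastAPI

app = FastAPI()


@dataclass
class Printer:  # simplified stand-in for the ORM row
    external_camera_enabled: bool = True
    external_camera_url: str = "rtsp://cam.local/stream"
    external_camera_type: str = "rtsp"


@app.post("/printers/{printer_id}/check-plate")
async def check_plate(printer_id: int, use_external: bool | None = None) -> dict:
    printer = Printer()  # the real route loads the DB row for printer_id
    if use_external is None:
        # Same derivation as the runtime auto-check and the service-layer gate:
        # external camera must be enabled AND have a URL AND a type configured.
        use_external = bool(
            printer.external_camera_enabled
            and printer.external_camera_url
            and printer.external_camera_type
        )
    # ?use_external=false still wins: an explicit value skips the derivation.
    return {"printer_id": printer_id, "use_external": use_external}
```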
  • API Keys page now exposes a narrowly-scoped "Update electricity price" toggle so the Home Assistant dynamic-tariff integration actually works (#1356, reported by maziggy) — The reporter followed the Energy Tracking wiki page literally — "create a key with Write Settings permission, then PATCH /api/v1/settings with {energy_cost_per_kwh: ...}" — and hit {"detail":"API keys cannot be used for administrative operations"}. Triage showed three independent drifts: (1) the wiki listed nine fictional permissions ("Read Printers / Write Settings / Admin / …") but the actual UI in SettingsPage.tsx:3683-3744 only ever exposed four toggles (Read Status, Manage Queue, Control Printer, Allow Cloud Access). There was no Write Settings toggle to tick. (2) Even if the UI had exposed it, the backend hard-denies Permission.SETTINGS_UPDATE for every API key via _APIKEY_DENIED_PERMISSIONS in backend/app/core/auth.py — intentional protection because PATCH /settings can rewrite SMTP/LDAP/MQTT credentials and the HA access token, which would silently widen attack surface beyond what any documented use case needs. (3) So the wiki had been promising a workflow that was never deliverable. Fix: introduce a narrowly-scoped door for exactly the documented use case rather than relaxing the deny list. New column can_update_energy_cost BOOLEAN DEFAULT FALSE on api_keys (backend/app/models/api_key.py) with idempotent migration in backend/app/core/database.py — defaults FALSE so existing keys never silently gain settings-write capability on upgrade. New endpoint POST /api/v1/settings/electricity-price in backend/app/api/routes/settings.py accepts {"energy_cost_per_kwh": <float ≥ 0>} — the field name matches what the wiki already documented so the HA rest_command example needs only a URL+method change, not a payload change. New custom dependency require_energy_cost_update() in backend/app/core/auth.py bypasses the _APIKEY_DENIED_PERMISSIONS check for this one route for API keys with can_update_energy_cost=True; JWT users still go through the standard SETTINGS_UPDATE permission check; auth-disabled deployments allow it (matches other settings routes). Crucially, the general PATCH /settings route remains denied for API keys — flipping the narrow flag does NOT widen general settings-write access (regression test pins this). Schema/route wiring in backend/app/schemas/api_key.py + backend/app/api/routes/api_keys.py accepts and returns the new field on create/update/list. Frontend: fifth toggle "Update electricity price" added to the create-API-key card in SettingsPage.tsx with an amber "Energy" badge on existing keys that have it set; APIKey / APIKeyCreate / APIKeyUpdate types in api/client.ts gained the new field; 16 new i18n keys (updateEnergyCost, updateEnergyCostDescription, energyCostBadge) added to all 8 locales — full German translation, English fallbacks elsewhere per project convention. Wiki rewrites: features/api-keys.md — replaced the fictional 9-row permissions table with the actual 5 toggles plus an info box explaining why no general Write Settings / Admin exists. features/energy.md — Home Assistant section now points at POST /api/v1/settings/electricity-price, instructs users to tick the new permission, and adds a deprecation warning for users who built the integration from the old (broken) PATCH /settings example. 
Tests: backend/tests/integration/test_settings_electricity_price.py — 8 tests covering create-with-flag, default-off, API-key-with-flag updates persist, API-key-without-flag → 403, JWT admin user with SETTINGS_UPDATE allowed, anon → 401, negative price → 422 (Pydantic ge=0), and the critical regression test test_patch_settings_still_denied_with_energy_flag that pins the narrow-flag-doesn't-widen-PATCH contract. frontend/src/__tests__/pages/SettingsPage.test.tsx — 2 new tests: Energy badge renders for keys with the flag, the toggle's value flows through to the POST body when the box is ticked. All 8 new backend tests + 32/32 SettingsPage tests pass; ruff clean; i18n parity passes; frontend build clean.
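
A sketch of the narrowly-scoped endpoint and its dependency, assuming simplified caller resolution; the route path, payload field name, and the can_update_energy_cost flag come from the entry above:

```python
from dataclasses import dataclass

from fastapi import Depends, FastAPI, HTTPException
from pydantic import BaseModel, Field

app = FastAPI()


@dataclass
class Caller:  # simplified stand-in for the resolved API key / JWT user
    is_api_key: bool = True
    can_update_energy_cost: bool = True


class ElectricityPrice(BaseModel):
    # Field name matches what the wiki already documented; ge=0 turns a
    # negative price into a 422 at validation time.
    energy_cost_per_kwh: float = Field(ge=0)


async def require_energy_cost_update() -> Caller:
    # Stand-in for the real dependency in backend/app/core/auth.py: API keys
    # pass only with the narrow can_update_energy_cost flag, while the general
    # PATCH /settings route stays denied for API keys regardless.
    caller = Caller()  # the real code resolves the key/JWT from the request
    if caller.is_api_key and not caller.can_update_energy_cost:
        raise HTTPException(status_code=403, detail="missing energy-cost permission")
    return caller


@app.post("/api/v1/settings/electricity-price")
async def update_electricity_price(
    body: ElectricityPrice, caller: Caller = Depends(require_energy_cost_update)
) -> dict:
    # The real route persists the setting; echoed back here for brevity.
    return {"energy_cost_per_kwh": body.energy_cost_per_kwh}
```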
  • Layer timelapse now starts for queue/VP-dispatched prints (#1353, reported by Andlar94) — Reporter's external camera + go2rtc setup was configured correctly (Obico was happily polling the snapshot URL for ML plate detection) but no MP4 was ever produced. Logs showed [LAYER-TL] Stitching layer timelapse for printer 1 after each print yet no frames were ever captured and no [LAYER-TL] Attaching timelapse... follow-up appeared. Root cause: layer_timelapse.start_session() was only called from the two new-archive paths in on_print_start (backend/app/main.py:2510 fallback path and 2600 regular new-archive). The expected-archive branch at main.py:1981-2052 — where every reprint and every queue/VP-dispatched print lands — updated the existing archive's status to printing but never started a timelapse session. So _background_layer_timelapse ran at print-complete time, called tl_complete(printer_id), found no active session in _active_sessions, silently returned None, and the wrapper at main.py:3917 produced no log message for the no-session case. Every print that came through the queue (or any reprint) silently lost its timelapse. Fix: mirror the same if printer.external_camera_enabled and printer.external_camera_url: start_session(...) call in the expected-archive branch right after _active_prints registration. The two pre-existing paths are untouched. Help-text correction: the snapshot URL field's tooltip previously read "Single-frame URL used for notification thumbnails, finish photos, timelapse and plate detection" — which is technically true but read as if filling in the URL was sufficient to enable those features. Reworded across all 8 locales to "Timelapse and plate detection each require their own per-printer toggle — this URL is just the image source they pull from when active" so admins know they still need to enable plate detection per-printer (separate toggle) and that timelapse only fires while a print is running. Regression tests in backend/tests/unit/test_layer_timelapse_expected_archive.py: test_expected_archive_path_starts_timelapse_when_external_camera_enabled exercises the full on_print_start flow with a registered expected print + external_camera_enabled=True and asserts start_session is called with the expected-print archive_id (not a freshly created one); test_expected_archive_path_skips_timelapse_when_external_camera_disabled keeps the existing gate in place so we don't try to capture from a None URL. 2 new tests pass; ruff clean; frontend i18n parity passes; bundle builds.
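
The gate now mirrored into the expected-archive branch, as a minimal sketch (the session-manager and printer field names are assumptions; the real call sits in on_print_start in backend/app/main.py):

```python
def maybe_start_layer_timelapse(layer_timelapse, printer, archive_id: int) -> None:
    # Same predicate as the two pre-existing new-archive paths: only start a
    # session when the external camera is enabled AND a snapshot URL is set,
    # so we never try to capture frames from a None URL.
    if printer.external_camera_enabled and printer.external_camera_url:
        layer_timelapse.start_session(printer.id, archive_id)
```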
  • Assign Spool now configures the slot even after a "Reset Slot" on A1 Mini BMCU / P1S Standard AMS (#1322 follow-up, reported by RosdasHH) — The original fix widened empty-slot detection to state == 11 OR tray_type != "", which closed the configured-slot reconfig case (PETG-over-PLA) but didn't help the "Reset Slot on printer screen with spool still inserted" flow: on these firmwares the AMS reports state=3, tray_type="" after a Reset Slot regardless of whether a spool is physically loaded. The empty-detection therefore decided "empty", skipped the MQTT publish, marked the assignment pending, and waited for on_ams_change to re-fire when the AMS transitioned to "loaded" — but the AMS never transitioned, because nothing was changing physically. A deadlock no user action could break. The reporter pinned it by removing the if not slot_is_empty: gate at backend/app/api/routes/inventory.py:1302 and verified the firmware accepts the MQTT push when a spool is present, even with state=3, tray_type="". The original guard's rationale — "Bambu firmware silently drops ams_filament_setting / extrusion_cali_sel for unloaded slots" — turned out to be over-cautious: it's load-bearing only for slots that the firmware itself explicitly marks empty via state == 9 ("no spool") or state == 10 ("spool present but no feed"). For ambiguous states (state=3 default-idle, missing-state on older firmwares), the AMS doesn't give us a reliable signal at all, so the safest bet is to treat the user's explicit Assign click as their assertion that a spool is there and let the firmware decide what to do with the push. Fix: the empty-detection now only short-circuits on state ∈ {9, 10} — every other state attempts MQTT. pending_config is now driven by either the explicit-empty signal OR not configured (so a printer-offline / no-client publish failure still flags the assignment as awaiting follow-up). The on_ams_change replay logic at backend/app/main.py:1031 is unchanged and still serves as the safety net for state=9/10 slots whose spools get inserted later (and for any truly-empty slot the firmware dropped — DB fingerprint_type stays empty until an AMS push actually provides one, so the replay still fires). Trade-off: for the rare case of "assign to a slot that really IS empty + state=3", the badge will show "Configured" even though firmware silently dropped the push. Most users assign right after inserting, so this is a small UI honesty cost in exchange for unblocking the much more common Reset-Slot workflow. Follow-up optimization (also RosdasHH): the reporter then traced the raw MQTT payload and found that P1S / A1 Mini send only {"id": N} for a genuinely-empty slot — no state, no tray_type, no other fields. Without that signal, the assign path was firing one wasted MQTT publish per click on a truly-empty slot (the firmware dropped it silently, but it was still wasted traffic). The AMS parser at backend/app/services/printer_manager.py:788 now detects the bare-tray shape (len(tray) == 1 and "id" in tray and state is None) and promotes it to state=9 — the firmware's explicit "no spool" code — which lets the inventory route's existing state ∈ {9, 10} short-circuit apply. The detection is intentionally narrow: the post-Reset-Slot A1 Mini BMCU case sends a populated payload with empty values (state=3, tray_type=""), which has more than one key and stays unaffected — so the #1322 root fix is preserved. 
Regression tests in backend/tests/integration/test_inventory_assign.py: test_post_reset_slot_with_state_3_still_fires_mqtt (renamed from the previous "marks_pending" test which was pinning the bug) and test_state_missing_with_empty_tray_type_still_fires_mqtt (inverted from the legacy "older firmware empty → pending" assertion) pin the new behavior on the two firmware shapes the reporter hit. test_empty_tray_type_without_state_still_fires_mqtt covers the no-state SpoolBuddy case. test_no_ams_data_with_no_client_marks_pending keeps the printer-offline path producing pending_config=True so on_ams_change replay still triggers. test_state_empty_skips_mqtt_and_marks_pending (state=9) is unchanged — the firmware's explicit "no spool" still short-circuits correctly. The recent dd3e3f80 k-profile fix was a separate red-herring path the reporter happened to also hit during testing; it stays as-is. All 28 inventory-assign tests + 312 inventory-tagged tests pass; ruff clean. Bare-tray follow-up tests: test_bare_tray_emulates_state_9 and test_populated_payload_with_empty_state_3_is_not_promoted in backend/tests/unit/services/test_printer_manager.py — the second one is the explicit guard against regressing the #1322 root case by accident.
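
A sketch of the two rules working together, assuming simplified tray dicts; the state codes and the bare-tray shape come from the entry above, while the function names are illustrative:

```python
EXPLICIT_EMPTY_STATES = {9, 10}  # firmware's "no spool" / "spool, no feed" codes


def normalize_tray(tray: dict) -> dict:
    """Bare-tray promotion from the follow-up above: P1S / A1 Mini report a
    genuinely-empty slot as just {"id": N}. Promote that shape to state=9 so
    the existing explicit-empty short-circuit applies. Intentionally narrow:
    the post-Reset-Slot payload (state=3, tray_type="") has more than one key
    and is NOT promoted, preserving the #1322 root fix."""
    if len(tray) == 1 and "id" in tray:
        return {**tray, "state": 9}
    return tray


def should_publish_assignment(tray: dict) -> bool:
    """Only an explicit-empty state skips the MQTT publish; every ambiguous
    state (state=3 default-idle, missing state) treats the user's Assign
    click as their assertion that a spool is loaded."""
    return normalize_tray(tray).get("state") not in EXPLICIT_EMPTY_STATES


assert should_publish_assignment({"id": 2, "state": 3, "tray_type": ""})  # post-Reset-Slot
assert not should_publish_assignment({"id": 2})  # bare tray -> promoted to state 9
```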
  • Firmware update dialog now survives Cloudflare-blocked or transient outages on bambulab.com (#1350, reported by K1ngJony) — User's X1C on 01.10.00.00 saw "01.11.02.00 newer · Unavailable" plus the error "Firmware file for 01.11.02.00 is not available from Bambu Lab", and the logs showed repeated Failed to get Bambu Lab page: 403 warnings. Two problems stacked: (1) https://bambulab.com/en/support/firmware-download/all (the page Bambuddy scrapes to extract the Next.js buildId used to fetch per-model JSON with download URLs) was returning 403 from the reporter's network — Cloudflare bot protection on bambulab.com is stricter than on the wiki and, prior to the 2026-05-12 compliance audit, the firmware-check service still claimed to be Chrome 120 via UA spoofing. The UA was updated to honest Bambuddy/1.0 in that audit but Accept / Accept-Language headers were never sent, so the request still tripped the "bare Python client" signal. (2) The buildId was cached in-memory only (1 h TTL), so every backend restart forced a fresh page fetch — meaning the first 403 from the user's network permanently broke download-URL resolution for that session even though the previous run had a perfectly valid buildId. Fix in backend/app/services/firmware_check.py: (a) the httpx client now sends Accept: text/html,application/json,*/*;q=0.8 and Accept-Language: en-US,en;q=0.9 alongside the existing honest Bambuddy/1.0 UA — both headers any normal client sends, no impersonation. (b) _get_build_id() gained a disk-cache layer at <data_dir>/firmware/build_id.json: successful fetches persist {build_id, fetched_at} to disk; the in-memory cache (fresh path, 1 h TTL) is checked first, then the disk cache seeds the in-memory slot on cold start, then the live fetch tries to refresh. On 403 or network error, we keep the cached buildId and set a new download_page_unreachable flag so callers can render an honest error. (c) _fetch_all_versions_from_download_page now retries once when a cached buildId returns 404 (Bambu rebuilt the page → invalidate + refetch + retry); on 403 it sets the unreachable flag and gives up gracefully without churning. Better error message in backend/app/services/firmware_update.py: when a wiki-listed version has no download URL because download_page_unreachable is true, the dialog now says "Could not reach Bambu Lab's firmware download page to fetch the file URL for X. Version is listed on the Bambu wiki but the download endpoint is unreachable from this network. Try again later, or download the firmware manually from bambulab.com and copy it to the printer's SD card." instead of the misleading "Firmware file for X is not available from Bambu Lab" (which implied Bambu didn't have the file, when actually we just couldn't reach Bambu). Version genuinely not in the catalog still gets the original message. Regression tests in backend/tests/unit/test_firmware_versions.py: test_client_headers_identify_honestly_and_send_browser_accept pins UA + Accept headers, test_build_id_is_persisted_to_disk confirms the disk write on success, test_build_id_falls_back_to_disk_on_403 reproduces the reporter's 403 with a pre-seeded disk cache, test_download_page_unreachable_flag_set_on_403_json covers the per-model JSON endpoint 403 path, test_download_page_retries_once_when_buildid_stale proves the 404 retry. All 12 firmware tests + ruff clean.
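
A minimal sketch of the layered buildId cache, assuming simplified paths, parsing, and flag plumbing (the real logic is in backend/app/services/firmware_check.py):

```python
import json
import re
import time
from pathlib import Path

import httpx

CACHE_FILE = Path("data/firmware/build_id.json")
HEADERS = {
    "User-Agent": "Bambuddy/1.0 (+https://github.com/maziggy/bambuddy)",
    "Accept": "text/html,application/json,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.9",
}
_mem: dict = {"build_id": None, "fetched_at": 0.0}
download_page_unreachable = False


def _extract_build_id(html: str) -> str:
    # Hypothetical parse; the real code pulls the Next.js buildId out of the page.
    match = re.search(r'"buildId":"([^"]+)"', html)
    if not match:
        raise ValueError("buildId not found in page")
    return match.group(1)


def get_build_id() -> str | None:
    global download_page_unreachable
    if _mem["build_id"] and time.time() - _mem["fetched_at"] < 3600:
        return _mem["build_id"]  # fresh in-memory hit (1 h TTL)
    if _mem["build_id"] is None and CACHE_FILE.exists():
        _mem.update(json.loads(CACHE_FILE.read_text()))  # disk seeds cold start
    try:
        resp = httpx.get("https://bambulab.com/en/support/firmware-download/all",
                         headers=HEADERS, timeout=10.0)
        resp.raise_for_status()
        _mem.update(build_id=_extract_build_id(resp.text), fetched_at=time.time())
        CACHE_FILE.parent.mkdir(parents=True, exist_ok=True)
        CACHE_FILE.write_text(json.dumps(_mem))  # persist for the next restart
        download_page_unreachable = False
    except (httpx.HTTPError, ValueError):
        # 403 / network error: keep the cached buildId and flag the outage so
        # callers can render an honest error instead of "file not available".
        download_page_unreachable = True
    return _mem["build_id"]
```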
  • Subtype dropdown on the Add/Edit Spool form now offers CF (carbon fiber) and GF (glass fiber) (#1345, reported by maziggy) — The Subtype dropdown in frontend/src/components/spool-form/FilamentSection.tsx is populated from the KNOWN_VARIANTS array in frontend/src/components/spool-form/constants.ts. CF and GF were missing, so a user adding a third-party PETG-CF spool via the Material=PETG + Subtype=CF flow (the same shape Bambu's "PETG HF" already used) couldn't find the subtype in the list and had to type it freehand into the "create new" tail. Added both — CF to match PETG-CF / PLA-CF / ASA-CF / PA-CF, and GF as the natural pair for ABS-GF / PA6-GF. parsePresetName in spool-form/utils.ts is unaffected: its materials list is iterated longest-first, so a cloud preset like Bambu PETG-CF Black still resolves to material=PETG-CF with empty afterMaterial (the variant loop runs on "" and finds nothing — no accidental Material=PETG / Subtype=CF rewrite). Frontend build clean.
  • Spool-assignment dialog stacks correctly: the material-mismatch confirmation appears above its parent, and dashboard filament hover popovers no longer get covered by sibling printer cards (#1336 follow-up, mismatch case reported by RosdasHH) — Two stacking-context regressions surfaced after the original z-50 → z-[100] bump on AssignSpoolModal landed. (1) Material-mismatch ConfirmModal hidden behind its parent. Assigning a spool with a different material to the one configured on the slot opens a yellow warning ConfirmModal from inside AssignSpoolModal. ConfirmModal's overlay was hardcoded to z-50 in its wrapper at frontend/src/components/ConfirmModal.tsx, so once the parent moved to z-[100] the child sat behind it — the user clicked Assign, saw the parent dim slightly, and nothing visible to confirm. Added an optional overlayZIndex?: string prop to ConfirmModal (defaults to z-50 so all 82 other call sites are untouched), and the mismatch site at AssignSpoolModal.tsx:584 passes overlayZIndex="z-[110]" so the warning sits above its parent. (2) FilamentHoverCard / EmptySlotHoverCard covered by neighbouring printer cards. Hovering an AMS slot on the dashboard opens a "Jade White · Bambu PETG HF · K Factor 0.024 · 87% · Open in Inventory / Configure" popover. The popover was using position: absolute with z-[60] inside its trigger — but each printer card on the dashboard creates its own stacking context (any filter: drop-shadow / transform / positioned-with-z descendant is enough), and z-index does not cross stacking-context boundaries: the next card in DOM order always wins regardless of how high the inner z-index goes. Visible as a "Jade White" tooltip getting half-eaten by the AMS-C tile column on the right neighbour card. Fixed by portaling both hover cards to document.body (FilamentHoverCard.tsx via createPortal from react-dom) with position: fixed and screen-space coordinates computed from triggerRef.current.getBoundingClientRect(). Coords are recomputed on visibility change, on scroll (capture phase) and on resize so the popover tracks the trigger when the viewport moves; a requestAnimationFrame re-measure after the initial paint avoids a one-frame flicker before the card has its rendered dimensions. Hover handlers wired on both the trigger AND the portaled card so moving the cursor from the slot tile onto the popover doesn't auto-dismiss it after 100 ms. The smart top/bottom placement logic (flips to below the trigger when there's not enough headroom above the fixed 56 px header) is preserved, as is the arrow pointer that points back at the slot. z-[60] stays — but it's now global because the popover lives at the root of the DOM, so it always beats dashboard widgets without conflicting with full-screen modals at z-[100]. All 20 FilamentHoverCard, 17 ConfirmModal, and 13 AssignSpoolModal tests pass; frontend build clean.
  • Deleting a print archive no longer wipes its filament / time / cost / energy contribution from Quick Stats (#1343, reported by IndividualGhost1905) — Running the same model ten times and then deleting nine of the resulting archive entries (to keep the file list tidy) silently rewound the totals on the Statistics page: total_prints, total_filament_grams, total_cost, and per-print energy all dropped back to whatever the surviving archive contributed, as if the other nine prints had never happened. Root cause: every metric in get_archive_stats at backend/app/api/routes/archives.py is recomputed on each render via COUNT / SUM over the live PrintArchive rows, so removing a row removes its contribution. (Energy in the default "Total" mode already survived archive deletion because it reads the smart-plug lifetime counters via _sum_live_plug_totals — that's the architectural shape we now generalise to the rest of the metrics.) Fix: soft delete with opt-in hard purge. New nullable deleted_at column on print_archives (backend/app/models/archive.py) tracks rows the user removed from the UI. The DELETE endpoint at backend/app/api/routes/archives.py now accepts ?purge_stats=true; default behaviour is to soft-delete — files removed from disk (still frees the storage), row hidden from listings, but the row stays in the table so the stats endpoint keeps counting it. Setting ?purge_stats=true falls back to the original hard-delete path for the rare case where the user actually wants the row out of Quick Stats too (e.g. failed prints that shouldn't pollute success-rate dashboards). The migration in backend/app/core/database.py adds the column dialect-conditionally — DATETIME on SQLite, TIMESTAMP on PostgreSQL (PG doesn't accept DATETIME on ALTER TABLE the way it tolerates it inside CREATE TABLE) — plus an index on deleted_at so the WHERE deleted_at IS NULL filter that's now sprinkled across the listing queries stays cheap on big archive tables. Service-layer changes. ArchiveService.soft_delete_archive is a new sibling of delete_archive that reuses the existing on-disk path-safety checks (extracted into _resolve_archive_dir_for_delete so soft and hard delete share the resolution rules — refuses paths outside archive_dir, refuses depth-zero paths) and flips deleted_at = now() after shutil.rmtree. Listing methods now filter PrintArchive.deleted_at.is_(None): ArchiveService.list_archives, get_duplicate_hashes_and_names (a soft-deleted dupe must not inflate a group's count so the UI shows "1 of 1" instead of "1 of 10"), find_duplicates (both the exact-hash and the print-name paths), and ArchiveComparisonService.find_similar_archives (both name-match and content-hash paths so the "Similar archives" panel doesn't suggest something the user just removed). The stats endpoint deliberately keeps NO filter — the whole point of #1343. Route-level reads tightened too: GET /archives/{id} returns 404 on soft-deleted rows so stale bookmarks don't expose hidden archives, search (both the SQLite FTS5 path and the LIKE fallback) skips them, the duplicate-group enrichment query in list_archives filters them, and tag listing / archives-by-tag exclude them. GET /archives/slim and GET /archives/stats/export intentionally do NOT filter so the dashboard widgets in StatsPage.tsx keep aggregating across the full history. Frontend. 
ConfirmModal gained an optional children slot (frontend/src/components/ConfirmModal.tsx) so the delete-confirmation dialog can render an opt-in checkbox between the message and the action buttons without forcing a new bespoke modal. frontend/src/pages/ArchivesPage.tsx — both the card view and the detail view — now own a deletePurgeStats boolean per component instance and pass it through to api.deleteArchive(id, purgeStats) (frontend/src/api/client.ts appends ?purge_stats=true only when the box is ticked). The checkbox resets to off on every modal close so the destructive option is opt-in per delete, never sticky. i18n: one new key archives.modal.deletePurgeStats added across all 8 locales — full German translation, English fallbacks elsewhere per project convention. Tests added to backend/tests/integration/test_archives_api.py: soft delete preserves the row's contribution to total prints / filament / cost (the regression test for the reporter's exact scenario), ?purge_stats=true drops it from Quick Stats as before, soft-deleted archives 404 on GET /archives/{id}, soft-deleted archives are skipped by the search endpoint. All 42 pre-existing archive integration tests stay green, including test_delete_archive (which already asserts post-delete 404 — semantically equivalent under soft delete). Frontend ConfirmModal (17 tests) and ArchivesPage (23 tests) suites green, full build clean.
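
A sketch of the soft-delete-with-opt-in-purge contract, assuming simplified session and model plumbing (the real code is ArchiveService plus the DELETE /archives/{id} route, with path safety in _resolve_archive_dir_for_delete):

```python
from datetime import datetime, timezone


def _remove_archive_files(archive) -> None:
    """Hypothetical stand-in for the shared on-disk cleanup (path-safety
    checks + shutil.rmtree); both delete flavors free the storage."""


async def delete_archive(session, archive, purge_stats: bool = False) -> None:
    _remove_archive_files(archive)
    if purge_stats:
        # ?purge_stats=true: original hard delete; the row leaves Quick Stats.
        await session.delete(archive)
    else:
        # Default soft delete: the row stays, so the COUNT/SUM-based stats
        # endpoint keeps counting it, while listings, search, and duplicate
        # grouping all filter on deleted_at IS NULL.
        archive.deleted_at = datetime.now(timezone.utc)
    await session.commit()
```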
  • OIDC provider login icons now render again — the strict SPA CSP no longer breaks them (#1333, PR #1342 by netscout2001) — When an admin configured an OIDC provider with an external icon_url (e.g. https://google.com/icon.png), the login page showed the browser's broken-image glyph instead of the IdP logo. Root cause: the SPA ships with the strict policy img-src 'self' data: blob: so the entire admin UI cannot hot-link arbitrary external image hosts; admin-supplied icon URLs hit that wall on every render. Two options were on the table — loosen img-src to allow https: (one-line change but degrades the SPA's CSP everywhere), or proxy the bytes through the backend (this PR). The proxy path was chosen because (a) the SPA's img-src policy stays strict app-wide; (b) the existing MakerWorld thumbnail endpoint at backend/app/services/makerworld.py already established the pattern with the same rationale; (c) anonymous login-page renders no longer leak each visitor's IP to the IdP host as a tracking signal — the proxy fetches the bytes once at admin-configure time and serves them from the same origin afterwards. Backend. New MakerWorld-style fetcher in backend/app/services/oidc_icon.py streams the response with follow_redirects=False (so the SSRF host allowlist can't be bypassed via a 302 to a private address), enforces a MIME whitelist (PNG/JPEG/WebP/GIF; SVG is intentionally omitted — XML payloads carry too many xlink:href / external-ref corner cases for an MVP), and aborts at the first chunk past 1 MB so a hostile or misconfigured IdP serving a 500 MB payload cannot OOM the server. SSRF guard assert_safe_public_https_url in backend/app/api/routes/_oidc_helpers.py is stricter than the Spoolman variant — Spoolman deliberately allows loopback / RFC-1918 (same-LAN deployment is the standard topology) while OIDC icons must live on the public internet, so private addresses there are SSRF probes. The shared SSRF data (cloud-metadata IP set covering AWS/GCP/Azure/Oracle/DO/Alibaba, numeric-encoded-IP regex, IPv4-mapped-IPv6 unwrap) was extracted to backend/app/api/routes/_url_safety.py so the two top-level guards share data but keep their distinct policies. The Pydantic _validate_icon_url in backend/app/schemas/auth.py now lazy-imports the runtime SSRF guard so schema validation and the fetcher enforce the same allowlist — no drift between layers. Storage. Three new columns on oidc_providers (backend/app/models/oidc_provider.py): icon_data (LargeBinary, deferred=True so list queries don't pull the BLOB on every login-page render), icon_content_type (String(20), also serves as the has-icon indicator so the check never accidentally lazy-loads the BLOB), icon_etag (SHA-256 hex). A DB-layer CheckConstraint enforces the all-or-nothing triplet ((icon_data IS NULL) = (icon_content_type IS NULL) = (icon_etag IS NULL)) — fresh installs (SQLite + PostgreSQL) get it via metadata.create_all, stale PostgreSQL installs get it via ALTER TABLE ADD CONSTRAINT in backend/app/core/database.py (SQLite cannot ADD CONSTRAINT on an existing table, same trade-off as the existing default_group_id FK). The migration's ALTER TABLE is dialect-conditional — BLOB on SQLite, BYTEA on PostgreSQL. Routes. 
Four endpoints in backend/app/api/routes/mfa.py: GET /oidc/providers/{id}/icon is public (no auth, same rationale as /api/v1/makerworld/thumbnail — <img> tags can't send Authorization headers, and the icon renders before the user is signed in), serves cached bytes with a strong ETag and Cache-Control: public, max-age=3600, supports If-None-Match including the W/ weak prefix, the * wildcard, and multi-token comma lists per RFC 7232. DELETE /oidc/providers/{id}/icon clears all four icon columns (URL + the three cached-bytes columns) — "Remove icon" means the whole record is gone, not just the cache, so the admin form doesn't end up in a confusing half-state where it shows a stale URL while the login page renders the Shield fallback. POST /oidc/providers/{id}/icon/refresh re-fetches from the stored URL for the "Refresh" button. Disabled providers respond 404 on the GET endpoint to avoid leaking their existence to anonymous callers. POST / PUT integrate the fetcher transactionally: a failed fetch aborts with 400 before commit, so a bad URL on create leaves no half-configured row in the DB and a bad URL on update leaves the previous cached bytes intact. PUT with explicit icon_url: null clears the icon record (detected via Pydantic's model_fields_set — distinct from "field omitted" which preserves it). Both fetch failures and SSRF rejections log at WARNING with the URL redacted (query string and fragment stripped via _redact_url_for_log) so admin-supplied presigned URLs carrying X-Amz-Signature=... or bearer tokens can't end up in operator log files. Frontend. frontend/src/pages/LoginPage.tsx extracts an OIDCProviderButton sub-component so each provider owns its own iconFailed state — on <img> error (provider deleted between page load and image fetch, network blip, etc.) the SPA swaps in the Shield fallback rather than showing the broken-image glyph to anonymous users. frontend/src/components/OIDCProviderSettings.tsx does the same with ProviderIconAvatar (Globe fallback) and adds Refresh / Remove buttons. The new same-origin proxy URL helper api.oidcProviderIconUrl(id) returns a SameOriginUrl-branded string so a future caller can't accidentally substitute an attacker-controlled URL where this is consumed. Five new i18n keys (refreshIcon, removeIcon, iconRefreshed, iconRemoved, iconFetchFailed) added across all 8 locales. Tests. 
About 100 new tests covering the streaming fetcher (MIME whitelist, status codes, redirect rejection, size-cap early-exit including the first-oversized-chunk guarantee, missing Content-Type distinct message, httpx.InvalidURL mapping), the OIDC SSRF guard (explicitly asserts that Spoolman-allowed cases like loopback / RFC-1918 / localhost are rejected here so the two guards do not silently converge), Pydantic-validator parity (numeric-encoded IPs, cloud metadata, multicast, IPv4-mapped IPv6 all rejected at schema-validate time), the dialect-conditional ALTER TABLE migration (both BLOB and BYTEA paths via patched is_sqlite()), the full create/update/delete/refresh flow including atomicity (failed fetch preserves prior state), the upgrade-path edge case (icon_url present but no cached bytes → refetch on next save), ETag/304 with W/ weak prefix and * wildcard, raw-SQL inconsistent-triplet 404 defence, the PG→SQLite-ZIP backup BLOB type-mapping round-trip, and a CSP regression-guard test in backend/tests/integration/test_security_headers.py that asserts the SPA default CSP block does not include https: in img-src — so a future contributor "fixing" a broken icon by relaxing CSP discovers the proxy pattern instead. Frontend tests in LoginPage.test.tsx and OIDCProviderSettings.test.tsx cover has_icon: true|false, mixed providers on the same page, <img> error → Shield/Globe fallback, and per-provider state isolation (two has_icon: true providers; firing error on A leaves B's icon intact — locks in the sub-component extraction so a future hoist of useState to the parent loop is caught by CI). Manually verified end-to-end against a live PocketID instance with multiple icon URLs. Follow-on tightening: has_icon is now a required field on OIDCProviderResponse (no Pydantic default — fails loudly if any future caller skips _build_provider_response), backed by an OIDCProvider.has_icon property reading icon_content_type. In update_oidc_provider the icon refetch was moved BEFORE the setattr loop, so on fetch failure the in-memory ORM object stays consistent (DB row was already safe via get_db()'s rollback; this closes the in-memory window too). Patched by netscout2001.
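
A minimal sketch of the streaming fetcher's three guards (no redirects, MIME whitelist, 1 MB first-oversized-chunk abort), assuming httpx and illustrative names; the real code is backend/app/services/oidc_icon.py:

```python
import httpx

ALLOWED_TYPES = {"image/png", "image/jpeg", "image/webp", "image/gif"}  # SVG omitted
MAX_BYTES = 1 * 1024 * 1024  # abort at the first chunk that crosses 1 MB


class IconFetchError(Exception):
    pass


def fetch_icon(url: str) -> tuple[bytes, str]:
    """Stream the icon bytes with the guards described above (sketch)."""
    # follow_redirects=False: the SSRF host allowlist can't be bypassed via a
    # 302 to a private address, because we never follow the redirect at all.
    with httpx.Client(follow_redirects=False, timeout=10.0) as client:
        with client.stream("GET", url) as resp:
            if resp.is_redirect:
                raise IconFetchError("redirects are not followed")
            resp.raise_for_status()
            content_type = resp.headers.get("content-type", "").split(";")[0].strip()
            if content_type not in ALLOWED_TYPES:
                raise IconFetchError(f"disallowed content type: {content_type!r}")
            buf = bytearray()
            for chunk in resp.iter_bytes():
                buf.extend(chunk)
                if len(buf) > MAX_BYTES:  # first-oversized-chunk early exit
                    raise IconFetchError("icon exceeds 1 MB cap")
            return bytes(buf), content_type
```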
  • Backup tab indicator dot now turns green when Scheduled (local) Backups is enabled (#1331, PR #1338 by chanakyan-arivumani) — Toggling Scheduled Backups on inside Settings → Backup left the sidebar tab indicator dot stuck on grey: the visual cue that there's an active backup configuration was lost for users who run scheduled local backups without GitHub. Two stacked layers caused it: (1) the dot condition at SettingsPage.tsx:1461 only checked the GitHub chain (cloudAuthStatus?.is_authenticated && githubBackupStatus?.configured && githubBackupStatus?.enabled); settings?.local_backup_enabled was never consulted, so the scheduled-backup state had no path to the indicator. (2) The toggle handler in GitHubBackupSettings.tsx called api.updateSettings({ local_backup_enabled }) but never invalidated the ['settings'] query cache, so SettingsPage kept reading the stale value — the indicator would only update on a full page reload even if the condition fix were in place. Two-line fix: extend the dot's predicate to ... || settings?.local_backup_enabled and add queryClient.invalidateQueries({ queryKey: ['settings'] }) after a successful save (matching the existing invalidation pattern at GitHubBackupSettings.tsx:402/463/477/497). The GitHub-chain short-circuits first so the common case is unchanged. Patched by chanakyan-arivumani.
  • Color catalog presets now apply extra_colors (gradient stops) and effect_type (sparkle / wood / marble / glow / matte) onto the spool, not just hex + name (#1340, reported by maugsburger) — Creating a catalog entry that pairs a base color with multi-color gradient stops and a visual effect, then clicking that swatch in the Edit Spool dialog, only copied color_name and rgba over — the extra_colors and effect_type fields were silently dropped. The data was flowing from the backend correctly (GET /api/v1/inventory/color-catalog returns both fields per the ColorCatalogEntry schema in frontend/src/api/client.ts), but three layers above stripped them: (1) SpoolFormModal.tsx typed its colorCatalog state with a narrower shape that omitted the two fields; (2) ColorSection.tsx mapped catalog entries to CatalogDisplayColor (the typed-down shape rendered on swatches) without propagating them; (3) the selectColor() handler only set rgba + color_name on click. Fix: widened both types in spool-form/types.ts (CatalogDisplayColor + ColorSectionProps.catalogColors) to carry the optional extra_colors and effect_type, propagated them through the four matchedCatalogColors mapping callbacks (byBrand / exact full-material / normalized-trailing-+ / base-material prefix), and extended selectColor to take optional extraColors / effectType parameters. Semantic rule: catalog swatches are complete presets — picking one writes BOTH gradient and effect from the entry (overwriting any existing values), so a gradient catalog entry applies its stops AND a solid catalog entry clears any old gradient that was on the spool. Recent-colors and the hardcoded-fallback palette are plain hex pickers — picking one keeps any existing extra_colors / effect_type untouched, since those swatches aren't presets, just color picks. Bonus: fixed the en-US spelling drift the reporter flagged in their nitpick — 'Extra colours' and 'wrong colour loaded' strings (which had been seeded into all 8 locale files as English fallbacks) standardized to 'Extra colors' and 'wrong color loaded'; matching comment blocks (// Multi-colour ...) normalized in the same pass. Regression tests in __tests__/components/spool-form/ColorSectionCatalogExtras.test.tsx (3 cases): catalog click with gradient + effect propagates all four fields to updateField, catalog click on a solid preset clears any pre-existing extras/effect (preset-replaces-look semantic), and fallback palette click leaves extras/effect untouched. All 23 spool-form tests + 8 i18n parity tests pass; build clean.
  • Assigning a spool to an unconfigured AMS slot no longer silently skips MQTT on A1 Mini / P1S firmware — and the "PETG over a PLA-configured slot won't reconfigure" symptom is fixed in the same change (#1322, reported by RosdasHH) — On the user's A1 Mini BMCU (firmware 01.07.02.00) and P1S Standard AMS (firmware 00.00.06.75), pressing "Assign Spool" on any slot left the slot unconfigured: the DB row was created with pending_config=True, the MQTT publish was skipped, and the log line Pre-configured assignment: ... (slot empty, will configure on insert) fired even though the spool was physically loaded. The same code path also blocked the "swap PLA to PETG in the same slot" flow — Bambuddy would keep treating the spool as PLA because the publish never reached the printer. Root cause: the empty-slot detection at backend/app/api/routes/inventory.py:1267 preferred tray.state == 11 ("filament fed to extruder") over tray_type, falling back to tray_type only when state was missing entirely. Reporter's AMS dumps showed state == 3 on every slot — configured and unconfigured, on both printers — and state was never absent. So the state-only branch always fired, the result was always "empty", and MQTT was always skipped regardless of whether the slot was actually loaded. The "fingerprint_type empty → defer until insert" pre-config replay at backend/app/main.py:1026 had the same cur_state == 11 gate, so even when the user manually configured the slot in Bambu Studio afterward (making tray_type go from "" to "PLA"), the deferred MQTT publish never fired because state stayed at 3. Fix: both call sites now use a disjunction — the slot is treated as loaded when either state == 11 or tray_type is non-empty. The "Reset slot" case (state=11 + tray_type="") that the original state-only check was protecting still works through the first clause; the configured-slot case (state=3 + tray_type="PLA") on firmwares that never set state=11 now works through the second; and truly empty unconfigured slots (state≠11 + tray_type="") still fall through to the pending-config path correctly. The on_ams_change replay's disjunction also fires the deferred publish when the user later configures the slot through Bambu Studio, since that flips tray_type non-empty even if state stays at 3. Caveat: for a truly empty slot with a 3rd-party non-RFID spool that the user physically inserted, neither signal points to "loaded" on these firmwares, so we still can't auto-fire the publish until the slot gets configured (manually or by another assign). The pending-config row persists in the DB and gets applied on the next AMS push that flips tray_type non-empty. Regression tests: 3 in test_inventory_assign.py: test_state_never_eleven_firmware_with_loaded_tray_fires_mqtt (state=3 + tray_type='PLA' → MQTT fires; pins the reporter's primary symptom and the PETG-over-PLA secondary symptom which goes through the same predicate), test_state_never_eleven_firmware_with_empty_tray_marks_pending (state=3 + tray_type='' still pending — confirms the disjunction didn't accidentally turn truly empty slots into the loaded branch), and test_on_ams_change_fires_replay_when_tray_type_appears_without_state_11 (pre-existing SpoolBuddy-style assignment with empty fingerprint; tray_type going ''→'PLA' on a state=3 firmware fires the deferred publish even though state never becomes 11). All 28 tests in the file pass; ruff clean.
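A minimal sketch of the loaded-slot disjunction described above, with illustrative names (the real predicate is inlined at the two call sites in inventory.py and main.py):

```python
def slot_is_loaded(tray_state: int | None, tray_type: str | None) -> bool:
    """A slot counts as loaded when EITHER signal says so.

    - state == 11 covers firmwares that report "filament fed to extruder".
    - a non-empty tray_type covers firmwares (A1 Mini BMCU 01.07.02.00,
      P1S AMS 00.00.06.75) that stay at state == 3 even when loaded.
    """
    return tray_state == 11 or bool((tray_type or "").strip())

# Truth table matching the cases in the fix:
assert slot_is_loaded(11, "")       # "Reset slot": state=11, no type -> loaded
assert slot_is_loaded(3, "PLA")     # configured slot on state-3 firmware -> loaded
assert not slot_is_loaded(3, "")    # truly empty slot -> pending-config path
```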
  • Assign Spool / Inventory search: numeric spool ID lookup is back, and Unassign in Spoolman mode no longer stays permanently disabled (#1336, reported by S0liter) — Two independent regressions surfaced from the same report. (1) Numeric ID search: typing a Spoolman spool's numeric ID into the search box on the "Assign Spool" dialog (or on the Inventory page) returned no results. The shared search helper spoolMatchesQuery at frontend/src/utils/inventorySearch.ts:7 only checked the text fields (material, brand, color_name, subtype, note, slicer_filament_name, storage_location) — the spool's id was not part of the predicate, so a query like 12 only matched when "12" happened to be a substring of one of the text fields. One-line fix: the predicate now also tests String(spool.id).includes(q), mirroring the case-insensitive substring semantics of the other fields. Covers both call sites: the Assign Spool dialog (AssignSpoolModal.tsx:255 for local inventory + :446 for Spoolman) and the main Inventory page (InventoryPage.tsx:871). New regression test in __tests__/utils/inventorySearch.test.ts pins exact-match ('42' → id 42), substring ('4' → id 42), and non-match ('99' → id 42 rejected) so the predicate can't drift back into "text only" silently. (2) Unassign button stuck disabled in Spoolman mode: opening the edit modal on a Spoolman spool that was assigned to an AMS slot left the Unassign button greyed out — the user had no way to release the spool back to "available". The modal at SpoolFormModal.tsx:526 only ever queried api.getAssignments() (the legacy local spool_assignments table) and looked up by a.spool_id === spool.id. In Spoolman mode the slot assignment lives in the separate spoolman_slot_assignments table, keyed by spoolman_spool_id — so the lookup always returned undefined, the button's disabled={isPending || !spoolAssignment} predicate stayed true forever, and unassignMutation was also pointing at the wrong API (unassignSpool instead of unassignSpoolmanSlot). Both the query and the mutation now branch on the existing spoolmanMode prop: Spoolman mode uses getSpoolmanSlotAssignments() + lookup by spoolman_spool_id + unassignSpoolmanSlot(spool.id) and invalidates the spoolman-slot-assignments-all / spoolman-slot-assignments query keys; local mode keeps the existing path unchanged. Two new regression tests in __tests__/components/SpoolFormModal.test.tsx (SpoolFormModal — Unassign button (#1336)): the button is enabled and clicking it calls unassignSpoolmanSlot(42) when a matching spoolman_slot_assignment exists, and the button stays disabled (no unassignSpool fallback) when no assignment exists. All 12 search-helper tests + 13 InventoryPage search tests + 27 SpoolFormModal tests pass; frontend build clean.
  • Spoolman auto-create no longer labels Bambu Lab RFID spools with competitor names like "3DXTECH™ Black" (#1309, PR #1330 by ojimpo) — When Bambuddy auto-created a Spoolman filament entry for a Bambu Lab RFID spool, the second-stage lookup against Spoolman's external library (GET /api/v1/external/filament, served from SpoolmanDB) matched on material + color_hex only — there was no manufacturer / vendor filter. The catalog is multi-vendor and roughly ID-sorted: for PLA + #000000 (black) it contains 64 entries, with the first hit being 3djake_pla_black_1000_175_n (3DJAKE), the third being 3dxtech_pla_carbonxcarbonfiberblack_500_175_p (3DXTECH, name CarbonX™ Carbon Fiber Black), and the actual bambulab_pla_black_1000_175_n not surfacing until position 15. Bambuddy therefore created the filament under the Bambu Lab vendor but labeled it with a competitor's product name. Real-world observations in production: Bambu Lab ABS Black created as 3DXTECH™ Black, Bambu Lab PLA Support picked the adjacent / wrong variant instead of bambulab_pla_supportforpla/petgblack_500_175_n, and PLA Basic Black created as PLA (material, not PLA Basic). A secondary issue compounded this: _create_filament_from_external dropped the external entry's density field, so even when the correct entry was eventually picked the density got overwritten by create_filament's built-in PLA-default 1.24 fallback instead of the catalog's actual value (1.26 for PLA Basic, 1.31 for PETG, etc.). Fix in backend/app/services/spoolman.py::_find_or_create_filament: (1) the external-library loop now filters by manufacturer == "Bambu Lab" (case-insensitive, whitespace-trimmed), with a defensive id.startswith("bambulab_") fallback that handles entries where the manufacturer field is missing or has drifted in a future SpoolmanDB schema. (2) When multiple Bambu Lab candidates match the same material + color_hex, the function prefers the entry whose name equals the AMS tray_sub_brands (lowercase+strip comparison) so the more specific variant wins — PLA Basic over generic Black, Support for PLA/PETG Black over generic Black, etc. (3) _create_filament_from_external now propagates external.get("density") through to create_filament; when the catalog entry has no density set, the existing material-table fallback inside create_filament still kicks in via the if density is None branch at line 321 — no path lost. Behavioural caveat the user needs to know: previously-created mis-named filaments are NOT auto-renamed by this fix. Step 1 of _find_or_create_filament is the internal-Spoolman-filament loop that short-circuits on (vendor == "Bambu Lab", material, color_hex) — and that path is unchanged. Any Bambu Lab filament created by an older Bambuddy build (or hand-edited by the user) will continue to be matched and reused on subsequent AMS reads, regardless of how wrong its name is. To pick up the corrected name, the user has to delete the mis-named filament in Spoolman once — then the next AMS read for the same material+color falls through to the external-library step and creates a new entry with the correct Bambu Lab name. This is deliberate: some users may have intentionally renamed Bambu Lab filaments (e.g. to follow their own naming convention or to merge variants) and a silent auto-rename would undo that. 
Regression tests in test_spoolman_service.py::TestFindOrCreateFilament (6 new): internal short-circuit preserves the existing match without touching the external library, non-Bambu-Lab external entries are skipped even when they sort first in SpoolmanDB, PLA Basic wins over generic Black via the tray_sub_brands tiebreaker (per maintainer request on #1309), no-match-anywhere falls back to tray_sub_brands or tray_type instead of leaking a competitor name into the create call, id.startswith("bambulab_") accepts entries with absent manufacturer field, and density propagates end-to-end through the public method instead of getting clobbered by the material-default. All 44 tests in test_spoolman_service.py pass; ruff clean. Reported and patched by ojimpo.
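A hedged sketch of the selection order this entry describes; pick_bambu_lab_entry and the entry shape are illustrative, and candidates is assumed to be pre-filtered by material + color_hex upstream:

```python
def pick_bambu_lab_entry(candidates: list[dict], tray_sub_brands: str | None) -> dict | None:
    def is_bambu_lab(entry: dict) -> bool:
        mfr = (entry.get("manufacturer") or "").strip().lower()
        # Defensive fallback for entries whose manufacturer field is missing
        # or has drifted in a future SpoolmanDB schema.
        return mfr == "bambu lab" or str(entry.get("id", "")).startswith("bambulab_")

    bambu = [e for e in candidates if is_bambu_lab(e)]
    if not bambu:
        return None  # caller falls back to tray_sub_brands / tray_type naming
    if tray_sub_brands:
        want = tray_sub_brands.strip().lower()
        for entry in bambu:
            if (entry.get("name") or "").strip().lower() == want:
                return entry  # specific variant wins: "PLA Basic" over generic Black
    return bambu[0]
```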
  • Safety: bed-jog Z direction was inverted on A1 / A1 Mini — "Up" rammed the nozzle into the bed (#1334, reported by william.filipcic@gmail.com) — On A1 / A1 Mini, clicking the "Up" arrow on the printer-card bed-jog control would send the nozzle straight into the build plate. Reporter triggered it with the 50 mm step and crashed their nozzle. Root cause: the bed-jog UI was designed against the X1 / P1 / H2 family's bed-on-Z convention. On those printers the bed is the Z-axis, Bambu's firmware homes Z=0 at the top, and G1 Z- raises the bed toward the toolhead (decreases the nozzle-bed gap). The frontend maps "Up" to negative distance with that convention in mind. A1 / A1 Mini are bed-slingers: the bed moves on Y, the toolhead moves on X+Z, and the firmware uses standard Cartesian Z (Z+ = toolhead up). On those models G1 Z-10 drives the toolhead down 10 mm — straight through any clearance the user had — which is exactly what the reporter saw. There was no model classification at the bed-jog code path; every printer got the same X1-convention G-code. Fix: new is_bed_slinger(model) helper at backend/app/services/printer_manager.py (sibling to existing supports_chamber_temp / has_stg_cur_idle_bug, reuses the already-defined A1_MODELS frozenset which covers display names and internal codes N1 / N2S). The bed-jog route at backend/app/api/routes/printers.py:2710 now inverts the signed distance before emitting the G-code when the printer model is in that set, so the UI "Up" semantics ("decrease nozzle-bed gap") stay consistent regardless of which physical part moves on the printer. Frontend stays untouched — single source of truth for the direction logic lives in the backend, keyed off the printer's model column, so any future bed-slinger Bambu model only needs one frozenset update. The route's Query description and docstring now state the new contract explicitly: distance is the gap adjustment, not the raw Z value, and the backend translates per model. Regression tests: 13 in test_bed_jog.py::TestBedJogAPI — 6 parametrised cases prove bed-on-Z models (X1C / P1S / H2D / H2S / H2C / P2S) still emit G1 Z-10.00 for a UI "Up" click (pass-through), 6 parametrised cases prove A1 / A1 Mini / A1MINI / A1-MINI / N1 / N2S emit G1 Z10.00 instead (inverted, toolhead up), plus 1 symmetric "Down arrow drops the toolhead via G1 Z-" case. 5 in test_printer_manager.py::TestIsBedSlinger pin the helper's classification contract — A1 family true, every bed-on-Z model false, None / empty-string safe, case-insensitive. Safety note: if you own an A1 or A1 Mini and were running any 0.2.x build before this release, do not use the printer-card bed-jog buttons — they will move the toolhead in the wrong direction. The Z controls in Bambu Studio / Bambu Handy are unaffected (they generate their own model-aware G-code).
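A sketch of the model-aware translation, assuming the helper shapes named above (the A1_MODELS contents and the route wiring are illustrative):

```python
A1_MODELS = frozenset({"a1", "a1 mini", "a1mini", "a1-mini", "n1", "n2s"})

def is_bed_slinger(model: str | None) -> bool:
    # None / empty-string safe, case-insensitive, per the test contract above.
    return (model or "").strip().lower() in A1_MODELS

def bed_jog_gcode(model: str | None, distance_mm: float) -> str:
    """distance_mm is the UI's gap adjustment (negative = "Up" = close the gap).
    Bed-on-Z models pass through; bed-slingers invert so Z+ still means
    "toolhead up" in firmware terms."""
    z = -distance_mm if is_bed_slinger(model) else distance_mm
    return f"G1 Z{z:.2f}"

assert bed_jog_gcode("X1C", -10) == "G1 Z-10.00"     # bed-on-Z: pass-through
assert bed_jog_gcode("A1 Mini", -10) == "G1 Z10.00"  # bed-slinger: inverted
```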
  • Spoolman inventory: editing a spool's color name no longer "reverts" to the subtype on save (#1319, reported by MartinNYHC) — On Spoolman-backed inventory, changing a spool's color name in the edit dialog appeared to accept the new value, but the inventory list column and the next edit-dialog open showed it reverted to the subtype string. Three layers stacked on top of each other to produce this: (1) find_or_create_filament at backend/app/services/spoolman.py:609 matches existing Spoolman filaments by material / name / color_hex / vendor; color_name is intentionally not part of the match key (Spoolman doesn't standardise the field and most installs leave it null) — but when a match was found it returned the existing filament's id unchanged, silently dropping the new color_name value. The write never reached Spoolman. (2) On re-read, the helper at _spoolman_helpers.py:279 falls back to subtype when filament.color_name is empty (without the fallback, Spoolman installs that don't fill the field would render every spool as "Unknown color"). The persisted value was still empty, so the read synthesised the column from subtype. (3) The edit form prefilled color_name from spool.color_name — which on Spoolman installs without color_name was the synth value (= subtype). If the user changed subtype but not color_name, the form silently round-tripped the OLD subtype back to Spoolman as if it were a real user-set color_name, which then started showing up as the persisted value on the next render — the exact "color reverts to subtype" pattern in the bug report. Fixes: (1) find_or_create_filament now patches the matched filament's color_name via the existing patch_filament PATCH wrapper when the request differs from what's stored. Convention on the parameter: None = "don't touch", "" = explicit clear (patches Spoolman to null), any other string = set/update. (2) The PATCH route at spoolman_inventory.py:567 now uses Pydantic's model_fields_set to distinguish "field omitted" from "field explicitly set to null" — only the latter is a clear (mirrors the existing storage_location pattern at the same site). (3) The map helper now also returns color_name_is_synthesized: bool on every inventory record, and SpoolFormModal.tsx checks it on prefill so the input starts blank when the value was synthesised from subtype — the user sees the real stored state and can't accidentally round-trip the synth value back. The read-side fallback is kept on purpose (the list-display "Unknown color" problem hasn't gone away — it's just that the form no longer treats the fallback as a real value). A patch_filament failure is caught and logged but doesn't block the match — the spool still links to the correct filament, only the colour-name update is dropped, which is the safer failure mode. Regression tests: 5 in test_spoolman_inventory_methods.py::TestFindOrCreateFilament — patch-on-change, no-patch-when-unchanged, no-patch-when-None, clear-when-""-passed, and patch-failure-still-returns-match-id. 2 in test_spoolman_inventory_helpers.py::TestMapSpoolmanSpool: color_name_is_synthesized flag is False when a real value is stored, True when the fallback fires. 2 integration tests in test_spoolman_inventory_api.py — wire-level color_name=null clears (route translates to ""), and color_name omitted from the PATCH body keeps the current value (route passes None). All 564 spoolman-tagged tests pass; ruff clean; frontend build clean.
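A sketch of the tri-state color_name convention (None = leave, "" = clear, any other string = set); patch_filament is passed in as a stand-in for the real wrapper and the surrounding flow is simplified:

```python
import logging

logger = logging.getLogger(__name__)

def maybe_patch_color_name(stored: str | None, requested: str | None, patch_filament) -> None:
    if requested is None:
        return  # field omitted from the request: don't touch Spoolman
    # "" means explicit clear -> persist null; any other string sets/updates.
    new_value = None if requested == "" else requested
    if new_value != stored:
        try:
            patch_filament(color_name=new_value)
        except Exception:
            # Logged, not raised: the spool still links to the matched
            # filament; only the colour-name update is dropped.
            logger.warning("color_name patch failed; keeping filament match", exc_info=True)
```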
  • Deleting an SSO user left orphan OIDC/MFA/camera-token rows on SQLite — blocked re-login and leaked auth state (#1285, PR #1295 by netscout2001) — On SQLite (default deployment) the delete_user route left orphan rows in user_oidc_links, user_totp, user_otp_codes, and long_lived_tokens because the project intentionally runs with PRAGMA foreign_keys=OFF, so the ON DELETE CASCADE declared on those tables never fired. Reported symptom: an admin deleted an OIDC-provisioned user, the user tried to re-login via SSO, the OIDC callback found the orphan UserOIDCLink pointing at the (now missing) user, failed to resolve it, and redirected to account_inactive instead of triggering auto_create_users. The same root cause was leaking MFA secrets (user_totp), pending email OTP codes (user_otp_codes), and per-user camera-stream tokens (long_lived_tokens: verify() would happily match by lookup_prefix even after the owning user was gone). PostgreSQL deployments were unaffected — cascade was firing there. Fix: mirrors the existing APIKey cleanup pattern in delete_user (introduced in PR #1182). backend/app/api/routes/users.py:delete_user now explicitly deletes UserOIDCLink, UserTOTP, UserOTPCode, and LongLivedToken rows owned by the user; also folds in PrintBatch.created_by_id cleanup (same ondelete=SET NULL SQLite-FK-off root cause, the SET NULL block at users.py:393-407 was missing it). backend/app/core/database.py:run_migrations gains an idempotent startup orphan-cleanup that sweeps the four auth tables (DELETE FROM <table> WHERE user_id NOT IN (SELECT id FROM users)), wrapped in begin_nested(), logged at INFO only when rows actually drop — so installations carrying orphans from before the fix are healed automatically without manual DB intervention. No-op on Postgres (cascade already fired) and idempotent on SQLite (second run finds nothing). backend/app/api/routes/mfa.py:list_oidc_links returns "<deleted>" for provider_name when link.provider is null instead of raising AttributeError — covers the symmetric edge case where a UserOIDCLink could reference an orphaned provider. Tests: 14 new/extended. test_users_auth_cleanup.py (new): 5 tests verify delete_user removes OIDC/TOTP/OTP/long-lived-token rows individually + combined-cleanup atomically. test_oidc_relogin.py (new): full end-to-end test reproducing the #1285 symptom — mocked IdP, first OIDC login, admin delete, second OIDC login proves auto_create_users fires again (and pinned the regression boundary by confirming the test fails without the fix). test_orphan_auth_cleanup_migration.py (new): 7 tests for per-table cleanup across all four auth tables, idempotency, no-op on fresh install, and survival of rows belonging to real users. test_mfa_api.py adds TestListOidcLinksDefensiveProviderNull for the null-check. test_auth_api.py::test_delete_user extended to assert all five auth-table side effects (UserOIDCLink, UserTOTP, UserOTPCode, APIKey, LongLivedToken). All 13 PR-added tests + 194 tests in extended files pass; ruff clean. Reported and patched by netscout2001.
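A sketch of the idempotent startup sweep, assuming a synchronous SQLAlchemy session and omitting the begin_nested() wrapper mentioned above; the table names come from the entry, the function shape is illustrative:

```python
import logging

from sqlalchemy import text

logger = logging.getLogger(__name__)

# Constant allow-list, so interpolating the table name into SQL is safe here.
ORPHAN_AUTH_TABLES = ("user_oidc_links", "user_totp", "user_otp_codes", "long_lived_tokens")

def cleanup_orphan_auth_rows(session) -> None:
    for table in ORPHAN_AUTH_TABLES:
        result = session.execute(
            text(f"DELETE FROM {table} WHERE user_id NOT IN (SELECT id FROM users)")
        )
        if result.rowcount:  # log at INFO only when rows actually drop
            logger.info("Removed %d orphan rows from %s", result.rowcount, table)
    session.commit()  # no-op on Postgres (cascade already fired); second run finds nothing
```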
  • Slicer bundle import 400/502/503 errors now land in the log so support bundles tell us why (#1312, reported by hasmar04) — Reporter hit 400 Bad Request from POST /api/v1/slicer/bundles when uploading a Bambu Studio Printer Preset Bundle (.bbscfg); a second contributor had reported the same shape the day before. Same bundle file uploaded fine on Martin's dev machine, which strongly points at sidecar-side differences (image version, write permissions on DATA_PATH/bundles, TrueNAS Docker volume perms, etc.) — but triage was blocked because the sidecar's actual reject reason only made it as far as the FE toast. Bambuddy logged just the uvicorn-access line (POST /api/v1/slicer/bundles HTTP/1.1 400), with no detail in the support bundle. The route at backend/app/api/routes/slicer_presets.py:import_slicer_bundle now emits a logger.warning for each of the three failure shapes: 400 (SlicerInputError) — sidecar's reject string is logged alongside the filename and byte count, so we can see "bundle rejected because manifest.json is missing" in the next support bundle without asking the reporter to copy the toast text. 503 (SlicerApiUnavailableError) — logs the configured sidecar URL plus the exception detail (separates "URL wrong" from "sidecar offline"). 502 (SlicerApiError) — logs filename + byte count + error string, useful when the sidecar's DATA_PATH/bundles write fails (the typical 5xx cause on this path). The 400 case is WARNING rather than INFO deliberately — it's an unexpected end-user-visible failure, not a routine event. Existing test_import_bundle_sidecar_400_passes_through now also asserts the reject reason AND the filename appear in caplog, so the support-bundle-includes-the-diagnostic contract is pinned. Doesn't fix #1312's actual root cause (sidecar-side, still under investigation with reporter) — but the next reporter we get on this code path will produce a bundle that contains the answer.
  • Restarting Bambuddy mid-print triggered plate-check pause + duplicate archive (#1304, reported by kleinwareio) — When a P1S print was in progress and the user updated the Bambuddy container (latest → daily in the report, but the same path fires on any restart), Bambuddy paused the live print with an "Object detected on build plate" warning AND re-archived the in-progress file as a duplicate. Root cause: the print-start detector at backend/app/services/bambu_mqtt.py:2780 gated on self._previous_gcode_state != "RUNNING", which is true whether we just saw IDLE→RUNNING (a real print start) OR we just constructed a fresh BambuMQTTClient and _previous_gcode_state is still its initial None (catch-up push from a printer already running). The fresh-client case fired on_print_start, which downstream ran the plate-detection-and-pause flow at main.py AND the FTP-download-and-archive flow — exactly the two symptoms in the bug report. Fix: added self._previous_gcode_state is not None to the is_new_print guard, so the first push from the printer in a new process lifetime never counts as a state transition into RUNNING. _was_running still flips to True via the unconditional "Track RUNNING state" block at bambu_mqtt.py:2795, so print-completion detection keeps working — only the start callback is suppressed. Three existing tests that asserted on the old (buggy) behavior were updated to seed _previous_gcode_state = "IDLE" first, matching the realistic lifecycle of a print actually starting (Bambuddy has been observing IDLE/FINISH before RUNNING); they now exercise the correct path. New regression test test_first_running_push_after_bambuddy_restart_does_not_fire_print_start pins the contract for the reporter's exact scenario — and asserts that _was_running still becomes True so completion still fires when the print ends. The is_file_change branch was unaffected (it already required _previous_gcode_file is not None, so restart-catch-up never reached it anyway).
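The tightened guard, sketched as a standalone predicate (the real check is an inline condition on the client object):

```python
def is_new_print(previous_gcode_state: str | None, gcode_state: str) -> bool:
    """The first push after client construction has previous == None (catch-up
    from a printer that is already RUNNING) and must not count as a start."""
    return (
        gcode_state == "RUNNING"
        and previous_gcode_state is not None   # suppress restart catch-up
        and previous_gcode_state != "RUNNING"  # real IDLE/FINISH -> RUNNING edge
    )

assert not is_new_print(None, "RUNNING")      # restart mid-print: no callback
assert is_new_print("IDLE", "RUNNING")        # real print start
assert not is_new_print("RUNNING", "RUNNING") # steady state
```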
  • Create User form rejected weak passwords with an opaque "HTTP 422" toast (#1303, reported by TrickShotMLG02) — Three independent UX gaps stacked on top of each other. (1) Discoverability: the Create User and Edit User modals showed no hint about the backend's password complexity requirements (min 8 chars + uppercase + lowercase + digit + special character; enforced in backend/app/schemas/auth.py:_validate_password_complexity). Reporter typed an 8-character all-digits password and had no way to know why it failed. (2) Validation mismatch: the frontend's pre-submit check at SettingsPage.tsx was only password.length < 6, accepting passwords the backend would reject — every weak password got bounced after the round-trip instead of getting blocked locally. (3) Error display fragility: when the backend returned a 422 with a Pydantic detail array, the API client's error parser at frontend/src/api/client.ts:107 could fall through to the bare HTTP ${status} fallback if the mapped/filtered detail array ended up empty after stripping the "Value error, " prefix — masking the real reason as just "HTTP 422". Fixes: (1) added a passwordRequirements helper line under both password inputs in Create User / Edit User; (2) extracted checkPasswordComplexity into frontend/src/utils/password.ts, called from handleCreateUser and handleUpdateUser before the API request — it returns the same FIRST failing rule the backend's validator would have flagged (uppercase before lowercase before digit before special, matching _validate_password_complexity's order — fixing one rule shouldn't immediately trip a different message), and the submit button is disabled until all rules pass; (3) the API client now falls back to JSON.stringify(detail) when the mapped array is empty, so a malformed but non-empty 422 detail surfaces SOMETHING informative instead of a bare status code. New translation keys settings.passwordRequirements, settings.toast.passwordNeeds{Uppercase, Lowercase, Digit, Special}, plus the existing passwordTooShort text updated from "6 characters" to "8 characters". English + German fully translated (German reporter's locale); FR/IT/PT-BR translated using straightforward equivalents; JA/ZH-CN/ZH-TW seeded with English for the new complexity messages (existing project flow for new strings). 7 new unit tests in frontend/src/__tests__/utils/password.test.ts pin the validator's contract, including the reporter's exact "12345678" input which now produces a local "Password must contain at least one uppercase letter" toast instead of a 422 round-trip.
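A sketch of the shared complexity contract, mirroring the rule order described above (length, then uppercase, lowercase, digit, special); the real validators live in backend/app/schemas/auth.py and frontend utils/password.ts, and the exact rule definitions here are illustrative:

```python
import re

def check_password_complexity(pw: str) -> str | None:
    """Return the FIRST failing rule's message, or None when all rules pass,
    so fixing one rule doesn't immediately trip a different message."""
    if len(pw) < 8:
        return "Password must be at least 8 characters"
    if not re.search(r"[A-Z]", pw):
        return "Password must contain at least one uppercase letter"
    if not re.search(r"[a-z]", pw):
        return "Password must contain at least one lowercase letter"
    if not re.search(r"\d", pw):
        return "Password must contain at least one digit"
    if not re.search(r"[^A-Za-z0-9]", pw):
        return "Password must contain at least one special character"
    return None

# The reporter's exact input now fails locally instead of via a 422 round-trip:
assert check_password_complexity("12345678") == "Password must contain at least one uppercase letter"
```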
  • External NAS scan hung forever and never committed subdirectories (#1299, reported by joeferrante) — Linking an external mount with ~1200 subdirectories caused the "Link External Folder" modal to spin until the FE gave up, after which the mount appeared in the sidebar but with no subdirectories, and subsequent scans had no effect either. The reporter's support bundle pinpointed two compounding problems. (1) TypeError: unsupported operand type(s) for /: 'str' and 'str' on every STL — 1,606 instances in the log. generate_stl_thumbnail at stl_thumbnail.py:119 does thumbnails_dir / thumb_filename, which requires a Path, but the external-scan call site at library.py:1256 passed both arguments as str (generate_stl_thumbnail(str(filepath), str(thumb_dir))). Every STL crashed inside the try/except and got logged at WARNING level — visible spam but more importantly wasted work (trimesh.load() and matplotlib setup ran before the failing division). Fix: defensive Path() coerce at the top of generate_stl_thumbnail so the function works regardless of how callers pass args. Regression test test_string_arguments_accepted_without_typeerror pins the contract. (2) Scan ran STL thumbnail generation synchronously inside the HTTP request — even after fix (1), trimesh.load() + matplotlib render is 1–5 seconds per STL; on a NAS with thousands of STLs that's hours of work blocking the modal. Frontend would time out, user would refresh, the HTTP request would be cancelled, db.commit() at library.py:1331 would never run, and no folder/file rows would be committed — which is exactly why "subsequent scans have no effect" (each retry started from scratch and hit the same wall). Fix: scan now defers STL thumbnails to a background task. After db.commit(), the route spawns asyncio.create_task(_backfill_external_stl_thumbnails(folder_ids)) with the full set of folder IDs from folder_cache.values() (covers both pre-existing subfolders AND the ones created during this scan — all_folder_ids is snapshotted before the walk and would have missed the new ones), then returns immediately. The background task opens its own async_session, walks every STL file with thumbnail_path IS NULL in the linked folder tree, generates each thumbnail, and commits per-file so a server restart mid-run only loses the in-flight thumbnail. Survives FE refresh because the task lives in the FastAPI event loop, not the request scope. The reporter's smaller mount (/mnt/NAS_3d_files/3mf_Files, 4 subdirectories) used to work because it completed inside the FE timeout window — with this fix, the 1200-subdir parent mount completes equally fast and thumbnails fill in over the following minutes. Auto-scan after create unchanged: FileManagerPage.tsx:1147-1151 still calls scanExternalFolder immediately after createExternalFolder, which is correct UX — what changed is that the scan response now arrives in seconds instead of timing out.
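A sketch of the deferred-thumbnail pattern for fix (2); async_session, fetch_stls_missing_thumbnails, THUMB_DIR, and the stl record shape are stand-ins, not the actual library.py code:

```python
import asyncio

# Fix (1), in one line: generate_stl_thumbnail now starts with
#   thumbnails_dir = Path(thumbnails_dir)
# so str or Path callers both work.

async def _backfill_external_stl_thumbnails(folder_ids: list[int]) -> None:
    # Own session: the task lives in the FastAPI event loop, not the
    # (cancellable) request scope, so an FE refresh can't kill it.
    async with async_session() as db:
        for stl in await fetch_stls_missing_thumbnails(db, folder_ids):
            # trimesh + matplotlib are blocking (1-5 s per STL); keep them
            # off the event loop.
            thumb = await asyncio.to_thread(generate_stl_thumbnail, stl.path, THUMB_DIR)
            stl.thumbnail_path = str(thumb)
            await db.commit()  # per-file: a restart loses only the in-flight one

# In the scan route, after db.commit() succeeds:
#   asyncio.create_task(_backfill_external_stl_thumbnails(list(folder_cache.values())))
#   return response  # returns in seconds; thumbnails fill in afterwards
```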
  • MakerWorld "Open Cloud settings" link landed on the wrong page (#1300) — On the MakerWorld page, the "Open Cloud settings" hyperlink shown in the sign-in-required banner (when no Bambu Cloud token is stored) pointed at /settings?tab=cloud. The Settings page has no cloud tab (its tabs are general/plugs/notifications/queue/filament/network/apikeys/virtual-printer/spoolbuddy/failure-detection/users/backup), so the URL-param check at SettingsPage.tsx:179 (validTabs.includes(tabParam) ? tabParam : 'general') silently fell back to the General tab. The Bambu Cloud login UI actually lives on the Profiles page (/profiles), which already defaults its sub-tab to cloud — the same destination the existing backup.cloudLoginRequired i18n string ("Sign in under Profiles → Cloud Profiles…") documents. One-line fix in MakerworldPage.tsx:438: to="/settings?tab=cloud" → to="/profiles". The Profiles page's useState<ProfileTab>('cloud') (line 2822) means no query param is needed — landing on /profiles opens the Cloud sub-tab directly.
  • External-spool prints no longer credit usage to AMS slot 0's Spoolman spool (#1276, reported and diagnosed by ojimpo — regression of #853) — On a single-filament external-spool print (TPU loaded in vir_slot id=254 on the reporter's H2S + AMS 2 Pro), _resolve_global_tray_id in spoolman_tracking.py was crediting the usage to whatever Spoolman spool happened to be linked to AMS slot 0 — a completely unrelated material in the reporter's case. ~48.94 g of TPU was credited to a PLA spool across 4 prints before they noticed. Root cause: BambuStudio encodes virtual tray IDs (254/255) as -1 in the flat ams_mapping array it sends to the printer (a convention already documented in bambu_mqtt.py:start_print()), but the spoolman tracking helper was treating -1 as "unmapped → use position-based default" and the default mapped slot_id=1 → global_tray_id=0. When slot_to_tray[slot_id-1] == -1 and ams_trays contains an external slot (254 or 255), the helper now returns the external tray ID directly, matching the convention start_print() uses on the other side of the pipeline. Prefers 254 over 255 (consistent with single-nozzle tray_now reporting and the vir_slot id=255→254 remap in bambu_mqtt.py:864). Legacy behavior preserved when ams_trays is empty or contains no external slot (callers that don't pass ams_trays keep the position-based fallback). Two regression tests cover the reporter's exact scenario (ams_trays={0,1,2,3,254}, slot_to_tray=[-1] → 254) plus the H2D-deputy case and the fall-through-when-no-external case. Root cause investigation and patch by ojimpo.
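A simplified sketch of the -1 → external-tray translation; the signature and fallback shape are illustrative relative to the real _resolve_global_tray_id:

```python
EXTERNAL_TRAYS = (254, 255)  # prefer 254, matching single-nozzle tray_now reporting

def resolve_global_tray_id(slot_id: int, slot_to_tray: list[int], ams_trays: set[int]) -> int:
    mapped = slot_to_tray[slot_id - 1] if slot_to_tray else -1
    if mapped == -1:
        # BambuStudio encodes virtual trays (254/255) as -1 in ams_mapping.
        for external in EXTERNAL_TRAYS:
            if external in ams_trays:
                return external
        return slot_id - 1  # legacy position-based fallback (no external slot known)
    return mapped

assert resolve_global_tray_id(1, [-1], {0, 1, 2, 3, 254}) == 254  # reporter's scenario
assert resolve_global_tray_id(1, [-1], {0, 1, 2, 3}) == 0         # fall-through preserved
```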
  • Virtual-printer queue mode now honors workflow default print options (#1235, reported by jc21, root cause and patch by jc21 in #1277) — Prints sent from Bambu Studio (or any slicer) to a VP in print_queue mode arrived in the queue with bed_levelling, flow_cali, vibration_cali, layer_inspect, and timelapse set to the SQLAlchemy column-level defaults, never the user's workflow preferences. The reporter happened to have every workflow default set to the opposite of the column defaults, so prints appeared to have all five options inverted; every queue item required hand-editing before dispatch. The manual POST /print-queue/ endpoint reads these fields off the request body (the frontend pulls them from settings before submitting), but the VP-FTP-receive path at backend/app/services/virtual_printer/manager.py:_add_to_print_queue constructed PrintQueueItem without touching them at all — SQLAlchemy then filled in bed_levelling=True, flow_cali=False, vibration_cali=True, layer_inspect=False, timelapse=False regardless of what was in the DB. Fix reads default_bed_levelling / default_flow_cali / default_vibration_cali / default_layer_inspect / default_timelapse via the existing get_setting() helper (same pattern already used in the function for virtual_printer_archive_name_source) and passes them explicitly to PrintQueueItem. A small _bool_setting() helper maps None → AppSettings schema default, so a fresh install with no workflow page customization behaves identically to before. Regression tests: test_add_to_print_queue_uses_workflow_defaults_from_settings (verifies all five settings flow through with values opposite to the column defaults, matching the reporter's exact scenario) and test_add_to_print_queue_falls_back_to_schema_defaults_when_unset (verifies the no-DB-row path).
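A sketch of the None → schema-default mapping; get_setting and the truthy-string parsing are stand-ins for the real helpers:

```python
def _bool_setting(db, key: str, schema_default: bool) -> bool:
    raw = get_setting(db, key)  # None when the settings row doesn't exist
    if raw is None:
        return schema_default   # fresh install behaves identically to before
    return str(raw).lower() in ("1", "true", "yes", "on")

# Applied when constructing the queue item, so VP-FTP prints honor the
# workflow defaults instead of the SQLAlchemy column defaults:
#   PrintQueueItem(..., bed_levelling=_bool_setting(db, "default_bed_levelling", True), ...)
```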
  • Linking a Spoolman spool to an AMS-HT slot no longer fails with a CHECK constraint error (#1274, reported by guillaume.houba) — On H2C / H2D, AMS-HT units report ams_id 128+ (one ams_id per unit, single tray). The spoolman_slot_assignments table's ck_ams_id_range constraint only allowed 0-7 (standard AMS) or 255 (external), so the upsert on POST /spoolman/inventory/slot-assignments blew up with IntegrityError: CHECK constraint failed: ck_ams_id_range and the user had no way to link any spool to an AMS-HT slot. Widened the constraint formula to (ams_id >= 0 AND ams_id <= 7) OR (ams_id >= 128 AND ams_id <= 191) OR ams_id = 255 — matches the value range the internal spool_assignment table already accepts and leaves room for up to 64 AMS-HT units (the existing bambu_mqtt/usage-tracker code uses the same 128-based addressing). Updated in the ORM model (models/spoolman_slot_assignment.py) and both the SQLite/Postgres CREATE TABLE DDL in core/database.py. New idempotent migration _migrate_widen_spoolman_slot_ams_id_range: Postgres path runs DROP CONSTRAINT IF EXISTS + ADD CONSTRAINT (no data risk — the new formula is strictly wider than the old); SQLite path detects the stale formula in sqlite_master, table-rebuilds via the standard _v2 rename pattern used elsewhere in this file (_migrate_update_auto_link_constraint at database.py:418), and leaves pre-constraint legacy tables untouched. Tests: test_ams_id_check_admits_ams_ht_range (ORM + DDL formula) and test_assign_accepts_ams_ht_id (end-to-end POST /slot-assignments with ams_id=128).
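The widened formula, sketched as it might appear on the ORM model (the exact wiring in models/spoolman_slot_assignment.py may differ, but the formula is the one quoted above):

```python
from sqlalchemy import CheckConstraint

ck_ams_id_range = CheckConstraint(
    "(ams_id >= 0 AND ams_id <= 7) "         # standard AMS units
    "OR (ams_id >= 128 AND ams_id <= 191) "  # AMS-HT: one ams_id per unit, room for 64
    "OR ams_id = 255",                       # external spool
    name="ck_ams_id_range",
)
```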
  • X2D live camera stream no longer cut by Obico polling / snapshot capture (#1271, reported by clabeuhtegrite) — The MJPEG fan-out broadcaster from #1089 lets multiple browser viewers share one upstream RTSP socket per printer, but internal callers (Obico AI polling at the user's configured obico_poll_interval, and the manual /camera/snapshot endpoint) still opened their own fresh RTSP connections. X1C / H2D / P2S firmware tolerates brief concurrent camera sockets so the gap was invisible there. X2D firmware 01.01.00.00 (and likely future firmwares) enforces strict single-camera-connection more aggressively: every Obico poll (default every 5 s) kicked the live stream, the broadcaster paid the multi-second RTSP handshake to reconnect, and the user saw the stream cut "all the time." New helper try_get_active_buffered_frame(printer_id) at api/routes/camera.py:74 returns the broadcaster's last buffered frame (always <1 s old while any viewer is connected) and None when no viewer is active. Obico's _capture_frame and the /camera/snapshot endpoint check it first and only fall through to a fresh socket when no stream is running — preserving today's behavior when nobody is watching. plate_detection and layer_timelapse deliberately not converted: plate-detection needs guaranteed-fresh frames post-print (false-positive risk if the user already grabbed the print in the same second), and layer-timelapse is for external cameras only. Regression tests: test_camera_snapshot_reuses_buffered_frame_when_stream_active and two TestCaptureFrameSharesBroadcasterUpstream Obico tests.
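A sketch of the check-buffer-first pattern; try_get_active_buffered_frame is named in the entry, while the capture flow around it is illustrative:

```python
async def capture_snapshot(printer_id: int) -> bytes:
    frame = try_get_active_buffered_frame(printer_id)
    if frame is not None:
        # <1 s old while any viewer is connected; no extra RTSP socket,
        # so strict single-connection firmwares keep the live stream.
        return frame
    # No live viewer: open a fresh connection, preserving today's behavior
    # when nobody is watching.
    return await open_fresh_rtsp_snapshot(printer_id)
```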
  • Usage tracker: spool swaps in UNUSED slots mid-print no longer charge the old spool (#1269, reported by maugsburger) — Path 2 of the usage tracker (AMS remain% delta fallback) iterated every AMS tray that had a remain% delta, even slots the print never touched. When a user swapped spools in an unrelated slot during a print, the new spool reports remain=0 (no RFID tag yet) while the snapshot from print-start was 100%, so the fallback charged the originally-assigned spool the full 1000 g. Reporter's case: single-filament print on AMS0-T3 (ams_mapping=[3]), swapped a spool in T1 and another in T2 to refill while the print continued — wound up with Spool 27 consumed 1000.0g (100%) on printer 1 AMS0-T1 and Spool 24 consumed 170.0g (17%) on printer 1 AMS0-T2, neither of which were ever in the print. Fix: the fallback now builds print_used_keys from session.ams_mapping, state.tray_change_log, and session.tray_now_at_start (the three runtime signals telling us which trays were actually part of the print), converts each global tray ID to (ams_id, tray_id) using the standard convention (254/255 → external, ≥128 → AMS-HT, otherwise id // 4, id % 4), and skips fallback for trays whose key is not in that set. When all three signals are empty (legacy edge case: no slicer push, no MQTT tray-change events, no tray_now at start) the legacy "scan every tray" behavior is preserved so we don't regress prints with no metadata. Regression test in test_usage_tracker.py::test_skips_fallback_for_trays_outside_print_mapping reproduces the reporter's exact scenario.
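A sketch of the global-tray-ID → (ams_id, tray_id) convention and the key-set build; the exact key shape and signal plumbing are simplified stand-ins:

```python
def tray_key(global_id: int) -> tuple[int, int]:
    """Key shape is illustrative; the addressing convention is what matters."""
    if global_id in (254, 255):
        return (255, global_id)                 # external spool
    if global_id >= 128:
        return (global_id, 0)                   # AMS-HT: one tray per unit
    return (global_id // 4, global_id % 4)      # standard AMS: 4 trays per unit

def build_print_used_keys(ams_mapping, tray_change_log, tray_now_at_start):
    signals = list(ams_mapping) + list(tray_change_log)
    if tray_now_at_start is not None:
        signals.append(tray_now_at_start)
    # Empty set -> the caller preserves the legacy "scan every tray" fallback,
    # so prints with no metadata don't regress.
    return {tray_key(g) for g in signals}

# In the remain%-delta fallback loop:
#   if print_used_keys and (ams_id, tray_id) not in print_used_keys: continue
```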
  • Printer card: smart-plug live wattage now rounded to whole watts (#1266, reported by Carter3DP) — The printer card's smart-plug status badge rendered plugStatus.energy.power raw, so plugs that report fractional watts (Kauf PLF12 via ESPHome / Home Assistant in the reporter's case, but any MQTT plug pushing a float can hit this) showed values like 14.123456789012 W and overflowed the card width. SmartPlugCard and SwitchbarPopover already wrapped the same field in Math.round(); only the printer-card badge was missing the round. Single-line fix at frontend/src/pages/PrintersPage.tsx:4569.

Added

  • Build-plate icon on archive cards + uniform printer/model line (#1253, reported by tonygauderman) — Archive cards now show an OrcaSlicer-style bed icon in the printer/model row indicating which build plate the print was sliced for (Cool / Cool SuperTack / Engineering / High Temp / Textured PEI / Smooth PEI), with the full plate name in the hover tooltip. Closes the gap where users had to remember which plate matched a re-print or open the source 3MF in a slicer just to read the bed setting. Card row also unified: archives with a real Bambuddy-printer association used to render as H2D-1 GCODE … while slicer-only uploads rendered as Sliced for X1C GCODE … — same line, two different shapes. Dropped the Sliced for prefix so both render as a uniform <name-or-model> [bed-icon] GCODE <hash> row that scans the same regardless of provenance. Backend: new bed_type column on print_archives (idempotent ALTER TABLE migration; SQLite + Postgres safe), populated from curr_bed_type in Metadata/slice_info.config (per-plate metadata, the authoritative source — that's the bed type that actually got sent to the printer for the exported plate) with a fallback to Metadata/project_settings.config's top-level curr_bed_type for older 3MF shapes. Wired through both code paths that produce archive responses: archive_to_response() (the hand-rolled dict converter at archives.py:97 — easy to miss, the schema-only change is silently dropped by Pydantic since the route bypasses from_attributes) and the /rescan endpoint, so old archives can be re-parsed by the user via the existing per-archive Rescan button. Newly-ingested archives get the value automatically. Backfill script: scripts/backfill_archive_bed_type.py (with --dry-run) re-opens every NULL archive's 3MF on disk and populates the column — opt-in for users who want their entire history covered without waiting for natural turnover. Auto-loads .env from project root before importing backend modules (since core/config.py:52 reads DATABASE_URL from os.environ at import time, not from pydantic-settings at Settings() time), prints the resolved DB URL with credentials redacted on every run so operators can confirm they're hitting the intended database (Postgres / SQLite — Bambuddy supports both per #1219's DATABASE_URL pathway), and calls init_db() itself before querying so the migration applies even if the script is run against a database the backend hasn't touched yet. Frontend: 6 OrcaSlicer-style PNGs ship in frontend/public/img/bed/ (under /img/ because that path was already statically mounted at main.py:5244; a top-level /bed-icons/ path was tried first but hit the SPA catch-all and returned index.html as text/html, so the browser rendered nothing). New utils/bedType.ts maps slicer strings (case-insensitive) to icon + human-readable label; covers Bambu Studio and OrcaSlicer's diverging spellings for the same physical plate (e.g. Cool Plate → PC Plate, Cool Plate (SuperTack) → Supertack Plate → Bambu Cool Plate SuperTack). Renders on both card-grid view and list view in ArchivesPage.tsx. Unmapped or NULL bed_type simply omits the icon, so cards stay clean for archives created before this change. Note on icon mapping: bed_pei.png → Textured PEI, bed_pei_cool.png → Smooth PEI is a best-guess from the OrcaSlicer asset names — swap the two paths in bedType.ts if a future user reports the icons reversed for their plate.
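A sketch of the lookup order (per-plate slice_info.config first, project_settings.config fallback); the key names come from the entry, but the parsing details, in particular the XML attribute shape assumed by the regex, are assumptions:

```python
import json
import re
import zipfile

def read_bed_type(path: str) -> str | None:
    """Per-plate slice_info.config wins; older 3MF shapes fall back to the
    top-level curr_bed_type in project_settings.config."""
    with zipfile.ZipFile(path) as zf:
        names = set(zf.namelist())
        if "Metadata/slice_info.config" in names:
            text = zf.read("Metadata/slice_info.config").decode("utf-8", "replace")
            # Assumed metadata shape: key="curr_bed_type" value="...".
            m = re.search(r'key="curr_bed_type"\s+value="([^"]+)"', text)
            if m:
                return m.group(1)  # authoritative: what was sent to the printer
        if "Metadata/project_settings.config" in names:
            settings = json.loads(zf.read("Metadata/project_settings.config"))
            return settings.get("curr_bed_type")
    return None  # unmapped/NULL: the card simply omits the icon
```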
  • Spool labels: new 40×30 mm template, hex colour code, bolder brand line (#809 follow-up, requested by oliboehm) — Three small enhancements to the spool-label printer rolled into one change. (1) New box_40x30 template — 40×30 mm single label, common DK/Brother roll size. Added to _SINGLE_LABEL_SIZES_MM in backend/app/services/label_renderer.py and to the request body's Literal[...] enum in backend/app/api/routes/labels.py; height is ≥ 20 mm so it routes through the existing roomy layout (swatch + QR + full text column). (2) Colour hex code on every label — new _hex_code_label() helper formats data.rgba as #RRGGBB (alpha-stripped, uppercased to match the inventory UI's colour-picker convention) and returns "" for missing/malformed input so the caller skips drawing instead of throwing. Rendered as a small line under the material/subtype line in the roomy layout, and as a third line above the spool ID in the tight (AMS) layout — useful when several near-identical material/colour spools sit next to each other in the AMS or on a shelf. (3) Brand line bigger + bold — the brand on every label now renders in Helvetica-Bold instead of Helvetica regular, with size bumped 5.5pt → 6.5pt on the tight layout and 7pt → 8pt on the roomy layout, so it's the most legible non-ID field at arm's length. Wiring: SpoolLabelTemplate union in frontend/src/api/client.ts extended with 'box_40x30'; LabelTemplatePickerModal gets a new TEMPLATE_OPTIONS entry for it; inventory.labels.templates.box40x30.{label,hint} keys added across all 8 locales (en + de fully translated, fr/it/ja/pt-BR/zh-CN/zh-TW translated to native, with the existing per-key fallback in the modal as a safety net). The 5-template grid still wraps to 2 columns on small viewports per #1230's fix; modal regression test was widened from 4 to 5 template buttons. Tests: ALL_TEMPLATES parametrize tuple in test_label_renderer.py extended with box_40x30 so all 7 generic invariants (PDF header, empty-input, multi-colour, missing-fields, malformed-rgba, long strings, sheet pagination) cover the new template; new test_hex_color_code_rendered_when_rgba_set (asserts #F5E6D3 appears in the uncompressed PDF for both 40×30 and 62×29), test_hex_color_code_skipped_when_rgba_invalid (regex pin: no #RRGGBB shape on the label when rgba is malformed, except the spool ID's #42), and test_brand_rendered_in_bold_per_809_followup (asserts Helvetica-Bold font reference is in the PDF — caught a regression if the brand line ever reverts to regular weight). All 33 backend tests + 15 frontend modal tests pass; ruff clean.
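A sketch of the rgba → "#RRGGBB" contract; the real _hex_code_label in label_renderer.py may differ in details, but the skip-instead-of-throw behavior is the one described above:

```python
def _hex_code_label(rgba: str | None) -> str:
    """Return "#RRGGBB" (alpha stripped, uppercased) or "" so the caller
    skips drawing the line instead of throwing on bad input."""
    value = (rgba or "").strip().lstrip("#")
    if len(value) not in (6, 8):    # RRGGBB or RRGGBBAA
        return ""
    try:
        int(value, 16)              # reject non-hex input
    except ValueError:
        return ""
    return "#" + value[:6].upper()

assert _hex_code_label("f5e6d3ff") == "#F5E6D3"  # alpha stripped, uppercased
assert _hex_code_label("garbage") == ""          # malformed: caller skips drawing
```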
  • Copy spool — duplicate any spool's settings into a fresh inventory row in two clicks (#1234, PR #1246 by MiguelAngelLV) — Adds a copy button (Copy icon) next to the existing edit button on every spool in the inventory page across all three views (table row, card, grouped table inner row). Clicking it opens the existing SpoolFormModal pre-filled with every field from the source spool — material, brand, color, slicer preset, label/core/cost, K-profiles, all of it — except weight_used which is reset to 0 (since the new spool starts full) and the RFID identity fields (tag_uid, tray_uuid, tag_type, data_origin) which aren't part of the form payload anyway, so the new spool is its own physical roll. Save calls api.createSpool (or api.createSpoolmanInventorySpool in Spoolman mode — both inherit the dispatch routing for free). Closes the long-running gap where users with many near-identical spools (e.g. five 1 kg PETG-CF rolls bought in a single order) had to re-enter every field from scratch on each one. Implementation shape: SpoolFormModalProps.mode: 'create' | 'edit' | 'copy' (exported as SpoolFormMode) replaces the previous isEditing = !!spool heuristic — every existing call site in InventoryPage.tsx was updated to pass the explicit mode, and the modal's title / submit-button label / weight-reset gate / submit-route branching all key on mode directly. The onCopy callback is optional on SpoolCard, SpoolTableRow, and SpoolTableGroup (matches the existing onPrintLabel? pattern), so the button is conditionally rendered and other consumers of those subcomponents don't get a copy affordance forced on them. Card-view and table-row buttons stop click propagation so clicking copy doesn't also fire the parent row's edit handler. Quick Add interaction: the Quick Add toggle is gated mode === 'create' (was !isEditing), so it stays out of copy mode — otherwise a user could enable Quick Add and bump quantity to N under the singular "Copy Spool" title and silently bulk-create N copies via bulkCreateMutation. i18n: new inventory.copySpool key across all 8 locales (en + de translated, fr/it/ja/pt-BR/zh-CN/zh-TW seeded with English fallback per project flow). Tests: 3 new in SpoolFormModal.test.tsx (SpoolFormModal copy mode describe block — title shows "Copy Spool", save calls createSpool not updateSpool, weight_used reset to 0 in the create payload when copying a spool with non-zero usage), 2 new in InventoryPageCopyButton.test.tsx (table-row copy button click → "Copy Spool" heading, cards-view copy button click → same heading after switching view modes) — guards against the three call sites drifting apart. Existing SpoolFormBulk.test.tsx and SpoolFormModal.test.tsx renders that omitted the mode prop were updated with the explicit mode="create" so the tightened Quick Add gate doesn't hide the toggle from them. Both InventoryPageCopyButton.test.tsx and InventoryPageDeepLink.test.tsx gained MSW handlers for the modal's open-time fetches (/api/v1/cloud/status, /api/v1/cloud/local-presets, /api/v1/cloud/builtin-filaments, /api/v1/inventory/color-catalog, /api/v1/inventory/spool-catalog, /api/v1/printers/) — without them MSW passes through to the real network, ECONNREFUSEs, and the rejected fetch resolves after the test environment is torn down, surfacing as a flaky "window is not defined" unhandled rejection in the modal's setLoadingCloudPresets(false) finally block (pre-existing flake hit ~1 in 3 full-suite runs at PR head).

Fixed

  • .bbscfg Printer Preset Bundle import was broken for every user since launch — sidecar compose file pointed at the wrong branch (#1312, reported by hasmar04, confirmed by netscout2001) — slicer-api/docker-compose.yml's build.context pointed at https://github.com/maziggy/orca-slicer-api.git#bambuddy/profile-resolver, but the POST /profiles/bundle endpoint plus the uploadBundle multer middleware were only ever committed to a sibling branch bambuddy/bundle-import (commit a3172c5, 2026-05-06). Every user who ran the documented docker compose up -d got a sidecar without the bundle endpoint — their POST /profiles/bundle fell through to the generic POST /profiles/:category handler, which either rejected with "Name cannot be empty" (no name form field sent) or "Invalid file type. Only JSON files are allowed." (the JSON multer filter rejecting the .bbscfg). Fix: bambuddy/bundle-import fast-forward-merged into bambuddy/profile-resolver in the orca-slicer-api repo and pushed, so the compose file's existing branch ref now points at the right commit. No Bambuddy code change. Existing users rebuild with cd slicer-api/ && docker compose --profile bambu build --no-cache --pull && docker compose --profile bambu up -d; --pull is the key flag because BuildKit caches the git fetch context separately from layer caches, so --no-cache alone silently reuses the old branch checkout. New users on 0.2.5+ are unaffected. Lesson on diagnosis flow: the wrong root cause was reported twice during triage before the actual branch mismatch was caught — first as "build a week ago, before the bundle endpoint existed" (correct claim for the wrong branch), then as "rebuild with --pull" (still hit the same bug because the compose file pointed at the branch that never got the work). The reporter's third round of logs — the multer "Only JSON files are allowed" error string from upload.js:17, which only matches uploadJson not uploadBundle — was the smoking gun that no amount of rebuilding would help because the wired-up branch genuinely lacked the endpoint.

Changed

  • Support bundle records slicer-API CLI versions; wiki sidecar-update docs hardened (#1312 follow-up) — Triage scaffolding added during investigation of the bundle-import bug above. Useful independent of that fix: the next time a user reports a sidecar-related failure, the support bundle will identify which slicer CLI version is actually running without needing a manual curl /health. Backend: new _fetch_slicer_health(url) helper in backend/app/api/routes/support.py does a 2-second GET on <sidecar>/health, parses the JSON, and walks every non-dataPath key under checks looking for a version field — needed because the wrapper labels both bambu-studio-api and orca-slicer-api as checks.orcaslicer regardless of which CLI is actually bundled (cosmetic wrapper bug, not Bambuddy's). _collect_slicer_api_info now calls it instead of the bare reachability ping and adds two new fields per side to the integrations block: bambu_studio_version, orcaslicer_version. Captures "unknown" verbatim when the wrapper's --help regex didn't match (which is itself diagnostic). Behavior preserved on error paths: empty URL returns None, connection failure returns {reachable: False, version: None}, malformed/non-200 returns {reachable: True, version: None} so the reviewer can separate network failure from misconfiguration. Trailing-slash in the configured URL is stripped before appending /health. Tests: 9 new in TestFetchSlicerHealth; existing TestCollectSlicerApiInfo tests updated to patch _fetch_slicer_health and assert the new _version fields. All 62 helper tests pass; ruff clean. Docs: bambuddy-wiki/docs/features/slicer-api.md got four additions. (1) Quick Start gains a warning callout that the Compose file builds from a branch tip and a plain docker compose up -d will keep using the originally-built image. (2) The Updating section now recommends docker compose --profile bambu build --no-cache --pull (both flags) and explains why both matter. (3) New troubleshooting entry for the "Name cannot be empty" / "Only JSON files are allowed" .bbscfg import error. (4) New troubleshooting entry for the orphan-container conflict (container name "/bambu-studio-api" is already in use) that hits users whose existing containers were built from an older compose file with un-prefixed image tags. The pre-existing /health version: "unknown" entry also got a note clarifying that the wrapper mislabels the checks field as orcaslicer for both sidecars — both are cosmetic, not stale-image indicators.
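A sketch of the health walk under the error-path contract described above, assuming httpx; the response shapes and field names (checks, version) mirror the entry but are illustrative:

```python
import httpx

async def _fetch_slicer_health(url: str) -> dict | None:
    if not url:
        return None                                    # URL empty: nothing to check
    base = url.rstrip("/")                             # tolerate a trailing slash
    try:
        async with httpx.AsyncClient(timeout=2.0) as client:
            resp = await client.get(f"{base}/health")
    except httpx.HTTPError:
        return {"reachable": False, "version": None}   # network failure / service down
    if resp.status_code != 200:
        return {"reachable": True, "version": None}    # up, but misbehaving
    try:
        checks = resp.json().get("checks", {})
    except ValueError:
        return {"reachable": True, "version": None}    # malformed body
    for key, check in checks.items():
        # The wrapper labels both CLIs "orcaslicer", so walk every
        # non-dataPath key instead of trusting the check's name.
        if key != "dataPath" and isinstance(check, dict) and "version" in check:
            return {"reachable": True, "version": check["version"]}
    return {"reachable": True, "version": None}
```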

Fixed

  • LDAP settings: "Advanced" collapsible section header was always rendering in English regardless of UI language (#1297, reported by Fuechslein) — LDAPSettings.tsx:352 calls t('settings.ldap.advanced') || 'Advanced', but the translation key was never defined in any locale file. The || 'Advanced' fallback kicked in and the header rendered as English in every language. Added settings.ldap.advanced to all 8 locales: Advanced (en), Erweitert (de), Avancé (fr), Avanzate (it), 詳細設定 (ja), Avançado (pt-BR), 高级 (zh-CN), 進階 (zh-TW). No component change needed — the fallback now never triggers because the key resolves properly. i18n parity check holds at 4754 leaves across all locales.

Changelog truncated — see the full CHANGELOG.md for the complete list.
